SSS’14 Conference
October 8, 2014, 5:53 pm
Filed under: Uncategorized

[The following blog report on SSS’14 was written by my student George Saad]


The 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2014) was held in Paderborn, Germany, from September 28 to October 1, 2014. The conference covers the design and development of distributed systems with fault tolerance and self-* properties.

SSS 2014 had seven sessions: Self-Stabilization I/II/III; Dependable Systems; Formal Methods, Safety, and Security; Ad-hoc, Sensor and Mobile Networks and Cyberphysical Systems; and Cloud Computing, P2P, Self-organizing and Autonomous Systems.

On the first day, a tutorial day on Self-Organizing Physical Systems, Marco Dorigo, a professor at the Université Libre de Bruxelles and the University of Paderborn, gave the fourth talk, on self-organizing swarms. Dorigo showed how ant-inspired agents can find the shortest path between a pair of nodes in a network using artificial pheromones. Each ant chooses one path from a set of candidate paths stochastically, with probability depending on the amount of pheromone deposited by the ants that previously traversed each path. Note that other strategies, such as separation, alignment, and cohesion, are also considered for this kind of problem. Interestingly, finding the shortest path in this way can be used to approximate arg min f(x).
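As a toy illustration of the pheromone mechanism described in the talk, the sketch below lets simulated ants pick among candidate paths in proportion to pheromone, which shorter paths accumulate faster. The path lengths, deposit, and evaporation parameters are illustrative assumptions, not values from the talk.

```python
import random

def ant_colony_shortest(path_lengths, n_ants=2000, evaporation=0.05, q=1.0):
    """Return the index of the path the colony converges on."""
    pheromone = [1.0] * len(path_lengths)
    for _ in range(n_ants):
        # Each ant picks a path with probability proportional to pheromone.
        total = sum(pheromone)
        r = random.uniform(0, total)
        acc, choice = 0.0, 0
        for i, p in enumerate(pheromone):
            acc += p
            if r <= acc:
                choice = i
                break
        # Evaporate, then deposit pheromone inversely proportional to length,
        # so shorter paths accumulate pheromone faster (positive feedback).
        pheromone = [(1 - evaporation) * p for p in pheromone]
        pheromone[choice] += q / path_lengths[choice]
    return max(range(len(pheromone)), key=lambda i: pheromone[i])

random.seed(42)
print(ant_colony_shortest([5.0, 3.0, 8.0]))  # typically the shortest path, index 1
```

The convergence is probabilistic: with enough ants the colony almost always locks onto the shortest path, but any single run can go astray, which is part of why the talk also discussed other strategies.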

However, artificial pheromones are not practical in many situations, so goal search and path formation are also studied in their absence. In particular, some robots position themselves as waypoints to form a path to the goal, after which all other robots follow that path.

Homogeneous robots can cooperate to perform tasks that no individual robot can do alone, such as crossing wide gaps and climbing steep hills. Adaptive rotation is one strategy that enables a group of robots to climb steep hills. Moreover, the robots flash lights to synchronize so that they can cooperate properly.

Heterogeneous robots are also considered; they are heterogeneous in the sense that they have different capabilities, and they work together to combine those capabilities and perform harder tasks. For instance, consider three types of robots: eye-robots, arm-robots, and foot-robots. In a popular scenario, they cooperate to find and fetch a book. First, an eye-robot flies around searching for the book. Once the book is located, a set of foot-robots is notified and moves to that location, carrying an arm-robot. The arm-robot then grasps the book, and all the robots return to the starting point.

In the Self-Stabilization I session, Volker Turau, a professor at the Hamburg University of Technology, presented his paper, “A Self-stabilizing Algorithm for the Edge Monitoring Problem”.

In wireless sensor networks, nodes sense the environment and transmit (or forward) data through the network. In the presence of an adversary, a set of compromised nodes may disrupt the network by corrupting or dropping messages. Thus, a set of nodes is chosen to monitor all communication on the edges using k-hop knowledge: a monitoring node x can monitor the communication on any edge whose endpoints are reachable within at most k hops from x. The challenging task is to find a minimum set of monitoring nodes in the network; this problem is NP-complete even for 1-hop knowledge.
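The k-hop monitoring relation itself is easy to state in code. The following is a small illustrative sketch of one node’s view (the graph and k are made up; the hard problem is choosing the smallest set of monitors so that every edge is covered, not computing a single node’s view):

```python
from collections import deque

def monitorable_edges(adj, x, k):
    """Edges of the graph whose endpoints both lie within k hops of node x."""
    dist = {x: 0}
    q = deque([x])
    while q:  # breadth-first search truncated at depth k
        u = q.popleft()
        if dist[u] == k:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    # keep each undirected edge once, as a frozenset of its endpoints
    return {frozenset((u, v)) for u in dist for v in adj[u] if v in dist}

# a path graph 0-1-2-3-4; with k = 1, node 2 can monitor edges (1,2) and (2,3)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(sorted(tuple(sorted(e)) for e in monitorable_edges(adj, 2, 1)))
# [(1, 2), (2, 3)]
```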

Two distributed approximation algorithms were previously known for this problem in a synchronous model with no transient faults. In this paper, the authors provide a self-stabilizing algorithm that computes a monitoring set in an asynchronous model in the presence of transient faults. In this algorithm, each node has a state that determines whether it is in or out of the monitoring set, and each node maintains a set of monitoring nodes for each of its adjacent edges. In each step, the state of a node changes only after obtaining the permission of all its neighbors.

We presented our paper, “Self-Healing Computation”, in the Self-Stabilization II session. Our contribution is a self-healing algorithm for computation networks that 1) detects computation corruptions made by Byzantine nodes; and 2) segregates such nodes, so that eventually no more corruptions occur.

We show that our self-healing algorithm asymptotically reduces the message cost compared to non-self-healing algorithms. Moreover, our experimental results show that the message cost is reduced by a factor of 425 compared to the naïve computation for a network of size 8k.

The paper also has an interesting side result: informally, given any tree of n nodes in which each node survives independently with constant probability, the probability that there is a subtree of surviving nodes of size Ω(log n) is at most ½.

In the Self-Stabilization III session, Toshimitsu Masuzawa, a professor at Osaka University, presented the paper “Edge Coloring Despite Transient and Permanent Faults”. The authors provide a self-stabilizing algorithm that colors the edges of a graph so that no node has two incident edges of the same color, in the presence of a Byzantine adversary on a ring topology. The basic idea is that coloring proceeds in steps: in one step, a node x proposes colors for its adjacent edges. After these colors are set, if a neighboring node y now has two incident edges of the same color, then y proposes a different color for the edge (x, y). However, this exchange of color proposals may not terminate in the presence of a Byzantine node. To overcome this, the authors use a rotating-priority procedure: each node has a priority for proposing a color in case of conflict, and these priorities change in round-robin fashion. Unfortunately, the algorithm does not color the graph with the minimum number of colors. A question I was left with: will their algorithm color the graph properly and terminate when a good node is surrounded by two Byzantine nodes in a ring?

Also, Hung Tran-The, a graduate student at the Universidade de Lisboa, presented his paper, “Tight Bounds for Stabilizing Uniform Consensus in Mobile Networks”. He provided a self-stabilizing algorithm in which the nodes reach agreement in the presence of crashes, send-omissions, or general omissions. However, the authors claimed that there is no self-stabilizing algorithm in the presence of a Byzantine adversary. This is arguable, given the existence of Byzantine agreement protocols. A question I was left with: can Byzantine agreement be implemented as a self-stabilizing algorithm?

Beyond the scientific content, I have a few comments about the city of Paderborn, a beautiful and quiet city in Germany. The mayor of Paderborn invited us to the town hall, where he gave us a presentation showing how beautiful the city is.


He told us that the “Pader” in “Paderborn” means water spring; Paderborn has many water springs.



We also visited a wonderful palace, Schloss Corvey, which is 57 km away from Paderborn.




Afterwards, we spent a good time at a restaurant near the palace.




Workshop on Applied Multi-Party Computation

Following is a very nice report of a Workshop on Applied Multiparty Computation at which my students Mahnush Movahedi and Mahdi Zamani gave talks.  Enjoy.    –   Jared

Workshop on Applied Multi-Party Computation: A Summary

Mahnush Movahedi and Mahdi Zamani

Recently, we attended a great workshop on applied Multiparty Computation (MPC) hosted by Microsoft Research Redmond and organized by Seny Kamara and Payman Mohassel. The workshop consisted of 21 talks spread over two days and focused mostly on practical applications of MPC. Although most talks were very interesting and valuable, we decided to summarize only those closest to our current research. For a great summary of the entire workshop, see the related posts on the MPC Lounge blog by a team from the Aarhus Crypto Group.

Secure Computation in 2029: Boom, Bust, or Bonanza (keynote speech), David Evans, University of Virginia

David gave a great talk about the past and future of MPC. He started with a few interesting predictions about where MPC will be in the year 2029 (15 years from now). He explained that the choice of 2029 was inspired by an essay by Neil deGrasse Tyson and by experiments indicating that scientific developments double every 15 years (see this for example). Using Google Scholar, David showed that up to 1999 about 160 papers had been published about MPC, while by 2013 that number had grown to about 700. He justified the 15-year doubling time again by referring to an interesting paper by Shafi Goldwasser in PODC 1997, where she correctly predicted a similar growth for MPC.

figure david-evans.jpg

Figure 1. David Evans

In order to predict the development of MPC by 2029, David estimated the U.S. government’s investment in MPC over the past 1-2 years at about $100 million, roughly its annual spending on the arts. He then made three exciting predictions about MPC in 2029.

  • Claim 1. The MPC industry will be bigger than the anti-malware industry in 2029. David justified this by extrapolating investments in IT security and predicting that they will probably decrease by 2029 due to progress in developing secure software.
  • Claim 2. High cost will no longer be the main impediment to the widespread use of MPC (at least for two-party computation). He explained this by estimating the cost of securely comparing two genomes (e.g., in genetic matchmaking) using MPC in 2004 (about $100 million) versus in 2029 (about $0.001). With security against a malicious adversary, the 2004 cost is estimated at about $80 billion, while in 2029 it is predicted to be about $0.005 per secure comparison.

figure secure-sequencing-passive.png
Figure 2. Estimated cost of building MPC for genome
comparison in the semi-honest model (courtesy of David Evans).
  • Claim 3. We don’t yet know what the “killer app” for MPC is, and it’s probably not privacy. David argued that the amount of money people pay for privacy is usually very small, so he predicts that the killer app will not use MPC merely for privacy; other aspects of MPC will probably prove very useful in some applications. After David’s talk there was a panel on the business case for MPC, where the killer app was discussed further. The panel’s video is available here.
David finally stated three things that really matter for the future of MPC.
  • What if the outputs leak? In MPC, the output of computation is revealed to everyone. This output may leak some information to the adversary. Is there a way to measure or limit this leakage?
  • How to ensure users accept and trust the MPC-based application?
  • How to implement MPC with low (human) cost?
David finished his presentation by talking about his recent book Dori-Mic and the Universal Machine!, which is for any “toddler who is falling behind in theoretical computer science” and, in general, for curious children of all ages! Watch David’s presentation here.

Broadcast (and Round) Efficient Secure Multiparty Computation, Juan Garay, Yahoo! Labs

Juan started with a history of MPC for arithmetic circuits in the share-compute-reveal paradigm, where a certain fraction of the parties are malicious. In this paradigm, inputs are first secret-shared among all parties using a Verifiable Secret Sharing (VSS) scheme. To get the sum of all inputs, each party simply adds its shares locally to obtain a share of the sum. The product of input shares, on the other hand, is not necessarily a valid share of the product of the inputs. Beaver [2] solved this by precomputing a multiplication triple in an offline phase and using it to compute a share of the product in the online phase.
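Beaver’s trick can be sketched with two-party additive sharing over a prime field. This is a toy illustration (the field size, helper names, and in-the-clear dealer are assumptions; a real protocol generates triples securely and uses each triple only once):

```python
import random

P = 2_147_483_647  # a Mersenne prime, chosen for illustration

def share(x):
    """Additively share x between two parties: x = s0 + s1 (mod P)."""
    s0 = random.randrange(P)
    return s0, (x - s0) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

# Offline phase: a dealer precomputes a triple (a, b, c) with c = a*b mod P.
a, b = random.randrange(P), random.randrange(P)
c = (a * b) % P
a_sh, b_sh, c_sh = share(a), share(b), share(c)

def beaver_multiply(x_sh, y_sh):
    """Online phase: compute shares of x*y using the precomputed triple."""
    # Each party opens its share of d = x - a and e = y - b; since a and b
    # are uniformly random, d and e reveal nothing about x and y.
    d = reconstruct((x_sh[0] - a_sh[0]) % P, (x_sh[1] - a_sh[1]) % P)
    e = reconstruct((y_sh[0] - b_sh[0]) % P, (y_sh[1] - b_sh[1]) % P)
    # x*y = c + d*b + e*a + d*e; the public constant d*e is added by one party.
    z0 = (c_sh[0] + d * b_sh[0] + e * a_sh[0] + d * e) % P
    z1 = (c_sh[1] + d * b_sh[1] + e * a_sh[1]) % P
    return z0, z1

x_sh, y_sh = share(7), share(6)
print(reconstruct(*beaver_multiply(x_sh, y_sh)))  # 42
```

The identity being used is x·y = (d + a)(e + b) = d·e + d·b + e·a + c, which is why one multiplication in the online phase costs only openings and local arithmetic.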

figure juan-garay.png

Figure 3. Juan Garay

In order to perform VSS in a point-to-point network, parties need access to a broadcast channel, which ensures that all parties receive the same message broadcast by the sender. Juan argued that, due to the lack of an efficient broadcast channel, VSS schemes should be measured in terms of broadcast complexity, with the goal of minimizing the number of broadcasts. He defined a relaxed type of secret sharing called Weak Secret Sharing (WSS), in which the recipients reconstruct either the value jointly held by them or a special symbol ⊥ indicating that the dealer is malicious. VSS extends WSS by requiring that the recipients always reconstruct the joint value, even if there is cheating in the reconstruction phase.

Juan presented a linear VSS protocol for an unbounded static adversary (t < n/2). The protocol uses only two broadcasts in the sharing phase and none in the reconstruction phase, what he calls a (2,0)-broadcast VSS. This is achieved by increasing the number of rounds in the sharing phase, which makes their protocol (20,1)-round. He compared this to the state-of-the-art VSS protocol of Kumaresan et al. [10], which is a (2,2)-broadcast, (3,2)-round VSS. The authors first construct a (2,1)-broadcast WSS protocol and then use its sharing phase to build a (3,0)-broadcast, (9,1)-round VSS based on the construction of Rabin and Ben-Or [13]. The number of broadcasts is reduced further to obtain a (2,0)-broadcast VSS using the moderated VSS of Katz and Koo [8], where the dealer acts as the moderator to simulate broadcast; such a broadcast is called a modercast. At the end of his talk, Juan explained that their VSS scheme can be used to construct efficient pseudosignatures, an information-theoretic analogue of digital signatures introduced by [4]. Such signatures can be created in a setup phase and used to simulate future invocations of broadcast or to construct efficient anonymous channels. Watch Juan’s presentation video here.

Asynchronous MPC with t<n/2 Using Non-Equivocation, Aniket Kate, MMCI, Saarland University

Aniket presented work on Asynchronous MPC (AMPC), where every message sent is assumed to be eventually delivered, but with indefinite delay. In the synchronous setting, MPC has been solved with t < n/2 malicious parties (under cryptographic assumptions), but in the asynchronous setting the best known algorithms tolerate only t < n/3 malicious corruptions. These resiliency bounds stem primarily from the difficulty of implementing the reliable broadcast channel needed to construct an Asynchronous Verifiable Secret Sharing (AVSS) protocol.

In order to verify correctness of a sharing, parties need to prevent the adversary from equivocation, which means making conflicting statements to different parties. A mechanism that makes equivocation impossible is called non-equivocation, which can also be transferable to allow a receiver to verifiably transfer the authenticated statement to other parties. Non-equivocation can be implemented using an increment-only counter and a digital signature oracle, which can be constructed using trusted hardware modules readily available in commodity computers and smartphone devices [12].

Their AMPC improves the resiliency bound from t<n/3 to t<n/2 and the communication complexity from O(n^4 κ) to O(n^3 κ) compared to the state-of-the-art AMPC of Beerliova et al. [3], where κ is the security parameter. The protocol assumes the existence of a transferable non-equivocation mechanism, which the authors believe is more practical than the synchronous broadcast round assumed by [3]. Watch Aniket’s presentation video here.

Quorums Quicken Queries: Efficient Asynchronous Secure Multiparty Computation, Mahnush Movahedi, University of New Mexico

Mahnush motivated her talk by the fact that guaranteeing synchrony in real applications is difficult and often impractical, so using MPC in such applications requires techniques for dealing with asynchrony in a malicious setting. One can design a scalable MPC protocol in this setting by delegating the computation of each gate to a logarithmic-size set of parties called a quorum, in which at most a certain fraction of the parties are malicious; the quorum then computes the gate using any MPC protocol. Mahnush explained that distributing communication and computation among many quorums requires extra coordination between parties, which they handle with a number of efficient tools and techniques.

figure mahnush-movahedi.jpg

Figure 4. Mahnush Movahedi

In a setup phase, a set of n quorums is created using the quorum-building algorithm of King et al. [9], optimized with the Byzantine agreement scheme of Braud-Santoni et al. [6], which costs soft-O(1) bits per party. Mahnush argued that the parties in each quorum need a mechanism to coordinate with other quorums on when to start computing on asynchronous inputs; in other words, they must wait until a sufficient number of inputs has been received before starting the computation.

To this end, they propose counting the number of ready inputs using a distributed data structure called a τ-counter, where τ = n − t is the threshold on the number of inputs to be received before the circuit is evaluated, and t < n/8 is the number of malicious parties. Using a τ-counter, MPC can be solved without assuming any synchrony points in the algorithm.
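The threshold behavior of a τ-counter can be sketched as a tiny class. This is only an illustration of the firing condition; the paper’s τ-counter is a distributed data structure maintained jointly by untrusting parties, and the class name and parameters here are assumptions:

```python
class TauCounter:
    """Toy tau-counter: the circuit evaluation starts once tau = n - t
    inputs are marked ready (a centralized sketch, not the distributed one)."""

    def __init__(self, n, t):
        self.tau = n - t
        self.ready = set()

    def input_ready(self, party_id):
        """Record one party's input; return True once tau inputs have arrived."""
        self.ready.add(party_id)
        return len(self.ready) >= self.tau

counter = TauCounter(n=8, t=0)  # with t < n/8 and n = 8, t must be 0, so tau = 8
fired = [counter.input_ready(p) for p in range(8)]
print(fired.index(True))  # fires on the 8th input, index 7
```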

To send the output of a quorum to the next quorum, the parties in the first quorum generate a joint random number and mask the output by adding the random number to it. This ensures that no coalition of malicious parties can learn the intermediate outputs. The parties of the next quorum then run an MPC to unmask the inputs and evaluate the associated gate over them. The protocol requires each party to send soft-O(m/n) bits and compute soft-O(m/n) operations, where m is the size of the circuit. Watch Mahnush’s presentation video here.
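The masking step works like a one-time pad over a prime field. The sketch below is illustrative (the field size, quorum size, and variable names are assumptions; in the protocol the pad is generated jointly, no single party ever sees it whole, and unmasking happens only inside the next quorum’s MPC):

```python
import random

P = 2_147_483_647  # prime field modulus, an illustrative choice
rng = random.Random(3)

quorum_size = 5
# each quorum member contributes a uniformly random field element
contributions = [rng.randrange(P) for _ in range(quorum_size)]
pad = sum(contributions) % P       # the joint random number
output = 12345                     # the quorum's intermediate output
masked = (output + pad) % P        # safe to transmit: uniformly distributed
# the next quorum, holding the pad in shared form, unmasks inside its MPC
print((masked - pad) % P)  # 12345
```

The key property is that as long as at least one contribution is unknown to the adversary, the pad (and hence the masked value) is uniformly random from its point of view.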

Secure Collaborative Statistics in Credit Rating Using Linear Programming, Tomas Toft, Aarhus University

At the start of his talk, Tomas described how MPC can be used to implement secure collaborative credit rating for Danish dairy farmers who are seeking loans. Credit rating is one of the interesting classical problems solved using MPC, and Tomas described how it can be modeled as linear programming. A linear program consists of n variables and m constraints, and the goal is to maximize an objective function; a solution is an assignment to the variables such that all constraints hold.

One well-known strategy for solving linear programs is the simplex method. Tomas argued that, despite its exponential worst-case complexity, simplex is efficient in practice and easy to implement, since it only requires integer arithmetic and comparisons. Their protocol solves the linear program using black-box access to the secure modular arithmetic of Damgård et al. [7], along with an additional sub-protocol for comparison (see Lipmaa and Toft [11]).

To solve a linear program with n = 285 variables and m = 4 constraints, the presented protocol requires 2mn + 6m^2 + n ≈ 2700 multiplications and n + 3m ≈ 300 comparisons. A Java implementation of their protocol running on Amazon’s cloud services performs this computation in around 5 minutes. The implementation is being demoed for actual banks using real data, allowing bankers to jointly rank the farmers by their credit scores. For more about Tomas’s motivation for this problem, see another blog post about his talk here.
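The quoted operation counts are easy to check for the stated problem size:

```python
# operation counts for the secure simplex protocol at n = 285, m = 4,
# using the formulas quoted in the talk
n, m = 285, 4
multiplications = 2 * m * n + 6 * m ** 2 + n   # 2280 + 96 + 285 = 2661, about 2700
comparisons = n + 3 * m                        # 285 + 12 = 297, about 300
print(multiplications, comparisons)  # 2661 297
```

Note that comparisons dominate the running time in practice, since each secure comparison is itself a sub-protocol rather than a single field operation.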

Securely Solving Standard Network Flow Problems with Secure Multi-Party Computation, Abdelrahaman Aly, C.O.R.E., Université catholique de Louvain

Abdel introduced a new class of problems in which a graph is shared among the parties as their input. The parties want to evaluate a function, such as max-flow or shortest path, over this shared graph. In most combinatorial problems, including many graph problems, the execution path depends on the input data; thus, even if all inputs are properly shared or encrypted among the parties, the execution path itself may reveal information to the adversary.

As an example of branching that depends on a secret input, Abdel mentioned a loop whose stopping condition depends on the input: implemented in the traditional, non-secure way, it lets the adversary exploit the branching pattern and running time of the algorithm to learn something about the secrets. It is therefore necessary to translate traditional graph algorithms into secure versions. Unfortunately, this translation increases the computational cost, with overhead depending on the algorithm and the graph. For example, the traditional Edmonds-Karp algorithm finds a max-flow in O(|V||E|^2) time and optimized variants run in O(|V|^2|E|), while the complexity of the secure version is O(|V|^5), where |V| and |E| are the numbers of vertices and edges, respectively. Unfortunately, the actual costs of their protocol are still too high for practical purposes.

MPC in Large Networks with an Application to Anonymous Broadcast, Mahdi Zamani, University of New Mexico

Mahdi argued that MPC with many players is becoming increasingly important with the growth of modern networks, and suggested that secure computations can be performed efficiently over such large networks. One motivation is that MPC can protect freedom of speech in large social networks like Twitter by providing anonymous communication; MPC in large networks could also be used to analyze massive data. He proposed an MPC protocol secure against a malicious adversary corrupting up to a 1/10 fraction of the parties. The protocol sends on average O(m log^3 n) bits and computes O(m log^4 n) operations in O(d) rounds, where m is the size of the circuit and d is its depth.

figure mahdi-results.png

Figure 5. Mahdi’s microbenchmark results

Mahdi proposed reducing the communication cost of the protocol by performing local communication within logarithmic-size groups of parties called quorums, where the number of adversary-controlled parties in each quorum is at most a certain fraction. The quorums are created in an offline phase using the Byzantine agreement protocol of Braud-Santoni et al. [6]. The offline phase also uses the fully homomorphic encryption scheme of Brakerski et al. [5] to evaluate a small-depth circuit; this is done to generate a sufficient number of Beaver multiplication triples [2].

In the online phase, each input is secret-shared in a quorum using Shamir’s secret-sharing scheme. Each gate of the circuit is computed by a quorum over shares, with multiplication performed using Beaver’s triples. The parties in the quorum send the result of this computation to all quorums associated with gates that need it as input. The output must be sent from one quorum to another without revealing any information to any individual party or coalition of adversarial parties. Mahdi solves this by creating fresh shares of the output for the target quorum, where the new shares are points on a new random polynomial that still evaluates to the (secret) output at 0.
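The algebra behind “fresh shares of the same secret” can be sketched with plain Shamir sharing. The sketch below reconstructs the secret only to demonstrate that a new random polynomial through the same point at 0 yields independent shares; in the protocol, resharing is done distributively and the secret is never reconstructed (field size, parameters, and helper names are illustrative assumptions):

```python
import random

P = 2_147_483_647  # prime field modulus, an illustrative choice

def shamir_share(secret, t, n):
    """Shares of a random degree-t polynomial f with f(0) = secret, at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def lagrange_at_zero(shares):
    """Reconstruct f(0) from any t+1 shares by Lagrange interpolation."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

random.seed(1)
old = shamir_share(1234, t=2, n=7)                          # source quorum's shares
fresh = shamir_share(lagrange_at_zero(old[:3]), t=2, n=7)   # fresh shares, same secret
print(lagrange_at_zero(fresh[:3]))  # 1234
```

Because the fresh polynomial’s non-constant coefficients are chosen anew, the fresh shares are statistically independent of the old ones, so the target quorum learns nothing beyond its own shares.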

Mahdi argued that the anonymous broadcast problem can be reduced to the problem of Multi-Party Sorting (MPS), where parties want to jointly sort their inputs. In order to perform MPS, they evaluate the sorting network of Batcher [1]. At the end of his talk, Mahdi showed microbenchmark results for several runs of MPS using their MPC protocol: for example, in a network of about 30 million parties, each with a 20-byte input, the protocol requires each party to send 5 kilobytes on average per item sorted. See Figure 5.
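A sorting network is a fixed, data-independent sequence of compare-exchange operations, which is what makes it suitable for oblivious evaluation over shares. Below is a sketch of Batcher’s odd-even mergesort run in the clear on plain integers (the iterative formulation is a standard one and assumes the input length is a power of two; this is not the MPC version):

```python
def batcher_sort(a):
    """Sort a list (length a power of two) via Batcher's odd-even mergesort."""
    a = list(a)
    n = len(a)
    p = 1
    while p < n:            # merge sorted runs of length p into runs of 2p
        k = p
        while k >= 1:       # compare-exchange at decreasing strides k
            for j in range(k % p, n - k, 2 * k):
                for i in range(min(k, n - j - k)):
                    # only compare within the same 2p-sized block
                    if (i + j) // (2 * p) == (i + j + k) // (2 * p):
                        lo, hi = i + j, i + j + k
                        if a[lo] > a[hi]:
                            a[lo], a[hi] = a[hi], a[lo]
            k //= 2
        p *= 2
    return a

print(batcher_sort([5, 3, 8, 1, 9, 2, 7, 6]))  # [1, 2, 3, 5, 6, 7, 8, 9]
```

Since the positions compared never depend on the data, an MPC protocol can run the same schedule on secret-shared values, revealing nothing about the inputs through its communication pattern.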

MEVAL: A Practically Efficient System for Secure Multi-party Statistical Analysis, Koki Hamada, NTT Secure Platform Laboratories

Koki described MEVAL (Multi-party EVALuator), an implementation of an MPC protocol optimized for statistical analyses. His talk covered the techniques used to make the protocol efficient and the experiments conducted to evaluate the system. Computation in MEVAL is performed over values shared with Shamir’s secret-sharing scheme among three parties, at most one of which is controlled by a passive adversary. MEVAL can also be used in a server-based setting, where clients share their inputs among three servers; one application of such a setting is secure outsourcing of data storage.

Koki mentioned that the protocol was carefully implemented to achieve high efficiency, using techniques such as asynchronous processing, pseudorandom secret sharing, and optimized field operations. The main idea behind the asynchronous implementation is better resource usage: computation and communication proceed in parallel rather than sequentially, which significantly decreases the running time. MEVAL achieves 8.7 million multiplications per second and can sort one million values in less than 7 seconds.

figure wampc-caspian.jpg

Figure 6. A great dinner with (from right to left)
Saeed Sadeghian, Arash Afshar, Yan Huang,
Elaine Shi, and David Evans.


[1] K. E. Batcher: “Sorting Networks and Their Applications”, Proceedings of the April 30-May 2, 1968, Spring Joint Computer Conference, pp. 307-314, 1968.

[2] Donald Beaver: “Efficient Multiparty Protocols Using Circuit Randomization”, in Advances in Cryptology - CRYPTO '91 (Joan Feigenbaum, ed.). Springer Berlin Heidelberg, 1991.

[3] Zuzana Beerliová-Trubíniová, Martin Hirt, Jesper Buus Nielsen: “On the Theoretical Gap Between Synchronous and Asynchronous MPC Protocols”, Proceedings of the 29th Symposium on Principles of Distributed Computing, pp. 211-218, 2010.

[4] Birgit Pfitzmann, Michael Waidner: “Information-theoretic Pseudosignatures and Byzantine Agreement for t ≥ n/3”. Technical Report RZ 2882 (#90830), IBM Research, 1996.

[5] Zvika Brakerski, Craig Gentry, Vinod Vaikuntanathan: “Fully Homomorphic Encryption Without Bootstrapping”, Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 309-325, 2012.

[6] Nicolas Braud-Santoni, Rachid Guerraoui, Florian Huc: “Fast Byzantine Agreement”, Proceedings of the 2013 ACM Symposium on Principles of Distributed Computing, pp. 57-64, 2013.

[7] Ivan Damgård, Valerio Pastro, Nigel P. Smart, Sarah Zakarias: “Multiparty Computation from Somewhat Homomorphic Encryption”, Advances in Cryptology - CRYPTO 2012, pp. 643-662, 2012.

[8] Jonathan Katz, Chiu-Yuen Koo: “On Expected Constant-Round Protocols for Byzantine Agreement”, in Advances in Cryptology - CRYPTO 2006 (Cynthia Dwork, ed.). Springer Berlin Heidelberg, 2006.

[9] V. King, S. Lonergan, J. Saia, A. Trehan: “Load Balanced Scalable Byzantine Agreement through Quorum Building, with Full Information”, International Conference on Distributed Computing and Networking (ICDCN), 2011.

[10] Ranjit Kumaresan, Arpita Patra, C. Pandu Rangan: “The Round Complexity of Verifiable Secret Sharing: The Statistical Case”, in Advances in Cryptology - ASIACRYPT 2010 (Masayuki Abe, ed.). Springer Berlin Heidelberg, 2010.

[11] Helger Lipmaa, Tomas Toft: “Secure Equality and Greater-Than Tests with Sublinear Online Complexity”, in Automata, Languages, and Programming. Springer Berlin Heidelberg, 2013.

[12] Michael Backes, Fabian Bendun, Ashish Choudhury, Aniket Kate: “Asynchronous MPC with t < n/2 Using Non-equivocation”. Cryptology ePrint Archive, Report 2013/745, 2013.

[13] Tal Rabin, Michael Ben-Or: “Verifiable Secret Sharing and Multiparty Protocols with Honest Majority”, Proceedings of the 21st Annual ACM Symposium on Theory of Computing, pp. 73-85, 1989.

Copyright (C) 2014 Mahnush Movahedi and Mahdi Zamani
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that new copies bear the full citation. Abstracting with credit is permitted. This work may be published or be a basis for publication in the future. Copyright may be transferred without further notice and this version may no longer be accessible.

SODA 2014
January 15, 2014, 2:55 pm
Filed under: Uncategorized

Following is a conference report on SODA 2014 by my student Mahnush Movahedi [Jared]

SODA Conference Notes

This report is for those of you who could not attend SODA 2014 and are eager to hear about it. There were many exciting talks, ranging over algorithms, complexity, approximation algorithms, discrete optimization, combinatorics, graph theory, and game theory, in three parallel sessions.

Herbert Edelsbrunner was the invited plenary speaker on the first day. He described his journey into the fascinating world of geometric forms in his talk, “Shape, Homology, Persistence, and Stability”. Given a set S of points in the plane, the alpha shape is one interpretation of the shape formed by these points (it is a generalization of the convex hull). The 2-dimensional alpha shape generalizes to higher dimensions, notably the 3-dimensional case. He also discussed homology and persistence: persistent homology is an algebraic method for measuring topological features of shapes, and the stability of persistence diagrams under perturbations was a key point of his talk.

On day two at SODA, I was especially intrigued by a session on distributed algorithms and self-organizing particle systems. David Doty gave a very interesting talk on the paper “Timing in chemical reaction networks”. He started the talk with a movie in which a white blood cell chases a bacterium by the power of chemical reactions. Chemical Reaction Networks (CRNs) formally model the behavior of chemical systems, and can be viewed as an implementable programming language for the design of artificial molecular control circuitry.

It has been shown that CRNs can simulate a bounded-space Turing machine with arbitrarily small positive probability of error. A + B → C + D is an example of a reaction in the CRN model, where A, B, C, and D are molecular species. The “program” of a CRN consists of a set of reactions of this type, along with an initial population of molecular species. The “computation” proceeds by a stochastic process that repeatedly chooses one reaction from the set and applies it to the appropriate molecular species.

Angluin, Aspnes, and Eisenstat, in their paper “Fast computation by population protocols with a leader”, relate CRN reactions to a leader-election protocol. Starting from a uniform initial configuration of n copies of L, when two candidate leaders encounter each other, one drops out (via the reaction L + L → L + N). However, electing a leader in this manner incurs an exponential slowdown, and there is no way for the leader to know when it has been elected. These problems motivate the need for a timer CRN. In the search for one, David shows that if a CRN respects finite density (at most O(n) additional molecules can be produced from n initial molecules), then starting from any dense initial configuration (every molecular species initially present has count Θ(n), where n is the initial molecular count and volume), every producible species is produced in constant time with high probability. This implies that no CRN obeying these constraints can function as a timer. I had a chance to have lunch with David and discuss this work further; he is a fun and knowledgeable person who believes that implementing a timer is a core issue behind any population protocol.
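The leader-election reaction L + L → L + N is easy to simulate with random pairwise encounters. The sketch below is a toy illustration (population size and randomness seed are arbitrary choices, not values from the paper):

```python
import random

def elect_leader(n, rng):
    """Run random pairwise encounters until one candidate leader remains.

    Returns the total number of encounters, including those where the
    L + L -> L + N reaction did not fire.
    """
    is_leader = [True] * n          # every molecule starts as a candidate L
    leaders, encounters = n, 0
    while leaders > 1:
        i, j = rng.sample(range(n), 2)   # two uniformly random molecules meet
        encounters += 1
        if is_leader[i] and is_leader[j]:
            is_leader[j] = False         # one candidate drops out: L+L -> L+N
            leaders -= 1
    return encounters

rng = random.Random(0)
n = 100
# exactly n - 1 reactions must fire, so at least n - 1 encounters are needed
print(elect_leader(n, rng) >= n - 1)  # True
```

As the number of remaining leaders shrinks, L + L encounters become rare, which is the source of the slowdown the talk mentioned; and nothing in the process tells the final L that it has won.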

Jared Saia talked about work with Valerie King on “Faster Agreement via a Spectral Method for Detecting Malicious Behavior”. To understand the work, let’s start with the definition of Byzantine agreement: 1) bring n processors, each with a private bit, to agreement on a single common bit; and 2) ensure that this bit is the same as the input of one good processor. This problem is more difficult in the presence of a strong adversary, who has full information, can actively control the behavior of a constant fraction of the processors, and can control the scheduling of the delivery of messages. Jared presented an algorithm that can solve Byzantine agreement for n processors in expected O(n^3) communication time, expected polynomial computation time per processor, and expected polynomial bits of communication. The algorithm is based on repeatedly generating a fair global coin, which is generated from coin flips by individual processors. The adversary can corrupt this process by generating biased individual coin flips. Spectral analysis is used in this work to detect such adversarial behavior: each processor maintains a matrix containing the sums of the coin flips received in each iteration. The top right eigenvector of this matrix is used to detect adversarial processors, since entries with high absolute value in this eigenvector correspond to processors that are trying to bias the global coin. Overall, a very nice paper.
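As a rough illustration of the spectral idea (my own sketch, not the paper’s actual algorithm): stack the received coin-flip sums into a rounds-by-processors matrix and look at its top right singular vector; processors whose columns carry correlated bias dominate it. The function name and parameters below are made up for the example.

```python
import numpy as np

def suspect_biased(flip_sums, k):
    """flip_sums: a (rounds x processors) matrix whose (t, j) entry is the
    sum of coin flips received from processor j in round t.  Returns the
    indices of the k processors with the largest-magnitude entries in the
    top right singular vector of the centered matrix -- the processors
    most responsible for correlated bias in the global coin."""
    centered = flip_sums - flip_sums.mean(axis=0)
    # Top right singular vector of `centered` = top eigenvector of its
    # Gram matrix; correlated (adversarial) columns dominate it.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    top = np.abs(vt[0])
    return list(np.argsort(top)[-k:])
```

For example, if two processors coordinate by sending large, identically signed sums each round while honest processors send sums of fair flips, those two columns dominate the top singular vector and the function flags them.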

I would also like to mention another talk I found interesting: “Intrinsic Universality in Tile Self-assembly Requires Cooperation”, presented by Andrew Winslow, which proves that Winfree’s abstract Tile Assembly Model is not intrinsically universal when restricted to noncooperative (temperature 1) tile binding. The Tile Assembly Model is a simple model whose fundamental components are translatable but un-rotatable square or cube tiles. Tiles’ sides are labeled with glue colors, each with an integer strength. Two tiles placed next to each other interact if the glue colors on their abutting sides match, and they bind if the strengths of their matching glues sum to at least a certain (integer) temperature. Temperature 1 binding is where all tiles bind to each other if they match on at least one side; temperature 2 and higher are defined similarly. Interestingly, the Tile Assembly Model is indeed intrinsically universal when temperature 2 binding is used, but it is not intrinsically universal when restricted to temperature 1 binding.
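The binding rule is easy to state in code. Below is a minimal sketch of the attachment check in the abstract Tile Assembly Model (my own code, with a made-up tile representation: each side maps to a (color, strength) glue).

```python
def binds(tile, neighbors, temperature):
    """Decide whether `tile` can attach at a site (simplified aTAM sketch).

    `tile` maps each side ('N','E','S','W') to a (color, strength) glue.
    `neighbors` maps the sides facing an already-placed tile to that
    tile's abutting glue.  Glues interact only when color and strength
    both match; the tile attaches iff matching strengths sum to at least
    the temperature."""
    total = 0
    for side, neighbor_glue in neighbors.items():
        if tile[side] == neighbor_glue:
            total += tile[side][1]  # add the strength of the matching glue
    return total >= temperature
```

Under this rule, temperature 1 lets any single matching glue suffice, while temperature 2 forces "cooperation": either one strength-2 glue or two strength-1 glues must match, which is exactly the distinction the talk’s result turns on.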

I also enjoyed the game theory session, which was full of new ideas. “Constrained Signaling in Auction Design” by Shaddin Dughmi, Nicole Immorlica, and Aaron Roth is a well-written paper that focuses on auctions where the auctioneer does not have the capacity to describe to the buyers the exact identity of the good. Instead, the auctioneer describes the good using signals from a constrained set. Dimitris Paparas talked about “The Complexity of Optimal Multidimensional Pricing”, a paper about the problem of computing a revenue-optimal pricing. They show that the decision version of this problem is NP-complete for distributions of support size 3, as well as for identical distributions. In his talk on “The Complexity of Optimal Mechanism Design”, Christos Tzamos argued that optimal mechanism design is complex, and that a revenue-optimal auction in multi-item settings cannot be found and implemented in a computationally efficient way.

Finally, I would like to mention the Abstract Talk website as another good source to learn more about SODA.

Copyright (C) 2014 Mahnush Movahedi
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that new copies bear this notice and the full citation on the first page. Abstracting with credit is permitted. This work may be published or be a basis for publication in the future. Copyright may be transferred without further notice and this version may no longer be accessible.

CRA Snowbird
July 30, 2012, 6:53 pm
Filed under: Uncategorized | Tags: , ,

Last week, I attended the CRA Snowbird conference for the first time. This conference is held every two years and is attended by chairs and associate chairs from departments in the U.S. and Canada, faculty members on boards of organizations like CRA, representatives from industry and funding organizations, and several big-name researchers.

The conference was much more interesting than I expected.  Below are my notes from the conference, which are (as always) sketchy, biased and incomplete.

Online Education

1) John Hennessy – President of Stanford University (and former CS professor)

  • Until the invention of the printing press, doubling rate for universities was about every 100 years
  • Major university cost was the library
  • After printing press, university cost decreased dramatically -> huge increase in access to education -> dramatic societal impact
  • Currently, major university cost is faculty salaries.  Research faculty are very expensive (faculty salaries track dentist salaries)
  •  [You can see where this is going…]
  • Claim: Advent of online education -> decrease in number of faculty in U.S. -> number of research universities in the U.S. has peaked.

This talk generated a *huge* amount of controversy. Hennessy (purposefully?) ignored the research benefits produced by research faculty.

2) Salman Khan (of Khan academy) and Peter Norvig (Google fellow and online ML course instructor)

  • Online education has the potential to dramatically improve learning
  • Only need to read one paper on education [Bloom ’84] (Bloom’s 2-sigma paper), which I [Jared] will now sum up in one sentence: “Private tutoring improves student performance by two standard deviations over lecture-based instruction”
  • If we can “automate” the effects of private tutoring through interactive educational technology, we will dramatically improve student performance.
  • Example: Students who answer a question incorrectly in a Stanford online class can be clustered and pointed to information that can help them improve their answers
  • Norvig: motivation is extremely crucial in online classes. His primary focus is keeping students motivated and in contact with small groups of other students to create social dependencies
  • Khan: want to avoid too much polish – polished lectures can be boring and inauthentic.
  • Physically attending a university will always be a better experience when done right. Universities need to ensure it’s “done right” so that physical attendance offers something more than what can be obtained online.
  • Online education will likely replace or at least supplement traditional textbooks


Farnam Jahanian (Assistant Director of NSF for CISE)

  • NSF funding for computer science has been increasing by ~6% per year for the last 5 years. (yes, really)
  • Contrasts with increases of ~3% per year for most other disciplines
  • Big effort to maintain these increases. Helps immensely if the CS community actively publicizes research successes – go do this.
  • Feels that CS will be better shielded than most disciplines from political vagaries and educational tsunamis

Big Networks

Jeffrey Dean (Google Fellow and co-inventor of MapReduce)

  • “Make a reliable whole out of unreliable parts”; “make a low-latency whole out of variable-latency parts”. Discussed a clever trick for ensuring low latency on a massive server farm
  • Described several applications of neural nets distributed over 1 million CPUs. Neural nets are robust and inherently distributed, so they work well on massive networks. Massive neural nets give huge improvements in accuracy for 1) speech recognition (“improvement is equivalent to 20 years of research in the field”) and 2) image recognition (double the accuracy compared to the state of the art)

Big Data

Shwetak Patel (UW): Created a device you plug into an electrical outlet that tracks the energy usage of all devices in your house. How? Every type of appliance generates a unique EM signature. His device uses this noisy signal to track each device’s energy usage with surprising accuracy. Amazingly, he can do the same thing for water (using a special sensor on a faucet) and gas with appropriate devices. This was really cool!

Daphne Koller (Stanford): Machine learning using clinical data to detect: 1) infant health, with much higher accuracy than Apgar and other more invasive tests – her analysis considers only time-signal data for infant respiration; and 2) aggressiveness of breast cancer. In both cases, results from the ML approach are vastly more accurate than results from traditional medical tests (e.g. Apgar). Her recent focus is applying ML to online education data to automatically determine the appropriate information to present to students in order to help them fix common mistakes.

Map for SODA
January 13, 2012, 6:45 pm
Filed under: Uncategorized | Tags: , ,

Below is a map of some restaurants, cafes, shrines, museums, etc that are around the Westin Miyako Hotel where SODA will be this year.  This map was made by Shinobu Taniguchi, a very helpful Japanese friend of mine who lives in Osaka, Japan.

See you in Kyoto!

Map of Kyoto around the Miyako Hotel


PODC Day 3
June 13, 2011, 9:56 pm
Filed under: Uncategorized | Tags: , ,

The following is a guest post by Mahnush Movahedi.  Mahnush is a first-year PhD student at UNM.  She got her Masters degree from Amirkabir University of Technology in Iran, and I believe this was her first conference in the U.S.

On the third day of the PODC conference, there were several interesting talks, including:

  • “Stability of P2P Communication Systems” by Ji Zhu and Bruce Hajek.  This paper discussed the missing piece syndrome, in which one piece becomes very rare in a network like BitTorrent, leading to instability in the system. They calculate the minimum amount of help needed from the seed nodes in an information dissemination game in order to stabilize the system and ensure that all nodes receive the file.
  • “Tight Bounds on Information Dissemination in Sparse Mobile Networks” by Pettarin, et al. is a study of the dynamics of information dissemination between k agents performing independent random walks on an n-node grid. They assume communication is determined by a dynamic communication graph process G_t, where two agents are connected in G_t iff their distance at time t is within a specified broadcast radius.  They assume that a rumor travels completely throughout a connected component of G_t during round t.  They study the broadcast time, T_B, of the system, which is the time it takes for all agents to know the rumor.  They study the sparse case for the broadcast radius, where, whp, all components are of size O(log n).  Their main result is that T_B is soft-Theta(n/sqrt(k)).  In particular, for this sparse case, the broadcast time does not depend on the broadcast radius.
There were also some interesting brief announcements:
  • “Rationality Authority for Provable Rational Behavior” by Dolev et al. considers games in which players are not fully rational and smart. They define a game-inventor agent that is able to find the best response of the game and present it to the players. The inventor is rational and may gain revenue from the game. Thus, they introduce verifiers as trusted service providers that can verify the inventor’s advice using formal methods. During dinner on Tuesday, I had a chance to talk to Elad, who presented the talk. He believes the separation of interests, benefits, and goals is the key idea of the work.
  • “Distributed Computing with Rules of Thumb” by Jaggard et al. indicates that a large and natural class of simple algorithms fails to guarantee convergence to an equilibrium in an asynchronous setting, even if the nodes and communication channels are reliably failure-free.  In particular, they consider algorithms like “best replying” to other player’s actions and minimizing “regret”.  They show that these algorithms fail to ensure convergence, in the asynchronous setting, for problems in routing, congestion control, social networks and circuit design.  
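As an aside, the rumor-spreading model from the Pettarin et al. paper above is easy to simulate. The following is a toy sketch (my own code, with made-up parameters): k agents perform lazy random walks on a g x g grid, and each round the rumor floods the connected components of the proximity graph G_t.

```python
import random
from collections import deque

def broadcast_time(g, k, radius, seed=0):
    """Toy simulation of rumor spreading among k independent random
    walkers on a g x g grid.  Each round every agent takes a lazy random
    step; the rumor then floods each connected component of the graph
    linking agents within Manhattan distance `radius`.  Returns the
    number of rounds until every agent knows the rumor."""
    rng = random.Random(seed)
    pos = [(rng.randrange(g), rng.randrange(g)) for _ in range(k)]
    knows = [False] * k
    knows[0] = True  # agent 0 starts with the rumor
    rounds = 0
    while not all(knows):
        rounds += 1
        # Lazy random-walk step; stay put if the move would leave the grid.
        for i, (x, y) in enumerate(pos):
            dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)])
            if 0 <= x + dx < g and 0 <= y + dy < g:
                pos[i] = (x + dx, y + dy)
        # Flood the rumor through the components of the proximity graph G_t.
        frontier = deque(i for i in range(k) if knows[i])
        while frontier:
            i = frontier.popleft()
            for j in range(k):
                near = abs(pos[i][0] - pos[j][0]) + abs(pos[i][1] - pos[j][1]) <= radius
                if not knows[j] and near:
                    knows[j] = True
                    frontier.append(j)
    return rounds
```

Averaging `broadcast_time` over many seeds for various g, k and radius values is a quick way to get intuition for the soft-Theta(n/sqrt(k)) bound and its insensitivity to the radius in the sparse regime.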

PODC Day 2
June 9, 2011, 12:02 am
Filed under: Uncategorized | Tags: , ,

Several interesting talks today, including the following:

“Compact Policy Routing” by Rétvári et al. considered the problem of designing small routing tables for routing when the optimization criterion is not necessarily path length. In particular, they note that Internet routing doesn’t use shortest paths, but instead is policy routing: making use of economic and political concerns. Their paper defines routing algebras and explores the compressibility of routing tables for different types of optimization functions. A main result of the paper shows that BGP policy routing essentially must give rise to very large routing tables.

“Locally Checkable Proofs” by Goos and Suomela classifies various graph problems according to their local proof complexity. For example, locally checking whether a graph is bipartite requires each node to hold a proof of just 1 bit (the node’s color in a 2-coloring of a graph). In contrast, locally showing that a graph is *not* bipartite requires each node to hold a proof of size Theta(log n) (shown in the paper). The paper categorizes a dozen or so graph problems in terms of locally checkable proof sizes. Possibly interesting connections exist between this paper and the “Toward More Localized Local Algorithms” by Korman, Sereni and Viennot on Day 1. The proofs from this paper could possibly be plugged in as the “pruning algorithms” required by Korman et al.
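To illustrate the bipartiteness example, here is a minimal sketch (my own code, not from the paper) of the 1-bit local verifier: each node’s proof is its side of a 2-coloring, and verification just checks every node against its neighbors.

```python
def locally_check_bipartite(adj, colour):
    """Local verification of a 1-bit-per-node proof of bipartiteness.

    `adj` maps each node to its neighbour list; `colour` maps each node
    to its proof bit (its side in a claimed 2-colouring).  Every node
    accepts iff all its neighbours hold the opposite bit; the graph is
    bipartite iff some proof makes all nodes accept."""
    return all(colour[u] != colour[v] for u in adj for v in adj[u])
```

For a non-bipartite graph like a triangle, no assignment of bits makes all nodes accept, which is why proving *non*-bipartiteness needs the larger Theta(log n)-bit proofs mentioned above.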

“Fault-tolerant spanners” by Dinitz and Krauthgamer builds spanners that are robust to the deletion of r nodes. Specifically, any algorithm that can create a k-spanner (k >= 3) with f(n) edges can be converted to one that creates a k-spanner with O(r^3 log n)*f(2n/r) edges that is “robust” to the deletion of r nodes. “Robust” here means that for any set of nodes F with |F| <= r, the spanner with F deleted is still a k-spanner of the original graph with F deleted. The algorithm is technically quite interesting, making use of a clever LP relaxation and the Lovász Local Lemma.

“The Round Complexity of Distributed Sorting” by Patt-Shamir and Teplitsky considers a fully connected network where in each round, each node can send an O(log n)-bit message to every other node (this is the CONGEST model with diameter 1). They first show that sorting in this model, when each node has at most n items, can be done in O(log log n) rounds, and that selection can be done in O(1) rounds. Then, using a concurrent result by Lenzen and Wattenhofer on routing in the same model (in STOC), they further reduce sorting to O(1) rounds. The interaction between this paper and the result by Lenzen and Wattenhofer is neat, and the model itself is interesting (sort of diametrically opposed to LOCAL) and seems very powerful.

A quick final mention of two papers presented today that I was a co-author on. Varsha Dani gave a nice talk on our “Scalable Rational Secret Sharing” paper, and Maxwell Young gave a nice talk on our “Conflict on a Communication Channel” paper.

PODC Day 1
June 7, 2011, 2:44 pm
Filed under: Uncategorized | Tags: , ,


FCRC is happening right now at the San Jose conference center.  The following is a guest post by Maxwell Young on Day 1 of PODC.

A nice opening talk was given by Rotem Oshman on the paper “Coordinated Consensus in Dynamic Networks”, which is joint work with Fabian Kuhn and Yoram Moses.  Many consensus models assume that direct communication is possible between any pair of nodes (i.e., a clique communication graph G). Furthermore, the graph topology is typically assumed to be static. Here, the authors studied a more general problem setting where G is connected, but not necessarily a clique, and the topology can change over time to reflect unreliable links and connectivity failures. Eventual, simultaneous, and Delta-coordinated consensus are all examined, with Rotem spending more time discussing lower bounds for the last two variants. Indeed, when it comes to simultaneous consensus, the news is bad, as the authors show a lower bound of n-1 rounds. Relaxing this to Delta-coordinated consensus does not provide much relief: here, *in the worst case*, the round complexity is n-Delta-1.  However, under certain (still fairly general) circumstances, one can do much better — for instance, in the “clear-majority” case where the fraction of identical inputs is bounded away from 1/2. Rotem also provided an overview of an intricate argument involving a static line graph that yields a lower bound of similar form to the clear-majority result, although not quite tight with the upper bound.
Another interesting morning talk was given by Guanfeng Liang on the topic of achieving consensus in the presence of Byzantine faults (joint work with Nitin Vaidya). Of course, there has been an enormous amount of work done in this area. The main contribution by Liang and Vaidya is demonstrating an *error-free* consensus algorithm for L-bit messages that improves quadratically (in n) on the bit complexity of previous work when L is large. This is achieved by running consensus on chunks of the input message in combination with error-detection codes and the use of an “accusation” graph.  In terms of practical utility, Guanfeng (seated next to me in this session) mentioned that the benefit of larger messages in consensus is that they ameliorate the overhead one would incur from using other protocols that handle a single bit at a time. From a performance perspective, this facet of the problem, in combination with their improved round complexity, would obviously yield higher network throughput.

Particularly interesting was David Ferrucci’s presentation of the work he and his team did at IBM on developing the JEOPARDY!-winning system “Watson”.  At first glance, this challenge may seem solvable through brute force. Why not categorize and store the top questions (and their possible permutations) in a format that allows standardized queries? Then parse asked questions correctly, throw enough computational power at the setup in order to get it to work in game-show time, and voila! Unfortunately, this approach immediately encounters problems – binning the questions by topic yields a heavy-tailed distribution. Therefore, storing the most frequently-asked questions would leave Watson grasping at straws for much of the show. Over the course of an hour, Ferrucci managed to convey the massive scope of the problem and IBM’s solution which spans the areas of natural language processing, machine learning, algorithmics, massively parallel architectures and more.

Something closer to home was Calvin Newport’s afternoon talk “Structuring Unreliable Radio Networks”, which is joint work with Keren Censor-Hillel, Seth Gilbert, Fabian Kuhn and Nancy Lynch.  The talk centered around a model of wireless sensor networks where the communication graph consists of both reliable and unreliable links. Here, each node has access to information about the reliability of its links to neighbors via a detector. In practice, this information is obtained through algorithms running low in the protocol stack.  Given this setting, the authors studied the problem of building a connected constant-degree dominating set (CCDS), which can provide an efficient communications backbone in a sensor network. For a detector that never misclassifies links, the authors show that CCDS can be solved in roughly O(polylog n) rounds given reasonable bounds on the maximum degree (in terms of reliable links) and the message size. But of particular interest is the lower bound: if the detector misclassifies even *one* link, CCDS requires Omega(D) rounds, clearly separating this class of problems from the more optimistic setting where sure knowledge of reliable links yields polylogarithmic behaviour.

The theory-oriented literature (a lot of it from PODC) reports on lower bounds in the context of wireless sensor networks. An interesting question was asked at the end of Calvin’s talk regarding how the lower bound in the paper plays out in the real-world.  Calvin’s response, and one that I asked him more about later on, was that current sensor network deployments are typically small in size; consequently, the costs implied by such a lower bound result (and other lower bound results regarding efficiency) are likely not an issue.  However, as systems grow in size, practitioners may see such results manifest – although, which lower bounds will be applicable remains to be seen.

Along similar lines, during one of the coffee breaks, I had the opportunity to talk with Christian Scheideler about how theoreticians have been prompted by practitioners to address more realistic models. Contending analytically with the kind of wireless interference that occurs in real-world deployments can be messy. One approach is to work with the SINR model; however, certain issues, such as addressing the RSSI threshold in the context of receiver-side collision detection, seem difficult to resolve satisfyingly.  As another example, in terms of jamming attacks, going back a few years we see many papers that address oblivious adversaries who make their jamming decisions independently of the good parties. However, more challenging reactive adversaries have been demonstrated and, consequently, recent work by the theoretical community tends to address this attack model.  In any event, if Calvin’s prediction bears out, the implications for large wireless sensor networks may be somewhat gloomy – depending on which lower bounds hold true – but, as a silver lining, it would also validate some of the efforts made by the theoretical community.

PODC (day 3)
July 30, 2010, 10:43 pm
Filed under: Uncategorized | Tags: , ,

Too many good talks on this last day of PODC to do them all justice.  Just a smattering of what I found interesting:

  • “Deterministic Distributed Vertex Coloring in Polylogarithmic Time” by Barenboim and Elkin.  This is the other vertex coloring paper at PODC (with which we shared the best paper award).  The main result is a deterministic algorithm that runs in polylog time and uses only O(Delta^(1+epsilon)) colors.  In fact, in polylog time, O(alpha^(1+epsilon)) colors are possible, where alpha is the arboricity of the graph.  I won’t go into technical details of the talk here, but I do want to point to a nice primer on distributed vertex coloring with applications.  It’s neat that after the question of improving the number of colors needed for polylog vertex coloring had been open for a quarter of a century, two papers at this PODC gave significant improvements over the old O(Delta^2) result.
  • “Optimal Gradient Clock Synchronization in Dynamic Networks” by Kuhn, Lenzen, Locher and Oshman studies clock synchronization in networks where communication links can appear and disappear at any time, rate of hardware clocks can vary arbitrarily, and estimates a node can get of the clock time of another node are inherently inaccurate.  They are able to output a logical clock for each node such that the logical clocks of any two nodes are not too far apart, and nodes that remain close to each other in the network for a long time are better synchronized than other nodes.  I know very little about this area, but I definitely enjoyed the talk and am particularly intrigued by this follow-up paper to appear in FOCS ’10.  It shows how to use the PODC result to perform arbitrary computation over dynamic networks in which the network topology changes from round to round, provided that the network is T-interval connected.  “A network is T-interval connected if for every T consecutive rounds, there exists a stable connected spanning subgraph.  For T=1, the graph is connected in every round, but can change arbitrarily between rounds.”
  • “How to Meet when you Forget: Log-Space Rendezvous in Arbitrary Graphs” by Czyzowicz, Kosowski and Pelc.  This paper shows how deterministic agents with only log space can either 1) rendezvous in a graph or 2) determine that the graph is constructed in such a way that rendezvous is not possible.  The assumption is that the nodes of the graph are indistinguishable, and the agents, on visiting a node, only become aware of the immediate neighbors of that node.  It took me a while to realize that rendezvous may be impossible; this occurs when the graph is symmetric to the degree that any two deterministic agents will chase each other around indefinitely (for example, I think a cycle will cause this), and is an artifact of the lack of randomness for symmetry breaking.  The paper makes use of results from the famous paper by Reingold on “Undirected Connectivity in Log-space”.
  • “Breaking the O(n^2) Bit Barrier: Scalable Byzantine agreement with an Adaptive Adversary” by King and Saia.  I’m biased of course but I think Valerie gave a great talk on our paper!  Slides are here.

Thanks to everyone for a great PODC.  See you next year in San Jose!

PODC 2010
July 27, 2010, 5:39 am
Filed under: Uncategorized | Tags: , ,

Greetings from Zurich and the morning of the second day of PODC!  Went for a short hike when I arrived on Sunday in the mountains above the city: meadows, mountain views, cowbells, great chocolate for a snack, lots and lots of very healthy looking blond people – a complete Swiss experience!  Pictured on the right is the Swiss National museum next to lake Zurich which abuts the city.

Yesterday Hagit Attiya gave an invited talk on transactional memory.  The tone of the talk was pessimistic about the benefits of transactional memory, arguing that it has significant theoretical and practical limitations, and that it weakens either consistency or progress guarantees (or simplicity).  The talk called for a post-transactional-memory era in which we should use techniques like “mini-transactions” that don’t over-promise to the programmers who are facing the difficult challenge of programming in a parallel environment.

Two other talks I enjoyed:

  • “Partial Information Spreading with Application to Distributed Maximum Coverage” by Keren Censor-Hillel and Hadas Shachnai.  This talk introduced a nice generalization of the property of graph conductance and then showed how this generalization could be useful for approximating the maximum coverage problem in a distributed setting.  I like the generalization of conductance, called weak conductance, that was presented, since I think many real-world networks may tend to have high weak conductance even though they have low conductance.
  • “Adaptive Randomized Mutual Exclusion in Sub-Logarithmic Time” by Danny Hendler and Phillip Woelfel.  A very nice talk covering many mathematical details of what seems like a subtle proof of correctness for a randomized mutual exclusion algorithm.  I know very little about mutual exclusion but felt the talk gave me a good flavor of the mathematical techniques used in the area.

Quick business meeting summary: brief announcements are good, they boost attendance; attendance is up at the last couple of PODCs; to rebuttal or not to rebuttal?; PODC 2011 will be in San Jose, CA; PODC 2012 will be in Madeira, Portugal, known for its bananas, sweet wine and warm waters.