Filed under: Uncategorized | Tags: DISC, distributed computing, PODC, theory
From Jennifer Welch:
I’m honored to take over from Idit Keidar as editor of the SIGACT News Distributed Computing Column. As in the past, the December column is devoted to reporting on notable happenings in the community during the year:
– “2013 Dijkstra Prize on Distributed Computing”
– “2013 Principles of Distributed Computing Doctoral Dissertation Award”
– “PODC 2013 Review” by Nicolas Braud-Santoni
– “DISC 2013 Review” by Shahar Timnat
– “Bremen Workshop Review” by Ami Paz
An archive of the column is available online at:
— Jennifer Welch
Filed under: Uncategorized | Tags: distributed computing, PODC, SPAA, theory
PODC 2012 accepted papers are out (and have been for a while). This looks like a great collection of papers, with many focusing on problems in distributed reliability/security. It’s also nice to see some of my former students represented (Maxwell Young and Amitabh Trehan). Some papers that caught my eye:
Adi Akavia, Shafi Goldwasser, and Carmit Hazay
Distributed Public Key Schemes Secure against Continual Leakage
Faster Randomized Consensus with an Oblivious Adversary
Binbin Chen, Haifeng Yu, Yuda Zhao, and Phillip Gibbons
The Cost of Fault-Tolerance in Multi-Party Communication Complexity
Seth Gilbert and Maxwell Young
Making Evildoers Pay: Resource-Competitive Broadcast in Sensor Networks
Alexander Jaffe, Thomas Moscibroda, and Siddhartha Sen
On the Price of Equivocation in Byzantine Agreement
Allen Clement, Flavio Junqueira, Aniket Kate, and Rodrigo Rodrigues
On the (Limited) Power of Non-Equivocation
Guanfeng Liang and Nitin Vaidya
Byzantine Broadcast in Point-to-Point Networks using Local Linear Coding
James Aspnes, Hagit Attiya, Keren Censor-Hillel, and Faith Ellen
Faster than Optimal Snapshots (for a While)
Stephan Holzer and Roger Wattenhofer
Optimal Distributed All Pairs Shortest Paths and Applications
Michael Backes, Fabian Bendun, and Aniket Kate
Brief Announcement: Distributed Cryptography using TrInc
Atish Das Sarma, Ashwin Lall, Danupon Nanongkai, and Amitabh Trehan
Brief Announcement: Maintaining Large Dense Subgraphs on Dynamic Networks
The following is a guest post by Mahnush Mohavedi. Mahnush is a first-year PhD student at UNM. She received her Master's degree from Amirkabir University of Technology in Iran, and I believe this was her first conference in the U.S.
On the third day of the PODC conference, there were several interesting talks, including:
- “Stability of P2P Communication Systems” by Ji Zhu and Bruce Hajek. This paper discussed the missing piece syndrome, in which one piece becomes very rare in a network like BitTorrent, leading to instability in the system. They calculate the minimum amount of help needed from the seed nodes in an information dissemination game in order to stabilize the system and ensure that all nodes receive the file.
- “Tight Bounds on Information Dissemination in Sparse Mobile Networks” by Pettarin et al. studies the dynamics of information dissemination between k agents performing independent random walks on an n-node grid. They assume communication is determined by a dynamic communication graph process G_t, where two agents are connected in G_t iff their distance at time t is within a specified broadcast radius. They assume that a rumor travels completely throughout a connected component of G_t during round t. They study the broadcast time T_B of the system, which is the time it takes for all agents to learn the rumor. They focus on the sparse case for the broadcast radius, where, whp, all components of G_t have size O(log n). Their main result is that T_B is soft-Theta(n/sqrt(k)). In particular, in this sparse case, the broadcast time does not depend on the broadcast radius.
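The setting above is easy to play with in simulation. Below is a minimal sketch (all names and parameters are my own, not from the paper) that runs k lazy random walkers on a side x side grid and spreads a rumor between agents within a given L1 broadcast radius. For simplicity it spreads the rumor one hop per round rather than through entire connected components of G_t, so it only approximates the paper's model.

```python
import random

def broadcast_time(side, k, radius, seed=0):
    """Rounds until all k walkers on a side x side grid know the rumor.

    Illustrative sketch only: the rumor spreads one hop per round
    between agents within L1 distance `radius`, a simplification of
    the paper's component-wide spreading in G_t.
    """
    rng = random.Random(seed)
    pos = [(rng.randrange(side), rng.randrange(side)) for _ in range(k)]
    informed = [i == 0 for i in range(k)]  # agent 0 starts with the rumor
    t = 0
    while not all(informed):
        # each agent takes one lazy random-walk step, clipped to the grid
        for i, (x, y) in enumerate(pos):
            dx, dy = rng.choice([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
            pos[i] = (min(max(x + dx, 0), side - 1),
                      min(max(y + dy, 0), side - 1))
        # uninformed agents near an informed one learn the rumor
        near = [i for i in range(k) if not informed[i] and any(
            informed[j] and
            abs(pos[i][0] - pos[j][0]) + abs(pos[i][1] - pos[j][1]) <= radius
            for j in range(k))]
        for i in near:
            informed[i] = True
        t += 1
    return t
```

Even this crude version exhibits the qualitative behavior: adding walkers shrinks the broadcast time much faster than growing the radius does.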
- “Rationality Authority for Provable Rational Behavior” by Dolev et al. considers games in which players are not fully rational or knowledgeable. They define a game-inventor agent that is able to find the best response of the game and present it to the players. The inventor is rational and may gain revenue from the game. Thus, they introduce verifiers: trusted service providers that can verify the inventor's advice using formal methods. During dinner on Tuesday, I had a chance to talk to Elad, who presented the talk. He believes the separation of interests, benefits, and goals is the key idea of the work.
- “Distributed Computing with Rules of Thumb” by Jaggard et al. shows that a large and natural class of simple algorithms fails to guarantee convergence to an equilibrium in an asynchronous setting, even if the nodes and communication channels are reliably failure-free. In particular, they consider algorithms such as “best replying” to other players' actions and minimizing “regret”. They show that these algorithms fail to ensure convergence, in the asynchronous setting, for problems in routing, congestion control, social networks, and circuit design.
Several interesting talks today, including the following.
“Compact Policy Routing” by Rétvári et al. considered the problem of designing small routing tables when the optimization criterion is not necessarily path length. In particular, they note that Internet routing doesn't use shortest paths, but is instead policy routing, reflecting economic and political concerns. Their paper defines routing algebras and explores the compressibility of routing tables for different types of optimization functions. A main result of the paper shows that BGP policy routing essentially must give rise to very large routing tables.
“Locally Checkable Proofs” by Göös and Suomela classifies various graph problems according to their local proof complexity. For example, locally checking whether a graph is bipartite requires each node to hold a proof of just 1 bit (the node's color in a 2-coloring of the graph). In contrast, locally showing that a graph is *not* bipartite requires each node to hold a proof of size Theta(log n) (shown in the paper). The paper categorizes a dozen or so graph problems in terms of locally checkable proof sizes. Possibly interesting connections exist between this paper and the paper “Toward More Localized Local Algorithms” by Korman, Sereni, and Viennot from Day 1: the proofs from this paper could possibly be plugged in as the “pruning algorithms” required by Korman et al.
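To make the bipartiteness example concrete, here is a tiny sketch (function and variable names are mine, purely illustrative) of the local verification step: each node holds a 1-bit proof, namely its color, and checks only that it differs from each neighbor's color. Every local check passes iff the proofs witness a proper 2-coloring.

```python
def locally_verify_bipartite(adj, colour):
    """Local verification of a 1-bit-per-node bipartiteness proof.

    adj: adjacency dict {node: [neighbours]}; colour: dict {node: 0 or 1}.
    Each node's check is purely local (its own bit vs. its neighbours'),
    and all checks pass iff `colour` is a proper 2-colouring.
    """
    return all(colour[u] != colour[v] for u in adj for v in adj[u])
```

On an odd cycle no assignment of bits can pass every local check, which is exactly why certifying *non*-bipartiteness requires larger proofs.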
“Fault-Tolerant Spanners” by Dinitz and Krauthgamer builds spanners that are robust to the deletion of r nodes. Specifically, any algorithm that can create a k-spanner (k >= 3) with f(n) edges can be converted into one that creates a k-spanner with O(r^3 log n)*f(2n/r) edges that is “robust” to the deletion of r nodes. “Robust” here means that for any set of nodes F with |F| <= r, the spanner with F deleted is still a k-spanner of the original graph with F deleted. The algorithm is technically quite interesting, making use of a clever LP relaxation and the Lovász Local Lemma.
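The robustness definition can be checked directly, if expensively, on small graphs. The sketch below (my own illustrative code, exponential in r, and nothing like the paper's construction) verifies that a subgraph H is a k-spanner of G that survives the deletion of any set of at most r nodes.

```python
import collections
from itertools import combinations

def dists(adj, removed):
    # BFS all-pairs hop distances in the graph minus `removed`
    alive = [v for v in adj if v not in removed]
    d = {}
    for s in alive:
        d[s] = {s: 0}
        q = collections.deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in removed and v not in d[s]:
                    d[s][v] = d[s][u] + 1
                    q.append(v)
    return d

def is_r_robust_k_spanner(G, H, k, r):
    """Brute-force check of the paper's robustness definition:
    for every F with |F| <= r, H minus F must be a k-spanner of
    G minus F (stretch checked only for pairs connected in G - F)."""
    for size in range(r + 1):
        for F in combinations(list(G), size):
            dG, dH = dists(G, set(F)), dists(H, set(F))
            for u in dG:
                for v, duv in dG[u].items():
                    if dH[u].get(v, float('inf')) > k * duv:
                        return False
    return True
```

A star inside a clique illustrates the definition: it is a fine 3-spanner of K4 with no deletions, but deleting the hub destroys the spanner property, so it is not robust even to r = 1.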
“The Round Complexity of Distributed Sorting” by Patt-Shamir and Teplitsky considers a fully connected network where, in each round, each node can send an O(log n)-bit message to every other node (this is the CONGEST model with diameter 1). They first show that sorting in this model, when each node has at most n items, can be done in O(log log n) rounds, and that selection can be done in O(1) rounds. Then, using a concurrent result by Lenzen and Wattenhofer on routing in the same model (appearing in STOC), they further reduce sorting to O(1) rounds. The interaction between this paper and the result by Lenzen and Wattenhofer is neat, and the model itself is interesting (sort of diametrically opposed to LOCAL) and seems very powerful.
A quick final mention of two papers presented today that I was a co-author on. Varsha Dani gave a nice talk on our “Scalable Rational Secret Sharing” paper, and Maxwell Young gave a nice talk on our “Conflict on a Communication Channel” paper.
FCRC is happening right now at the San Jose conference center. The following is a guest post by Maxwell Young on Day 1 of PODC.
A nice opening talk was given by Rotem Oshman on the paper “Coordinated Consensus in Dynamic Networks”, which is joint work with Fabian Kuhn and Yoram Moses. Many consensus models assume that direct communication is possible between any pair of nodes (i.e., a clique communication graph G). Furthermore, the graph topology is typically assumed to be static. Here, the authors studied a more general setting where G is connected, but not necessarily a clique, and the topology can change over time to reflect unreliable links and connectivity failures. Eventual, simultaneous, and Delta-coordinated consensus are all examined, with Rotem spending more time discussing lower bounds for the last two variants. Indeed, when it comes to simultaneous consensus, the news is bad: the authors show a lower bound of n-1 rounds. Relaxing this to Delta-coordinated consensus does not provide much relief; here, *in the worst case*, the round complexity is n-Delta-1. However, under certain (still fairly general) circumstances, one can do much better: for instance, in the “clear-majority” case where the fraction of identical inputs is bounded away from 1/2. Rotem also provided an overview of an intricate argument involving a static line graph that yields a lower bound similar in form to the clear-majority result, although not quite tight with the upper bound.
Another interesting morning talk was given by Guanfeng Liang on the topic of achieving consensus in the presence of Byzantine faults (joint work with Nitin Vaidya). Of course, there has been an enormous amount of work done in this area. The main contribution by Liang and Vaidya is demonstrating an *error-free* consensus algorithm for L-bit messages that improves quadratically (in n) on the bit complexity of previous work when L is large. This is achieved by running consensus on chunks of the input message in combination with error-detection codes and the use of an “accusation” graph. In terms of practical utility, Guanfeng (seated next to me in this session) mentioned that the benefit of larger messages in consensus is that they ameliorate the overhead one would incur from using other protocols that handle a single bit at a time. From a performance perspective, this facet of the problem, in combination with their improved round complexity, would obviously yield higher network throughput.
Particularly interesting was David Ferrucci’s presentation of the work he and his team did at IBM on developing the JEOPARDY!-winning system “Watson”. At first glance, this challenge may seem solvable through brute force. Why not categorize and store the top questions (and their possible permutations) in a format that allows standardized queries? Then parse asked questions correctly, throw enough computational power at the setup in order to get it to work in game-show time, and voila! Unfortunately, this approach immediately encounters problems – binning the questions by topic yields a heavy-tailed distribution. Therefore, storing the most frequently-asked questions would leave Watson grasping at straws for much of the show. Over the course of an hour, Ferrucci managed to convey the massive scope of the problem and IBM’s solution which spans the areas of natural language processing, machine learning, algorithmics, massively parallel architectures and more.
Something closer to home was Calvin Newport's afternoon talk “Structuring Unreliable Radio Networks”, which is joint work with Keren Censor-Hillel, Seth Gilbert, Fabian Kuhn, and Nancy Lynch. The talk centered on a model of wireless sensor networks where the communication graph consists of both reliable and unreliable links. Here, each node has access to information about the reliability of its links to neighbors via a detector; in practice, this information is obtained through algorithms running low in the protocol stack. Given this setting, the authors studied the problem of building a connected constant-degree dominating set (CCDS), which can provide an efficient communications backbone in a sensor network. For a detector that never misclassifies links, the authors show that CCDS can be solved in O(polylog n) rounds, given reasonable bounds on the maximum degree (in terms of reliable links) and the message size. But of particular interest is the lower bound: if the detector misclassifies even *one* link, CCDS requires Omega(D) rounds, clearly separating this class of problems from the more optimistic setting where sure knowledge of reliable links yields polylogarithmic behaviour.
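For intuition about the object being built, here is a simple centralized greedy sketch of a connected dominating set (illustrative only: this is not the paper's distributed algorithm, and it gives no constant-degree guarantee). It grows a connected set outward, always adding the neighbor that dominates the most new nodes, assuming a connected input graph.

```python
def greedy_connected_dominating_set(adj, start):
    """Centralized greedy connected dominating set on a connected graph.

    Illustrative sketch only -- not the distributed CCDS algorithm from
    the paper, and with no bound on the degree of the resulting set.
    """
    cds = {start}
    dominated = {start} | set(adj[start])
    while len(dominated) < len(adj):
        # candidates that keep the set connected: neighbours of the set
        frontier = {v for u in cds for v in adj[u] if v not in cds}
        # greedily take the candidate that dominates the most new nodes
        best = max(frontier, key=lambda v: len(set(adj[v]) - dominated))
        cds.add(best)
        dominated |= {best} | set(adj[best])
    return cds
```

Every node then either belongs to the set or has a neighbor in it, which is what lets the set serve as a routing backbone: any message can be handed to the backbone in one hop.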
The theory-oriented literature (a lot of it from PODC) reports on lower bounds in the context of wireless sensor networks. An interesting question was asked at the end of Calvin's talk regarding how the lower bound in the paper plays out in the real world. Calvin's response, and one that I asked him more about later on, was that current sensor network deployments are typically small in size; consequently, the costs implied by such a lower bound result (and other lower bound results regarding efficiency) are likely not an issue. However, as systems grow in size, practitioners may see such results manifest, although which lower bounds will be applicable remains to be seen.
Along similar lines, during one of the coffee breaks, I had the opportunity to talk with Christian Scheideler about how theoreticians have been prompted by practitioners to address more realistic models. Contending analytically with the kind of wireless interference that occurs in real-world deployments can be messy. One approach is to work with the SINR model; however, certain issues, such as addressing the RSSI threshold in the context of receiver-side collision detection, seem difficult to resolve satisfactorily. As another example, in terms of jamming attacks, going back a few years we see many papers that address oblivious adversaries who make their jamming decisions independently of the good parties. However, more challenging reactive adversaries have been demonstrated and, consequently, recent work by the theoretical community tends to address this attack model. In any event, if Calvin's prediction bears out, the implications for large wireless sensor networks may be somewhat gloomy – depending on which lower bounds hold true – but, as a silver lining, it would also validate some of the efforts made by the theoretical community.
This year PODC has done 10 podcasts on papers that will appear in the upcoming conference. Each is a several-minute interview with an author of the paper. The interviews I've listened to are very well done, although definitely geared to a technical audience. Valerie King does a nice interview on our own upcoming paper “Conflict on a Communication Channel”.