Machinations


Leslie Lamport Wins Turing Award
March 28, 2014, 10:40 pm
Filed under: Uncategorized

Long overdue (both the award and my reporting of it here)



Sneak Preview

A sneak preview of the September SIGACT News Distributed Computing Column is now available here (Technion) and here (MIT).  Before I get into the newest column, I would just like to give a shout out to Idit Keidar, who I think has done an exceptional job of maintaining and advertising the SIGACT Distributed Computing column over the past few years.  Archived copies of past columns are on the above web sites – check them out.

The column for September, by Lidong Zhou, lead researcher at Microsoft Research Asia, is about the tension between theoretical and practical research in distributed systems.  In fact, a major focus of the article (and a timely one for this blog) is practical ways to solve consensus in order to ensure reliability in large-scale distributed systems.  There are several interesting things I learned (or relearned) from this column, including:

  • The replicated state machine approach based on consensus is considered by industry (well, at least one good researcher at Microsoft) to be directly applicable to the problem of cloud computing.
  • Algorithmic simplicity and graceful degradation of security guarantees are two properties desired by practitioners that currently seem to be ignored by theoreticians.
  • One of the reasons for the popularity of the Paxos algorithm for solving consensus is its flexibility: it captures critical parts of the consensus problem but leaves “non-essential” details such as the exact protocol for choosing a leader unspecified.
  • Lidong mentions that much of the inspiration for the column came from his “responsibility for the distributed systems behind various on-line services, such as Hotmail and the Bing search engine.”  I’m curious if this means that algorithms for consensus, such as Paxos, are at the heart of the robustness mechanisms for such systems.  If so, then the consensus problem is even more pervasive than I thought.


Consensus

Consensus[1] is arguably one of the most fundamental problems in distributed computing.  The basic idea of the problem is: n players each vote on one of two choices, and the players want to hold an election to decide which choice “wins”.  The problem is made more interesting by the fact that there is no leader, i.e. no single player who everyone knows is trustworthy and can be counted on to tabulate all the votes accurately.

Here’s a more precise statement of the problem.  Each of n players starts with either a 0 or a 1 as input.  A certain fraction of the players (say 1/3) are bad, and will collude to thwart the remaining good players.  Unfortunately, the good players have no idea who the bad players are.  Our goal is to create an algorithm for the good players that ensures that

  • All good players output the same bit in the end
  • The bit output by all good players is the same as the input bit of at least one good player

At first, the second condition may seem ridiculously easy to satisfy and completely useless.  In fact, it’s ridiculously hard to satisfy and extremely useful.  To show the problem is hard, consider a naive algorithm where all players send out their bits to each other and then each player outputs the bit that it received from the majority of other players.  This algorithm fails because the bad guys can send different messages to different people.  In particular, when the vote is close, the bad guys can make one good guy commit to the bit 0 and another good guy commit to the bit 1.
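To make this failure concrete, here is a minimal sketch in Python; the player counts and the bad player’s split-the-vote strategy below are purely illustrative.

    def naive_majority(n_good_zeros, n_good_ones, n_bad):
        """Simulate the naive protocol: each good player tallies the honest
        input bits plus whatever each bad player chose to send to *that*
        particular good player, and outputs the majority."""
        good_inputs = [0] * n_good_zeros + [1] * n_good_ones
        decisions = []
        for i in range(len(good_inputs)):
            # Bad players equivocate: they push even-numbered good players
            # toward 0 and odd-numbered good players toward 1.
            bad_bits = [0 if i % 2 == 0 else 1] * n_bad
            received = good_inputs + bad_bits
            decisions.append(1 if 2 * sum(received) > len(received) else 0)
        return decisions

    # Four good players split 2-2, plus one bad player: the bad player's
    # vote tips each good player a different way, so agreement fails.
    print(naive_majority(2, 2, 1))   # prints [0, 1, 0, 1]

The point is that a single equivocating player is already enough to break the naive protocol whenever the honest vote is close.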

The next thing I want to talk about is how useful this problem is.  First, the problem is broadly useful in the engineering sense of helping us build tools.  If you can solve consensus, you can solve the problem of how to build a system that is more robust than any of its components.  Think about it: each component votes on the outcome of a computation, and then with consensus it’s easy to determine which outcome is decided by a plurality of the components (e.g. first everyone sends out their votes, then everyone calculates the majority vote, then everyone solves the consensus problem to commit to a majority vote).  It’s not surprising then that solutions to the consensus problem have been used in all kinds of systems ranging from flight control, to databases, to peer-to-peer; Google uses consensus in the Chubby system; Microsoft uses consensus in Farsite; most structured peer-to-peer systems (e.g. Tapestry, Oceanstore) use consensus in some way or another.  Any distributed system built with any pretension towards robustness that is not using consensus probably should be.
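Here is a rough sketch of that pattern, with the consensus protocol itself left as a black box; the function solve_consensus below is a stand-in for whatever protocol you actually run, not a real library call.

    from collections import Counter

    def solve_consensus(proposals):
        """Placeholder for a real fault-tolerant consensus protocol
        (e.g. Paxos); here we just collapse all proposals to one value."""
        return Counter(proposals).most_common(1)[0][0]

    def replicated_compute(replica_outputs):
        # Step 1: every replica broadcasts its locally computed result.
        # Step 2: every replica tallies the plurality of what it received.
        local_views = [Counter(replica_outputs).most_common(1)[0][0]
                       for _ in replica_outputs]
        # Step 3: consensus turns those (possibly differing) local views
        # into a single value that every correct replica commits to.
        return solve_consensus(local_views)

    # Two replicas agree on 42, one faulty replica returns garbage.
    print(replicated_compute([42, 42, -1]))   # prints 42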

But that’s just the engineering side of things.  Consensus is useful because it allows us to study synchronization in complex systems.  How can systems like birds, bees, bacteria, and markets come to a decision even when there is no leader?  We know they do it, and that they do it robustly, but exactly how do they do it, and what is the trade-off they pay between robustness and the time and communication costs of doing it?  Studying upper and lower bounds on the consensus problem gives insight into these natural systems.  The study of how these agreement-building processes, or quorum sensing, occur in nature has become quite popular lately, since they occur so pervasively.

Consensus is also useful because it helps us study fundamental properties of computation.  One of the first major results on consensus, due to Fischer, Lynch and Paterson in 1983, was that consensus is impossible for any deterministic algorithm with even one bad player (in the asynchronous communication model).  However, a follow-up paper by Ben-Or showed that with a randomized algorithm it was possible to solve this problem even with a constant fraction of bad players, albeit in exponential time.  This was a fundamental result giving some idea of how useful randomization can be for computation under an adversarial model.  As a grad student, I remember taking a class with Paul Beame, who told us how impressed he was by what these two results said about the power of randomness when they first came out.  Cryptography was also shown to be useful for circumventing the Fischer, Lynch and Paterson result, and I’ve heard of several prominent cryptographers who were first drawn to that area at the time because of its usefulness in solving consensus.

In the next week or two, I’ll go into some of the details of recent results on this problem that make use of randomness and cryptography.  Early randomized algorithms for consensus like Ben-Or’s used very clever tricks, but no heavy-duty mathematical machinery.  More recent results, which run in polynomial time, make use of more modern tools like the probabilistic method, expanders, extractors, samplers and connections with error-correcting codes, along with assorted cryptographic tricks.  I’ve been involved in the work using randomness, so I’ll probably start there.
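To give a flavor of what those early algorithms look like, here is a heavily simplified sketch of the round structure of a Ben-Or-style protocol for the benign (crash-fault) case with n > 2t.  This is just the flavor of the idea, not a faithful transcription of Ben-Or’s paper, and it omits the message-passing machinery entirely.

    import random

    def make_proposal(reports, n):
        """Phase 1 of a round: after collecting at least n - t report bits,
        propose v only if a strict majority of all n players reported v;
        otherwise propose None ('no preference')."""
        for v in (0, 1):
            if reports.count(v) * 2 > n:
                return v
        return None

    def process_proposals(proposals, t):
        """Phase 2 of a round: decide v if at least t + 1 players proposed v,
        adopt v if anyone proposed v, otherwise flip a coin.
        Returns (new_value, decided), with decided = None if undecided."""
        for v in (0, 1):
            if proposals.count(v) >= t + 1:
                return v, v
        for v in (0, 1):
            if v in proposals:
                return v, None
        return random.randint(0, 1), None

The coin flips are what eventually break the symmetry between the two values; with purely local coins this can take exponentially many rounds in the worst case, which is exactly the gap the more recent polynomial-time results close.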

[1] The consensus problem is also frequently referred to as the Byzantine agreement problem, or simply agreement.  I prefer the name consensus, since it is more succinct and descriptive.  While the research community has not yet reached a “consensus” on a single name for this problem, in recent years the name consensus has been used most frequently.



Day 1 PODC ’09
August 11, 2009, 4:11 pm
Filed under: Uncategorized

Editor’s Note: This is the first day of reports from PODC ’09 by my student Amitabh Trehan.  Principles of Distributed Computing (PODC) is one of the premier conferences focusing on the theory of distributed computing.

An interesting talk in the morning was the presentation of the best student paper, “Max Registers, Counters and Monotone Circuits” (Aspnes, Attiya and Censor), presented by Keren Censor.  The final message of the talk was “Lower bounds do not always have the final say” (quoted from the talk).  The paper deals with the implementation of concurrent data structures in shared memory systems, where n processes communicate by reading and writing to shared multi-reader multi-writer registers.  A lower bound given by Jayanti et al. shows that these operations take Omega(n) space and Omega(n) time steps in the worst case for many common data structures.  On careful analysis of the lower bound proof, the authors realised that they could develop sub-linear algorithms for many useful applications where the worst case would not occur.  These are algorithms where the number of operations is bounded, e.g. for applications that have a limited lifetime or that can switch among several instances of the data structure.  They go on to show how some of these structures can be constructed with some nice use of recursion and my favorite data structures – trees!  This reminded me of a discussion I had the previous evening with Victor Luchangco about Nancy Lynch.  He contended that one trait contributing to the success of her work (and of her students) was the rigorous questioning of every assumption and definition of interest they came across.  A good lesson!
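To give a rough idea of the tree-based recursion (as I understand it from the talk; see the paper for the real, concurrent construction and its analysis), here is a sketch of a bounded max register built from two half-size max registers and a one-bit switch, written sequentially in Python just to show the structure.

    class MaxRegister:
        def __init__(self, m):
            self.m = m                       # holds values in {0, ..., m-1}
            if m > 1:
                self.half = (m + 1) // 2
                self.switch = 0              # 0 = current max lives on the left
                self.left = MaxRegister(self.half)
                self.right = MaxRegister(m - self.half)

        def write(self, v):
            if self.m == 1:
                return                       # base case: only value is 0
            if v < self.half:
                if self.switch == 0:         # small writes are ignored once a
                    self.left.write(v)       # large value has been written
            else:
                self.right.write(v - self.half)
                self.switch = 1              # set the switch *after* the write

        def read(self):
            if self.m == 1:
                return 0
            if self.switch == 1:
                return self.half + self.right.read()
            return self.left.read()

    r = MaxRegister(8)
    for v in (3, 6, 2):
        r.write(v)
    print(r.read())                          # prints 6, the max written so far

A read only ever follows one root-to-leaf path, so its cost grows with the logarithm of the value range rather than with n, which is where the sub-linear bounds come from (modulo the concurrent details handled in the paper).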

Robert Van Renesse’s keynote talk, “Refining the Way to Consensus”, set the stage for later talks on consensus.  His talk described several refinements to basic consensus algorithms, from simple to non-blocking consensus, to election consensus, to voters consensus, and finally to recommender consensus.  He stressed the importance of exposing undergraduates to the consensus problem, because this is a problem they are likely to see again in the real world.  What drew chuckles was a cartoon contrasting the ‘ideal’ programmer, a faceless human stuck to the computer screen and keyboard, coding strictly to the specifications, with the ‘intuitive’ programmer, a happy guy with his back to the computer, smiling, with no idea what the specifications he was implementing were but knowing when he was done with his coding job.  Editor’s Note: Look for more information on the consensus problem in a blog post in the next week or two.

There were two excellent “game theoretic” talks in the afternoon.  Georgios Piliouras presented his paper with Robert Kleinberg and Eva Tardos, “Load Balancing Without Regret in the Bulletin Board Model”.  They consider load balancing games in the bulletin board model (players can find out the delay on all machines, but have no information on what their delay would have been had they selected another machine) and show solutions using regret-minimization that are exponentially better than the correlated equilibrium.  No-regret algorithms are an example of alternative solution concepts (addressing weaknesses of Nash equilibria) based on the average outcome of self-adapting agents who react to each other’s strategies in repeated play of the game.  Martin Hoefer presented his paper with Ackermann, Berenbrink and Fischer, “Concurrent Imitation Dynamics in Congestion Games”, which discusses the dynamics that emerge when agents sample and possibly imitate other agents’ strategies when doing so will improve their utility.  Their main result shows that this imitation strategy leads to rapid convergence (logarithmic in the number of players) to approximate equilibria for congestion games.  An approximate equilibrium is one where only a small fraction of the players have latency much better or worse than the average latency.
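As a toy illustration of the flavor of such imitation dynamics (the specific sampling rule and migration probability below are simplified guesses, not the protocol analysed in the paper), here is a small simulation on identical machines.

    import random
    from collections import Counter

    def imitation_round(assignment):
        """One concurrent round: every player samples a random other player
        and imitates that player's machine with probability proportional to
        the latency improvement (latency here is just the machine's load)."""
        loads = Counter(assignment)
        new_assignment = assignment[:]
        for i, my_machine in enumerate(assignment):
            other = random.choice(assignment)
            gain = loads[my_machine] - loads[other]
            if gain > 0 and random.random() < gain / loads[my_machine]:
                new_assignment[i] = other
        return new_assignment

    random.seed(0)
    players, machines = 200, 10
    state = [random.randrange(machines) for _ in range(players)]
    for _ in range(20):
        state = imitation_round(state)
    print(sorted(Counter(state).values()))   # loads end up close to 20 each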