Machinations


SFI Summer School
June 24, 2011, 6:07 pm

If you don’t live in New Mexico [1], you’ve probably never heard of a mangonada.  Basically, it’s frozen mango pulp on a stick, served in a cup that contains lime juice, chili and salt.  You dip the mango popsicle in the cup as you eat it.  A mangonada is simultaneously too sweet, too salty, too spicy and too sour, but somehow, after a few licks, it starts to make sense.

At its best, the Santa Fe Institute is kind of like a mangonada.  It’s a place where people simultaneously do research in economics, physics, sociology, biology and computation.  The research agenda is ridiculously broad and ambitious, but sometimes it works out.

This summer, I had the pleasure of lecturing at the Santa Fe Institute Complex Systems Summer School – a program that brings together about 100 graduate students in many disciplines from all over the world.

This was a great chance for me to think about what is fundamental about distributed algorithms: what are the key ideas and problems that students in economics, math, biology, physics, and sociology need to know?  The links below give the material that I chose to talk about.  Basically, I spent the first lecture on “traditional” distributed computing problems, where we can tell each node exactly what algorithm to run.  Then I spent the second lecture on the harder models, where only some of the nodes run our algorithm, or worse, no node will run our algorithm unless doing so is in that node’s best interest.

The students seemed particularly engaged in the second lecture, and I was surprised by the level of interest in Byzantine agreement, which I almost didn’t include at first, since it seemed a bit esoteric.  I talked to several students after the lectures, including a biologist and a sociologist who were interested in connections between these types of problems and the problems faced by social insects and social groups.  I always find it very energizing to talk to bright young scholars, and I’m looking forward to next summer.

Here are my lectures:

 

Special thanks to Sriram Pemmaraju, who helped me out with the distributed graph coloring material, and to Roger Wattenhofer for his excellent class notes on distributed graph coloring.

[1] or old Mexico I assume 🙂



June SIGACT Distributed Computing Column is Out
June 20, 2011, 8:53 pm

From Idit Keidar:

The DC column in the June issue of SIGACT News features:
“Distributed computing meets game theory: combining insights from two fields”
by Ittai Abraham, Lorenzo Alvisi, and Joseph Y. Halpern

The column is online on the column’s archive, both at the Technion:

    http://www.ee.technion.ac.il/~idish/sigactNews

and at MIT:

    http://people.csail.mit.edu/idish/sigactNews

I’ll be happy to get suggestions for contributions to future columns.



PODC Day 3
June 13, 2011, 9:56 pm

The following is a guest post by Mahnush Movahedi.  Mahnush is a first-year PhD student at UNM.  She received her Master’s degree from Amirkabir University of Technology in Iran, and I believe this was her first conference in the U.S.

On the third day of the PODC conference, there were several interesting talks, including:

  • “Stability of P2P Communication Systems” by Ji Zhu and Bruce Hajek.  This paper discussed the missing piece syndrome, in which one piece becomes very rare in a network like BitTorrent, leading to instability in the system. They calculate the minimum amount of help needed from the seed nodes in an information dissemination game in order to stabilize the system and ensure that all nodes receive the file.
  • “Tight Bounds on Information Dissemination in Sparse Mobile Networks” by Pettarin et al. is a study of the dynamics of information dissemination between k agents performing independent random walks on an n-node grid. They assume communication is determined by a dynamic communication graph process G_t, where two agents are connected in G_t iff their distance at time t is within a specified broadcast radius.  They assume that a rumor travels completely throughout a connected component of G_t during round t.  They study the broadcast time, T_B, of the system, which is the time it takes for all agents to know the rumor.  They focus on the sparse case for the broadcast radius, where, whp, all connected components have size O(log n).  Their main result is that T_B is soft-Theta(n/sqrt(k)), i.e., Theta(n/sqrt(k)) up to polylogarithmic factors.  In particular, for this sparse case, the broadcast time does not depend on the broadcast radius.  (A toy simulation of this model appears just below.)
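
To make the model concrete, here is a small Python sketch of the random-walk dissemination process described above.  This is my own toy simulation, not code from the paper, and the grid size, number of agents and broadcast radius are parameters I picked purely for illustration.

import random
from itertools import combinations

# Toy simulation: k agents perform independent (lazy) random walks on a
# sqrt(n) x sqrt(n) grid.  In each round, the communication graph G_t
# connects agents within the broadcast radius, and the rumor floods every
# connected component that contains an informed agent.
def broadcast_time(n=400, k=20, radius=1.0, seed=0):
    rng = random.Random(seed)
    side = int(n ** 0.5)
    pos = [(rng.randrange(side), rng.randrange(side)) for _ in range(k)]
    informed = {0}                      # agent 0 starts with the rumor
    t = 0
    while len(informed) < k:
        t += 1
        # Each agent takes one step (or stays put) on the grid.
        for i, (x, y) in enumerate(pos):
            dx, dy = rng.choice([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
            pos[i] = (min(max(x + dx, 0), side - 1),
                      min(max(y + dy, 0), side - 1))
        # Build G_t and flood the rumor within informed components.
        adj = {i: [] for i in range(k)}
        for i, j in combinations(range(k), 2):
            if ((pos[i][0] - pos[j][0]) ** 2
                    + (pos[i][1] - pos[j][1]) ** 2) <= radius ** 2:
                adj[i].append(j)
                adj[j].append(i)
        frontier = list(informed)
        while frontier:
            u = frontier.pop()
            for v in adj[u]:
                if v not in informed:
                    informed.add(v)
                    frontier.append(v)
    return t

print(broadcast_time())                 # rounds until all k agents know the rumor
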
There were also some interesting brief announcements:
  • “Rationality Authority for Provable Rational Behavior” by Dolev et al. considers games in which players are not fully rational or sophisticated. They define a game inventor agent that can compute the best responses of the game and present them to the players. The inventor is rational and may gain revenue from the game, so they introduce verifiers: trusted service providers that can check the inventor’s advice using formal methods. During dinner on Tuesday, I had a chance to talk to Elad, who presented the talk. He believes the separation of interests, benefits, and goals is the key idea of the work.
  • “Distributed Computing with Rules of Thumb” by Jaggard et al. shows that a large and natural class of simple algorithms fails to guarantee convergence to an equilibrium in an asynchronous setting, even if the nodes and communication channels are entirely failure-free.  In particular, they consider algorithms like “best replying” to other players’ actions and minimizing “regret”.  They show that these algorithms fail to ensure convergence, in the asynchronous setting, for problems in routing, congestion control, social networks and circuit design.  (A toy example of what non-convergence of best-response dynamics looks like appears just after this list.)
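
As a cartoon of what non-convergence of best-response dynamics looks like, here is a toy Python example of my own.  It is not from the Jaggard et al. paper, and it is a much simpler phenomenon than their result (which concerns asynchronous schedules in systems that do have stable states): two players in a matching-pennies-style game take turns best-responding to each other’s last action, and the joint action profile cycles forever.

# Toy example (mine, not from the paper): player 0 wants to match player 1's
# action, while player 1 wants to mismatch.  Under a simple alternating
# schedule of best responses, the joint action profile never converges.
def best_response(player, other_action):
    return other_action if player == 0 else 1 - other_action

actions = [0, 0]
history = [tuple(actions)]
for step in range(12):
    mover = (step + 1) % 2              # players 1, 0, 1, 0, ... take turns
    actions[mover] = best_response(mover, actions[1 - mover])
    history.append(tuple(actions))

# Prints a cycle: (0, 0) -> (0, 1) -> (1, 1) -> (1, 0) -> (0, 0) -> ...
print(history)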


PODC Day 2
June 9, 2011, 12:02 am

Several interesting talks today, including the following:

“Compact Policy Routing” by Retvari et al. considered the problem of designing small routing tables when the optimization criterion is not necessarily path length. In particular, they note that Internet routing doesn’t use shortest paths, but instead is policy routing: it makes use of economic and political concerns. Their paper defines routing algebras and explores the compressibility of routing tables for different types of optimization functions. A main result of the paper shows that BGP policy routing essentially must give rise to very large routing tables.

“Locally Checkable Proofs” by Goos and Suomela classifies various graph problems according to their local proof complexity. For example, locally checking that a graph is bipartite requires each node to hold a proof of just 1 bit (the node’s color in a 2-coloring of the graph). In contrast, locally showing that a graph is *not* bipartite requires each node to hold a proof of size Theta(log n) (shown in the paper). The paper categorizes a dozen or so graph problems in terms of their locally checkable proof sizes. Possibly interesting connections exist between this paper and “Toward More Localized Local Algorithms” by Korman, Sereni and Viennot from Day 1: the proofs from this paper could possibly be plugged in as the “pruning algorithms” required by Korman et al.
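
To give the flavor of a locally checkable proof, here is a small Python sketch of the 1-bit bipartiteness certificate mentioned above.  This is my own illustration of the idea, not code from the paper: the prover labels each node with its side of a 2-coloring, and each node accepts iff all of its neighbors carry the opposite label; the graph is bipartite iff some labeling makes every node accept.

# 1-bit locally checkable proof of bipartiteness (my own sketch).
def node_accepts(v, label, adj):
    # Node v's local check: all of its neighbors hold the opposite bit.
    return all(label[u] != label[v] for u in adj[v])

def all_accept(label, adj):
    return all(node_accepts(v, label, adj) for v in adj)

# A 4-cycle (bipartite) with a valid 2-coloring as the proof:
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(all_accept({0: 0, 1: 1, 2: 0, 3: 1}, cycle))          # True

# A triangle (not bipartite): no 1-bit labeling makes every node accept.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(any(all_accept({0: a, 1: b, 2: c}, triangle)
          for a in (0, 1) for b in (0, 1) for c in (0, 1)))  # False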

“Fault-tolerant spanners” by Dinitz and Krauthgamer builds spanners that are robust to the deletion of r nodes. Specifically, any algorithm that can create a k-spanner (k >= 3) with f(n) edges can be converted into one that creates a k-spanner with O(r^3 log n) * f(2n/r) edges that is “robust” to the deletion of r nodes. “Robust” here means that for any set F of nodes with |F| <= r, the spanner with F deleted is still a k-spanner of the original graph with F deleted. The algorithm is technically quite interesting, making use of a clever LP relaxation and the Lovasz Local Lemma.
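
To pin down the robustness definition in code, here is a brute-force Python check I wrote.  It is only a restatement of the definition, not the Dinitz-Krauthgamer construction: H is an r-fault-tolerant k-spanner of G if, for every node set F with |F| <= r, distances in H minus F are at most k times the corresponding distances in G minus F.

from collections import deque
from itertools import combinations

def bfs_dists(src, adj, removed):
    # Hop distances from src in the graph with the 'removed' nodes deleted.
    d, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in removed and v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def is_spanner(G, H, k, removed=frozenset()):
    # G and H are adjacency dicts over the same node set; H's edges are a
    # subset of G's.  Check the k-spanner property after deleting 'removed'.
    for s in G:
        if s in removed:
            continue
        dG, dH = bfs_dists(s, G, removed), bfs_dists(s, H, removed)
        if any(dH.get(t, float("inf")) > k * dist for t, dist in dG.items()):
            return False
    return True

def is_fault_tolerant_spanner(G, H, k, r):
    nodes = list(G)
    return all(is_spanner(G, H, k, frozenset(F))
               for size in range(r + 1)
               for F in combinations(nodes, size))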

“The Round Complexity of Distributed Sorting” by Patt-Shamir and Teplitsky considers a fully connected network where, in each round, each node can send an O(log n)-bit message to every other node (this is the CONGEST model with diameter 1). They first show that sorting in this model, when each node has at most n items, can be done in O(log log n) rounds, and that selection can be done in O(1) rounds. Then, using a concurrent result by Lenzen and Wattenhofer on routing in the same model (appearing in STOC), they further reduce sorting to O(1) rounds. The interaction between this paper and the result by Lenzen and Wattenhofer is neat, and the model itself is interesting (sort of diametrically opposed to LOCAL) and seems very powerful.

A quick final mention of two papers presented today on which I was a co-author: Varsha Dani gave a nice talk on our “Scalable Rational Secret Sharing” paper, and Maxwell Young gave a nice talk on our “Conflict on a Communication Channel” paper.



PODC Day 1
June 7, 2011, 2:44 pm


FCRC is happening right now at the San Jose Convention Center.  The following is a guest post by Maxwell Young on Day 1 of PODC.

A nice opening talk was given by Rotem Oshman on the paper “Coordinated Consensus in Dynamic Networks”, which is joint work with Fabian Kuhn and Yoram Moses.  Many consensus models assume that direct communication is possible between any pair of nodes (i.e., a clique communication graph G). Furthermore, the graph topology is typically assumed to be static. Here, the authors studied a more general setting where G is connected, but not necessarily a clique, and the topology can change over time to reflect unreliable links and connectivity failures. Eventual, simultaneous, and Delta-coordinated consensus are all examined, with Rotem spending more time discussing lower bounds for the last two variants. When it comes to simultaneous consensus, the news is bad: the authors show a lower bound of n-1 rounds. Relaxing this to Delta-coordinated consensus does not provide much relief; here, *in the worst case*, the round complexity is n-Delta-1.  However, under certain (still fairly general) circumstances, one can do much better; for instance, in the “clear-majority” case where the fraction of identical inputs is bounded away from 1/2. Rotem also provided an overview of an intricate argument involving a static line graph that yields a lower bound of a similar form to the clear-majority result, although not quite tight with the upper bound.
Another interesting morning talk was given by Guanfeng Liang on the topic of achieving consensus in the presence of Byzantine faults (joint work with Nitin Vaidya). Of course, there has been an enormous amount of work done in this area. The main contribution by Liang and Vaidya is demonstrating an *error-free* consensus algorithm for L-bit messages that improves quadratically (in n) on the bit complexity of previous work when L is large. This is achieved by running consensus on chunks of the input message, in combination with error-detection codes and the use of an “accusation” graph.  In terms of practical utility, Guanfeng (seated next to me in this session) mentioned that the benefit of larger messages in consensus is that they amortize the overhead one would incur from using other protocols that handle a single bit at a time. From a performance perspective, this, in combination with their improved bit complexity, would yield higher network throughput.

Particularly interesting was David Ferrucci’s presentation of the work he and his team did at IBM on developing the JEOPARDY!-winning system “Watson”.  At first glance, this challenge may seem solvable through brute force. Why not categorize and store the top questions (and their possible permutations) in a format that allows standardized queries? Then parse incoming questions correctly, throw enough computational power at the setup to get it to work in game-show time, and voila! Unfortunately, this approach immediately encounters problems: binning the questions by topic yields a heavy-tailed distribution. Therefore, storing the most frequently asked questions would leave Watson grasping at straws for much of the show. Over the course of an hour, Ferrucci managed to convey the massive scope of the problem and of IBM’s solution, which spans the areas of natural language processing, machine learning, algorithmics, massively parallel architectures and more.

Something closer to home was Calvin Newport’s afternoon talk “Structuring Unreliable Radio Networks”, which is joint work with Keren Censor-Hillel, Seth Gilbert, Fabian Kuhn and Nancy Lynch.  The talk centered on a model of wireless sensor networks where the communication graph consists of both reliable and unreliable links. Here, each node has access to information about the reliability of its links to neighbors via a detector. In practice, this information is obtained through algorithms running low in the protocol stack.  Given this setting, the authors studied the problem of building a connected constant-degree dominating set (CCDS), which can provide an efficient communications backbone in a sensor network. For a detector that never misclassifies links, the authors show that CCDS can be solved in roughly O(polylog n) rounds, given reasonable bounds on the maximum degree (in terms of reliable links) and the message size. But of particular interest is the lower bound: if the detector misclassifies even *one* link, CCDS requires Omega(D) rounds, clearly separating this class of problems from the more optimistic setting where sure knowledge of reliable links yields polylogarithmic behaviour.
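
To unpack the object being constructed, here is a definition-level Python check of my own (it describes the structure, not the paper's algorithm): a set S is a connected bounded-degree dominating set of a graph, given as an adjacency dict over reliable links, if every node is in S or adjacent to S, every node of S has only a constant number of neighbors inside S, and the subgraph induced on S is connected.

def is_ccds(S, adj, max_degree):
    # adj: adjacency dict over reliable links; S: candidate backbone set.
    S = set(S)
    # 1. Domination: every node is in S or has a neighbor in S.
    if not all(v in S or any(u in S for u in adj[v]) for v in adj):
        return False
    # 2. Bounded degree within S.
    if any(sum(1 for u in adj[v] if u in S) > max_degree for v in S):
        return False
    # 3. Connectivity of the subgraph induced on S.
    if not S:
        return len(adj) == 0
    seen, stack = set(), [next(iter(S))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in adj[v] if u in S)
    return seen == S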

The theory-oriented literature (a lot of it from PODC) reports on lower bounds in the context of wireless sensor networks. An interesting question was asked at the end of Calvin’s talk regarding how the lower bound in the paper plays out in the real world.  Calvin’s response, which I asked him more about later on, was that current sensor network deployments are typically small; consequently, the costs implied by such a lower bound result (and other lower bounds regarding efficiency) are likely not an issue.  However, as systems grow in size, practitioners may see such results manifest, although which lower bounds will be applicable remains to be seen.

Along similar lines, during one of the coffee breaks, I had the opportunity to talk with Christian Scheideler about how theoreticians have been prompted by practitioners to address more realistic models. Contending analytically with the kind of wireless interference that occurs in real-world deployments can be messy. One approach is to work with the SINR model; however, certain issues, such as addressing the RSSI threshold in the context of receiver-side collision detection, seem difficult to resolve satisfactorily.  As another example, in terms of jamming attacks, going back a few years we see many papers that address oblivious adversaries, who make their jamming decisions independently of the good parties. However, more challenging reactive adversaries have been demonstrated and, consequently, recent work by the theoretical community tends to address this attack model.  In any event, if Calvin’s prediction bears out, the implications for large wireless sensor networks may be somewhat gloomy (depending on which lower bounds hold true), but, as a silver lining, it would also validate some of the efforts made by the theoretical community.
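
For readers who have not seen it, the standard SINR reception rule (stated here in its usual textbook form; the notation is generic and not taken from any particular paper at the conference) says that a transmission from node u is successfully decoded at node v iff

    P_u * d(u,v)^(-alpha) / ( N + sum over other concurrent transmitters w of P_w * d(w,v)^(-alpha) )  >=  beta

where P_u is u’s transmission power, d(.,.) is distance, alpha is the path-loss exponent, N is the ambient noise, and beta is a hardware-dependent threshold.  Part of what makes analysis in this model messy is the sum in the denominator: every concurrent transmitter in the network contributes interference, so whether a message is received is not a purely local event.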