Machinations


PODC Day 1
June 7, 2011, 2:44 pm
Filed under: Uncategorized

FCRC is happening right now at the San Jose Convention Center. The following is a guest post by Maxwell Young on Day 1 of PODC.

A nice opening talk was given by Rotem Oshman on the paper “Coordinated Consensus in Dynamic Networks”, which is joint work with Fabian Kuhn and Yoram Moses. Many consensus models assume that direct communication is possible between any pair of nodes (i.e., a clique communication graph G). Furthermore, the graph topology is typically assumed to be static. Here, the authors study a more general setting where G is connected, but not necessarily a clique, and the topology can change over time to reflect unreliable links and connectivity failures. Eventual, simultaneous, and Delta-coordinated consensus are all examined, with Rotem spending more time on lower bounds for the last two variants. Indeed, when it comes to simultaneous consensus, the news is bad: the authors show a lower bound of n-1 rounds. Relaxing this to Delta-coordinated consensus does not provide much relief; here, *in the worst case*, the round complexity is n-Delta-1. However, under certain (still fairly general) circumstances, one can do much better; for instance, in the “clear-majority” case, where the fraction of identical inputs is bounded away from 1/2. Rotem also gave an overview of an intricate argument involving a static line graph that yields a lower bound of similar form to the clear-majority result, although not quite tight with the upper bound.
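
To give a sense of the dynamic-graph model, here is a toy flood-and-decide simulation (my own sketch, with made-up names, not the authors' protocol): in each round an adversary picks a fresh connected graph on the n nodes, every node forwards the set of inputs it has heard of to its current neighbors, and after n-1 rounds every node can safely output, say, the minimum input, since connectivity in every round guarantees that each input reaches at least one new node per round until all nodes have it.

```python
import random

def simulate(inputs, rounds):
    """Flooding in a dynamic network: one adversarially chosen
    connected graph per round (here, a random path)."""
    n = len(inputs)
    known = [{inputs[i]} for i in range(n)]  # values node i has heard of
    for _ in range(rounds):
        # Adversary's choice of topology: a random path (always connected).
        order = random.sample(range(n), n)
        nbrs = [set() for _ in range(n)]
        for u, v in zip(order, order[1:]):
            nbrs[u].add(v)
            nbrs[v].add(u)
        # All messages within a round are exchanged simultaneously.
        known = [set.union(known[u], *(known[v] for v in nbrs[u]))
                 for u in range(n)]
    return [min(k) for k in known]

# With n = 5 inputs, n - 1 = 4 rounds suffice: every node outputs 1.
print(simulate([3, 1, 4, 1, 5], rounds=4))
```
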
Another interesting morning talk was given by Guanfeng Liang on achieving consensus in the presence of Byzantine faults (joint work with Nitin Vaidya). Of course, an enormous amount of work has been done in this area. The main contribution of Liang and Vaidya is an *error-free* consensus algorithm for L-bit messages that improves quadratically (in n) on the bit complexity of previous work when L is large. This is achieved by running consensus on chunks of the input message, in combination with error-detection codes and the use of an “accusation” graph. In terms of practical utility, Guanfeng (seated next to me in this session) mentioned that larger messages amortize the overhead one would incur by running a single-bit protocol once per bit. From a performance perspective, this facet of the problem, in combination with their improved bit complexity, would obviously yield higher network throughput.
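
The chunking idea is easy to illustrate (a minimal sketch of the flavour only; the actual algorithm is considerably more involved, and the chunk size and checksum below are my own choices): split the L-bit message into pieces, attach an error-detection code to each, and fall back to heavier machinery (in the paper, updating the accusation graph) only when a check fails.

```python
import zlib

CHUNK_BYTES = 64  # illustrative chunk size, not taken from the paper

def encode(message: bytes):
    """Split a long message into (chunk, checksum) pairs."""
    chunks = [message[i:i + CHUNK_BYTES]
              for i in range(0, len(message), CHUNK_BYTES)]
    return [(c, zlib.crc32(c)) for c in chunks]

def failed_chunks(pairs):
    """Indices of chunks whose checksums do not verify; in the real
    protocol a failure here would trigger the expensive fallback path."""
    return [i for i, (c, tag) in enumerate(pairs) if zlib.crc32(c) != tag]

pairs = encode(b"x" * 1000)
pairs[3] = (b"corrupted!", pairs[3][1])  # a faulty node tampers with chunk 3
print(failed_chunks(pairs))              # -> [3]
```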

Particularly interesting was David Ferrucci’s presentation of the work he and his team did at IBM on developing the JEOPARDY!-winning system “Watson”. At first glance, this challenge may seem solvable through brute force. Why not categorize and store the top questions (and their possible permutations) in a format that allows standardized queries? Then parse incoming questions correctly, throw enough computational power at the setup to make it run in game-show time, and voilà! Unfortunately, this approach immediately encounters problems: binning the questions by topic yields a heavy-tailed distribution, so storing only the most frequently asked questions would leave Watson grasping at straws for much of the show. Over the course of an hour, Ferrucci managed to convey the massive scope of the problem and of IBM’s solution, which spans natural language processing, machine learning, algorithmics, massively parallel architectures, and more.

Something closer to home was Calvin Newport’s afternoon talk “Structuring Unreliable Radio Networks”, which is joint work with Keren Censor-Hillel, Seth Gilbert, Fabian Kuhn, and Nancy Lynch. The talk centered on a model of wireless sensor networks where the communication graph consists of both reliable and unreliable links. Here, each node has access to information about the reliability of its links to neighbors via a detector; in practice, this information is obtained through algorithms running low in the protocol stack. Given this setting, the authors study the problem of building a connected constant-degree dominating set (CCDS), which can provide an efficient communications backbone in a sensor network. For a detector that never misclassifies links, the authors show that CCDS can be solved in O(polylog n) rounds given reasonable bounds on the maximum degree (in terms of reliable links) and the message size. But of particular interest is the lower bound: if the detector misclassifies even *one* link, CCDS requires Omega(D) rounds, clearly separating this setting from the more optimistic one where sure knowledge of reliable links yields polylogarithmic behaviour.
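
As a concrete picture of the object being built, here is a greedy connected dominating set computed over the reliable links only (a centralized, sequential sketch with names of my own choosing; the paper's algorithm is distributed, and also keeps the backbone's degree constant, which this sketch does not):

```python
def greedy_cds(reliable_nbrs):
    """Grow a connected dominating set over the reliable-link graph,
    greedily adding the frontier node that covers the most new nodes."""
    nodes = set(reliable_nbrs)
    start = min(nodes)
    cds, covered = {start}, {start} | reliable_nbrs[start]
    while covered != nodes:
        # Frontier: nodes adjacent to the current backbone (keeps it connected).
        frontier = {v for u in cds for v in reliable_nbrs[u]} - cds
        best = max(frontier, key=lambda v: len(reliable_nbrs[v] - covered))
        cds.add(best)
        covered |= reliable_nbrs[best] | {best}
    return cds

# Reliable links forming a line 0-1-2-3-4-5 (unreliable links omitted).
nbrs = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(sorted(greedy_cds(nbrs)))  # -> [0, 1, 2, 3, 4]
```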

The theory-oriented literature (much of it from PODC) reports lower bounds in the context of wireless sensor networks. An interesting question asked at the end of Calvin’s talk was how the lower bound in the paper plays out in the real world. Calvin’s response, and one that I asked him more about later on, was that current sensor network deployments are typically small; consequently, the costs implied by such a lower bound (and other lower bound results regarding efficiency) are likely not an issue. However, as systems grow in size, practitioners may see such results manifest – although which lower bounds will apply remains to be seen.

Along similar lines, during one of the coffee breaks I had the opportunity to talk with Christian Scheideler about how theoreticians have been prompted by practitioners to address more realistic models. Contending analytically with the kind of wireless interference that occurs in real-world deployments can be messy. One approach is to work with the SINR model (see the reception condition sketched below); however, certain issues, such as handling the RSSI threshold in the context of receiver-side collision detection, seem difficult to resolve satisfactorily. As another example, consider jamming attacks: going back a few years, many papers addressed oblivious adversaries, who make their jamming decisions independently of the good parties. However, more challenging reactive adversaries have since been demonstrated, and recent work by the theoretical community consequently tends to address this attack model. In any event, if Calvin’s prediction bears out, the implications for large wireless sensor networks may be somewhat gloomy, depending on which lower bounds hold true; as a silver lining, though, it would also validate some of the efforts made by the theoretical community.
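
For readers who have not seen it, the SINR reception rule is the standard textbook formulation (not tied to any particular paper above): a transmission from u is decoded at v exactly when u's received signal strength beats the ambient noise plus the summed interference of all concurrent transmitters by a hardware-dependent threshold.

```latex
% Standard SINR reception condition (textbook form):
% S = set of concurrent transmitters, P_w = transmission power of w,
% d(.,.) = distance, alpha = path-loss exponent, N = ambient noise,
% beta = decoding threshold.
\[
  \mathrm{SINR}(u,v) \;=\;
  \frac{P_u \, / \, d(u,v)^{\alpha}}
       {N + \sum_{w \in S \setminus \{u\}} P_w \, / \, d(w,v)^{\alpha}}
  \;\geq\; \beta
\]
```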
