Machinations


Christos Papadimitriou
July 31, 2009, 6:58 pm

The current issue of Computer Science Review is devoted to celebrating the research contributions of Papadimitriou [1].  The issue contains a nice mix of accolades and ideas.  On the accolades side, I was impressed by the degree to which Papadimitriou has helped create a vibrant theory community in his home country of Greece, which has produced more than its fair share of notable theory researchers.

On the ideas side, there are two survey papers of work connected to Papadimitriou that I particularly liked: one by Koutsoupias on the k-server problem and one by Kleinberg and Raghavan on internet structure, network routing and web information.  One of the very first dissertations I ever read was Koutsoupias’ dissertation (done under Papa.), which showed that the work function algorithm for the k-server problem has a competitive ratio of 2k-1, whereas the previous best known ratio was exponential in k.  This dissertation helped me fall in love with CS theory: beautiful math and ideas, a crisp problem statement, a clear understanding of when progress is made on a problem.  Plus, the entire dissertation is only 41 pages – that’s perhaps the best way to appreciate what a major result it was!

Bonus Papadimitriou link: Check out the following Papa. talk on why “CS is the new math”.

[1] Tip of the hat to journal editor-in-chief and friend Josep Diaz who air-mailed me the current issue of the journal.  A very enjoyable part of my sabbatical last year was spent in Barcelona at UPC working with Josep.



Windfall of Malice
July 27, 2009, 9:55 pm

A significant contribution of computer science to game theory is the concept of the price of anarchy.  This value measures how much selfish behavior can hurt the overall social welfare in a game.  It is defined by comparing the worst possible social welfare that can occur in a Nash equilibrium with the best social welfare achievable if the players were controlled by a benevolent dictator.  More specifically, the price of anarchy is just the ratio of these two quantities: the optimal social welfare achievable divided by the social welfare of the worst Nash equilibrium.
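In symbols, writing welfare(s) for the social welfare of a strategy profile s and Nash for the set of Nash equilibria of the game, this is just:

\[
\mathrm{PoA} \;=\; \frac{\max_{s}\,\mathrm{welfare}(s)}{\min_{s \in \mathrm{Nash}}\,\mathrm{welfare}(s)}
\]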

A huge number of papers over the past several years have computed the price of anarchy in various games; some games have a large price of anarchy and some have a small one.  After reading enough of these papers, it’s easy to get the feeling that “Everyone talks about the price of anarchy, but nobody does anything about it!”  After all, an obvious contribution that CS theory should be making to game theory is to propose algorithmic techniques that optimize social welfare.  A nice recent result in this direction is by Balcan, Blum and Mansour in SODA ’09: Improved Equilibria via Public Service Advertising.  They describe how efforts by a third party can tip the players in some games away from equilibria with poor social welfare and towards equilibria with better social welfare.

Another line of interesting work began with a paper by Moscibroda, Schmid, and Wattenhofer, who showed that for a particular virus inoculation game, the existence of malevolent players can paradoxically improve social welfare.  In particular, if the players in a game know that a small number of them are not selfish, but actually have worst-case utility functions, then the price of anarchy of this new game is actually better than if there were no malevolent players.   A follow-up paper by Babaioff, Kleinberg, and Papadimitriou showed that the same effect can occur in network congestion games, and this paper coined the phrase “windfall of malice” to describe the effect.

An interesting (and obvious) question is: “Is it possible to achieve the windfall of malice without the actual presence of adversarial players?”  Recently, Diaz, Mitsche, Rustagi and I have made our own small contribution to this area by showing that the answer is “Yes.”  Our approach is based on the concept of a mediator.  Informally, a mediator is a trusted third party that suggests actions to each player.  The players retain free will and can ignore the mediator’s suggestions.  The mediator proposes actions privately to each player, but the algorithm the mediator uses to decide what to propose is public knowledge.  The idea of using a mediator has been made much more appealing recently by a paper by Abraham, Dolev, Gonen and Halpern in PODC ’06.  They show that it is almost always possible to simulate a mediator, without the need for a trusted third party, through the technique of “cheap talk”, i.e., talk among the players that has no economic consequences.

There are three basic results we are able to show in our paper:

  • We introduce a general technique for designing mediators that is inspired by careful study of the “windfall of malice” effect.  In our approach, the mediator makes a random choice between two possible configurations, where a configuration is just a set of proposed actions for each player.  The first configuration is optimal: the mediator proposes a set of actions that achieves the social optimum (or very close to it).  The second configuration is “fear inducing”: the mediator proposes a set of actions that leads to catastrophic failure for those players who do not heed the mediator’s advice.  The purpose of the second configuration is to ensure that the players follow the advice of the mediator when the optimal configuration is chosen.  Thus, the random choice of configuration must be hidden from the players.  (A toy sketch of such a mediator appears after this list.)
  • We use our technique to design mediators for two games.  First, we design a mediator for the virus inoculation game proposed by Moscibroda et al. that achieves asymptotically optimal social welfare.  Second, we design a mediator for a variant of the El Farol game that improves the social welfare over the best Nash equilibrium.  Surprisingly, our technique works for the El Farol game even though this game does not have a windfall of malice.
  • We show the limits of our approach by proving an impossibility result: for a large class of games, no mediator can improve the social welfare over the best Nash equilibrium.  In particular, this impossibility result holds for the congestion games that Babaioff, Kleinberg and Papadimitriou show have a windfall of malice.  Thus, some games with a windfall of malice effect cannot be improved by the use of a mediator.
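Here is the toy sketch promised above, just to make the two-configuration idea concrete.  This is a minimal illustration in Python, not code from our paper: the configuration dictionaries, the parameter p_fear and the example actions are all my own placeholder choices, and in the actual constructions the configurations and probabilities are tailored carefully to each game.

```python
import random

def mediate(optimal_config, fear_config, p_fear=0.1):
    """Two-configuration mediator sketch.

    optimal_config, fear_config: dicts mapping each player to a proposed
    action.  The first achieves (near-)optimal social welfare; the second
    is designed so that a player who ignores the mediator's advice
    suffers catastrophically.  Because the coin flip below is hidden,
    a player contemplating a deviation must weigh in the chance that
    the fear-inducing configuration is the one in play.
    """
    config = fear_config if random.random() < p_fear else optimal_config
    # In a real protocol each player would privately receive only its
    # own entry of this dict, never the whole configuration.
    return dict(config)

# Hypothetical usage with two players and made-up actions:
advice = mediate({1: "abstain", 2: "inoculate"},
                 {1: "inoculate", 2: "inoculate"})
```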

Much of my own work has been on designing algorithms that foil adversarial agents (e.g. consensus algorithms).  It’s interesting and surprising to see that adversarial agents and actions can actually cause beneficial effects!



Small Worlds
July 17, 2009, 5:28 pm

Pierre Fraigniaud and George Giakkoupis from Université Paris Diderot have a really nice paper in the upcoming Principles of Distributed Computing (PODC) conference. Their paper, titled “The Effect of Power-Law Degrees on the Navigability of Small Worlds”, builds on the classic paper by Jon Kleinberg on navigating small-world networks.

Jon Kleinberg’s classic paper concerns navigation in a grid network where each node, in addition to its local edges, has one additional long-range edge. What Kleinberg showed is that, provided each node’s long-range link connects to a node at distance d with probability proportional to 1/d^2, a greedy routing algorithm will ensure that any node can reach any other node in the network within about log^2 n hops, where n is the number of nodes in the network [1]. Moreover, the exponent in this probability is pretty important: even a slight deviation from an exponent of 2 results in networks that cannot be efficiently navigated by greedy algorithms. In this way, Kleinberg was one of the first people to describe a type of network that might mimic the social network that allowed quick routing in Stanley Milgram’s famous six-degrees-of-separation experiments.
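To make the setup concrete, here is a small Python sketch of Kleinberg’s model (my own illustration, not code from either paper): an n-by-n grid where each node draws one long-range contact with probability proportional to 1/d^2, together with the greedy routing rule.

```python
import random

def dist(a, b):
    """Manhattan (lattice) distance between grid nodes a and b."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def build_long_range_links(n):
    """For each node of an n x n grid, pick one long-range contact,
    chosen with probability proportional to 1/d^2 (the harmonic
    distribution from Kleinberg's paper)."""
    nodes = [(i, j) for i in range(n) for j in range(n)]
    links = {}
    for u in nodes:
        others = [v for v in nodes if v != u]
        weights = [1.0 / dist(u, v) ** 2 for v in others]
        links[u] = random.choices(others, weights=weights)[0]
    return links

def greedy_route(n, links, s, t):
    """Greedily hop to whichever neighbor (grid or long-range) is
    closest to the target t; return the number of hops taken."""
    hops, cur = 0, s
    while cur != t:
        i, j = cur
        nbrs = [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]
        nbrs.append(links[cur])
        cur = min(nbrs, key=lambda v: dist(v, t))
        hops += 1
    return hops

# e.g. greedy_route(20, build_long_range_links(20), (0, 0), (19, 19))
# typically finishes in a number of hops on the order of log^2 n.
```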

So what about this new paper by Pierre and George? Well, for many years the exponent of 2 in log^2 has bothered people. Pierre and George show that it is possible to get rid of it with power laws. In particular, they show that if, instead of each node having exactly one long-distance link, the number of long-distance links per node follows a certain power-law distribution, then greedy routing works in about log n hops. A power-law distribution here means that the number of nodes with x long-distance links is proportional to x^(-k) for some fixed constant k. This is a so-called heavy-tailed distribution, which occurs in many natural complex systems. Surprisingly, Pierre and George show that the type of power-law distribution for which greedy routing works is when k is in the range between 2 and 3, which is very similar to the exponents one observes for degree distributions in many naturally occurring social networks.
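For intuition, here is one way to sample the number of long-range links a node gets under such a distribution (a sketch; the exponent k=2.5 and the cutoff x_max are my own placeholder choices, not values from the paper):

```python
import random

def sample_link_count(k=2.5, x_max=10_000):
    """Sample x, the number of long-range links for a node, with
    Pr[x] proportional to x^(-k) over x in {1, ..., x_max}.
    Exponents strictly between 2 and 3 give the heavy-tailed,
    navigable regime the paper identifies."""
    xs = list(range(1, x_max + 1))
    weights = [x ** -k for x in xs]
    return random.choices(xs, weights=weights)[0]
```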

As far as I know, this is one of the first papers that suggests a nice functional property of power-law distributions. In particular, it shows that power-law distributions are more powerful than other distributions for achieving a specific mathematical goal. Are there other algorithmic or mathematical problems that power-law distributions are “good” for? It looks like a very nice paper.

[1] I’m using the term “about” to mean O(log^2 n), i.e., roughly a function that grows like C log^2 n for some fixed constant C.



Random Sampling
July 17, 2009, 5:10 pm

A fundamental statistical operation over a network is to sample a node uniformly at random. For large networks like the internet or the Facebook social network, this is the only principled way to gain information about properties of the network such as node degree distributions, the fraction of nodes with a given property, the clustering coefficient, etc.

For the most part, current techniques for sampling a node uniformly at random are just heuristics. Maybe you take a random walk around the graph for a while and then throw in some tricks to try to correct for bias in the stationary distribution. However, I think there is the possibility of a great and important theory problem here. Increasingly, many interesting networks are huge and not completely known. Randomly sampling nodes is a great way to measure properties of such networks, but bias in the sampling can lead to all kinds of problems [1]. Here’s a stab at a problem: given an unknown graph and an arbitrary node (or two or more) in that graph, devise an algorithm to sample (close to) uniformly from the set of all nodes, when the only operation available is to move along edges of the graph. Of course, the graph would have to have “sufficiently nice” properties for this to work, e.g., ergodicity, good expansion, a good degree distribution, etc. Coupling theory would be a good candidate tool for this problem.
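One standard instance of the “random walk plus bias correction” idea is a Metropolis-Hastings walk, which makes the stationary distribution uniform over nodes by sometimes refusing to move toward higher-degree neighbors. A minimal sketch (how many steps are needed to mix is graph-dependent, and pinning that down is exactly where the theory problem above lives):

```python
import random

def mh_uniform_node(neighbors, start, steps=100_000):
    """Random walk whose stationary distribution is uniform over nodes.

    neighbors: dict mapping each node to its list of neighbors.
    From the current node u, propose a uniformly random neighbor v and
    accept the move with probability min(1, deg(u)/deg(v)); otherwise
    stay at u.  After enough steps to mix, the current node is a
    near-uniform sample from the whole graph."""
    cur = start
    for _ in range(steps):
        v = random.choice(neighbors[cur])
        if random.random() < min(1.0, len(neighbors[cur]) / len(neighbors[v])):
            cur = v
    return cur
```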

[1] For example, see this paper for how bias due to traceroutes can lead to erroneous prediction of a power-law degree distribution where one does not actually exist.

P.S. A few years ago, Valerie King and I wrote a paper on choosing a random node in a peer-to-peer network. It only worked for highly structured networks and so did not solve the problem described above. However, we came up with the following algorithmic tool that might be useful for the harder problem. Imagine we can broadcast a message from some node to all other nodes in the network once for free, and we want to minimize the number of nodes that need to reply to this broadcast. What is the minimum number of responses that need to be sent in order to select a node uniformly at random?

A naive approach is for every node to choose a number uniformly at random between 0 and 1; have the broadcaster send out a random number between 0 and 1; have nodes respond whose random numbers are closest to that of the broadcaster (mod 1); and choose the single responder that is closest. Surprisingly, this scheme has significant bias. Think about it this way: if you distribute n points uniformly at random on a circle with circumference 1, what would you expect the minimum distance between any pair of points to be?  Think it would be about 1/n?  Then you’d be wrong!  It’s actually much less: only about 1/n^2.  So how can you remove this bias? Check out the paper for the answer!
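In the meantime, you can check the 1/n^2 scaling with a quick Monte Carlo experiment (my own illustrative sketch, not from the paper):

```python
import random

def mean_min_gap(n, trials=2000):
    """Estimate the expected minimum gap between n points placed
    uniformly at random on a circle of circumference 1."""
    total = 0.0
    for _ in range(trials):
        pts = sorted(random.random() for _ in range(n))
        gaps = [b - a for a, b in zip(pts, pts[1:])]
        gaps.append(pts[0] + 1.0 - pts[-1])  # wrap-around gap
        total += min(gaps)
    return total / trials

# mean_min_gap(100) comes out near 1/100^2 = 0.0001, not near 1/100.
```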



Machinations
July 17, 2009, 4:48 pm

Welcome! I am Jared Saia, a professor of computer science at the University of New Mexico. This is my new blog, containing meandering thoughts on order, disorder and robustness in complex systems, with an eye towards connections with computer science theory, distributed computing and security.



Bertinoro
July 17, 2009, 3:31 pm

Last week, I attended the Algorithms and Data Structures (ADS) workshop in Bertinoro. Bertinoro is a small medieval town near Bologna, Italy that is perfect for a workshop. There’s a hilltop castle offering well-protected meeting spaces for talks, and a nearby nunnery offering living accommodations for participants, plus the restaurants in town are uniformly excellent.

The workshop had a great collection of talks, including several excellent talks on data structures, an area I don’t normally keep up on. Some that stood out for me:

  • Monika Henzinger, formerly head of research at Google and now at EPFL, surveyed recent results on the problem of choosing a random web page. This problem is rendered non-trivial by the fact that nobody knows the entire network of web pages and links, and so the only tool available for sampling is to walk from one page to another. More on this topic in a later post.
  • My friend Amos Fiat, of Tel Aviv University, gave a great talk on a connection between differential privacy and computational geometry. A coreset is a small subset of points that can be used to compute an approximate solution to a problem over a larger set of points. Amos introduced the concept of a private coreset, which essentially has low sensitivity to the replacement of a single point in the original point set.
  • Andrew Goldberg introduced the concept of the highway dimension of a graph to show why certain heuristics for speeding up shortest-path computations work so well on graphs related to road networks. Essentially, this is a number, like the tree-width of a graph, that measures the structural simplicity of the graph: a smaller highway dimension means a more algorithmically tractable graph. I really like results like these, which generate analytic techniques that move beyond worst-case analysis.

After the conference, Amos and I traveled to Rome, where we worked with Stefano Leonardi for several days on a problem related to auctions with budgets.  Unfortunately, I now have a mild case of gout from my culinary exploits in Rome and Bertinoro.