September 29, 2009, 9:26 pm
Filed under: Uncategorized

The following parody of Jorge Luis Borges is from McSweeney's, an excellent literary/humor website.  To appreciate the humor you need to have read at least some Borges, which you absolutely should do anyway if you're interested in literature with a mathematical bent.  Let me recommend starting with the short story The Library of Babel.  There's an entire book based on this short story, called The Unimaginable Mathematics of Borges' Library of Babel.


Those who write of self-defense inevitably fail to celebrate the mysterious and the multiple. These scribes imagine safety to be a litany of banal proscriptions: do not go out alone in the evening, avoid eye contact with strangers, carry always in your mind an image of Shakespeare riding a lamed horse. Such writers commit a deplorable error, for to place one’s hope for safety in changeless things is the fallacy of realism and the unspoken goal of pepper-spray manufacturers. Instead, one correctly perceives safety in the fabrication of new and undreamt paths, paths engendered by the laborious placement of one’s feet somewhere in the world, in any world, of the many divergent and proliferating worlds in which one may be harassed or assaulted. From one’s stance may spring any and all actions. To slouch or lean, then; to stand less than heroically—these things usurp the power of the omnipotent stance. Wearing high heels does not help, either. Flats are more stable and better for one’s ankles.

Let us enumerate, then, the multiplicity of places where the human form is vulnerable: The eyes. The soft tissue of the throat. The nose and the glabrous upper lip. The fingers, astoundingly fragile and exposed. The groin. The absurd knee, notoriously false. The toes. Note also the curious paradox of these targets’ coexistence with the myriad uncounted bodily weapons: The elbow. The fingernails. The front of the first two knuckles of the right hand. The heel of the foot. The arcing curve of the back of the skull. The base of the palm. The teeth, if one is not overly squeamish. The possible combinations of these weapons against the human frailties are numberless.

Now we shall role-play one or two.

Imagine that in the year 1934 you are returning from your friend Carolus Ernst’s house, where you have been engaged in a fierce and discordant dispute concerning a corrupt translation of Herodotus from a Persian manuscript. Suddenly, near some undocumented alleyway, a tall, sharp-featured stranger with gray eyes approaches and seizes you by the left wrist. Time stops, revealing its irrefutable triviality. Expanding outward from this attack are illimitable, inexhaustible responses.

Here is a practical three-part strategy that I recommend: You will strike this attacker in the throat with the hardest weapon available to you. Perhaps it is an elbow. Perhaps it is an 1847 edition of The Encyclopedia of Celestial Navigation. Perhaps it is a small, strange bird of infinite plumage, or a monstrous iron implement once used to prod the sacred tortoise in the savage jungles of Uln. Perhaps it is all these things. It depends, ultimately, on your inclination and talents, and also on the size of the purse one carries. Decidedly, you must also raise an alarm. As you strike, you voice words of opprobrium; you issue a polemic against the imprisonment of wrists; you invent a language spoken only by those who struggle for the free movement of their left arm. One might say, for example, “No,” or “Let go of me,” or, conceivably, “Look at the moon, rising above the pyramids.”

After striking, you will rotate your seized hand radially about the axis of your attacker’s forearm, effecting the negation of his grip by virtue of the thumb’s fallibility. The attacker, stunned by your articulation, and rendered breathless by the crushing of his larynx, steps back from you as if seeing 500 red-tinged angels descending an ebony staircase. Now run.

Very good. I reverently applaud your efforts.

Conceivably, one does not wish to strike the assailant. This may be due to an excess of compassion, or to impenetrable melancholy, or to obscure religious commitments. In the mirrors of possibility, many reactions are reflected. Other corridors of action extend, any one of which you may travel. You may choose, instead of fighting, to pray, to declare yourself invisible, to recite the 13 lamentations of Ramón Beckjord, to feign idiocy, to utter mystical predictions concerning nameless men in three anonymous cities. All responses are valid. It is a good idea to practice them at home when you can; this builds confidence.

I had planned to end our session today with a parable about a tiger and a nightingale, but I see we are out of time.

Consensus (II): Feige's Leader Election Protocol
September 21, 2009, 6:36 pm
Filed under: Uncategorized

“Whenever a man has cast a longing eye on offices, a rottenness begins in his conduct.” ~Thomas Jefferson

I want to continue my occasional series about consensus today by talking about leader election.  Actually, it's going to be the problem of "committee election", or as it is referred to more fancily, universe reduction.  The problem is basically this: you have a collection of n people, some hidden fraction of which are bad, and you want to elect a small committee of these people such that the fraction of bad people in the committee is not much more than the fraction of bad people in the general population.  The problem is complicated by the fact that there is no trusted outside party that can be used to, e.g., count up all the votes.  Instead, the problem needs to be solved just through direct communication among the players.  We'll assume everyone has synchronized clocks and, moreover, that there is a way to broadcast a message to everyone (i.e. a consensus algorithm).

We'll be interested in the very challenging case where the bad guys are computationally unbounded, i.e. no cryptography!  Why do we torture ourselves like this?  Well, there are a few reasons.  First, the intellectual interest in determining exactly what is and is not possible without cryptographic assumptions.  Second, the practical benefit of avoiding some of the computational overhead associated with cryptography, e.g. setting up a PKI.  Finally, the desire to avoid assumptions about the "hardness" of certain problems, assumptions that may in the end wind up not being completely correct.

In FOCS 1999, Uri Feige proposed a very clever solution to the committee election problem in his paper "Non-cryptographic Selection Protocols".  I'm going to simplify the algorithm a bit to make it blog-friendly.  So here's the setup: we have n people, a hidden fraction of whom are bad, and we want to elect a committee such that the fraction of bad people in the committee is not much more than the fraction of bad people in the general population.  Here's the algorithm: we put everyone up on a ledge and place a certain number of large bins under the ledge.  Each person then must jump into a bin.  We tell the good guys to choose their bins independently and uniformly at random.  Then, once everyone has jumped (we give them a fixed amount of time to do so), we take the bin with the smallest number of people in it and select those people as the winners.  The rest of the bins are carted off to the politician recycling center.

Why should this work?  We have no control over what the bad guys do, and moreover we assume they are omniscient and can secretly collude.  Thus, we can expect the bad guys to hem and haw on the ledge until all the good guys have jumped, and then carefully decide amongst themselves which bins to jump into in order to try to throw the election.  Except it turns out that there's really nothing much they can do.  Assume there are x bins and n people, and that t of those people are bad.  Then the expected number of good people jumping into each bin is (n-t)/x.  Moreover, if x is chosen carefully, we can show that with high probability, the number of good people in every bin is closely concentrated around this expected value[1].  But then what is the worst the bad people can do?  By pigeonhole, the chosen (lightest) bin will certainly have no more than n/x people in it, while the concentration bound guarantees that it contains at least about (1-epsilon)(n-t)/x good people.  Thus, the fraction of good people in the chosen bin is at least about (1-epsilon)(n-t)/n, which is very close to the fraction of good people in the general population.

So that's the idea.  Cool, no?  Much of the technical challenge of Feige's paper is showing how you can extend this idea to choose a single leader, so that this leader is bad with probability at most a fixed constant bounded below 1.  The paper is a good read so I won't reveal any of the details of how to do this.  Also, I want to point out that Feige's algorithm is a critical tool in many modern consensus algorithms.  But doesn't his algorithm require a consensus algorithm in order to announce each person's bin choice?  Yes; the trick is to use his algorithm among small groups, where consensus is not that costly, and then to create a kind of tournament network that eventually elects a very small representative group after running many election protocols in parallel.

[1] For example, let t = n/3 and x = n/(c log n) for a sufficiently large constant c (depending on epsilon).  Then Chernoff bounds and a union bound show that with probability going to 1 as n gets large, the number of good players in every bin is between (1-epsilon) and (1+epsilon) times the expectation (2c/3)log n.
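To see the mechanism in action, here is a small simulation of the simplified lightest-bin protocol.  The function names, parameters, and the two adversary strategies below are my own illustration, not from Feige's paper, and I use fewer, larger bins than n/(log n) so the finite-n committees aren't tiny and noisy:

```python
import math
import random

def elect(n_good, n_bad, bad_strategy, seed=0):
    """One round of the simplified 'lightest bin' election.

    Good players pick bins uniformly at random; the adversary sees all
    of their choices first (the worst case) and then places the bad
    players however it likes via `bad_strategy`.  The least-occupied
    bin becomes the committee.  Returns the committee's bad fraction.
    """
    rng = random.Random(seed)
    n = n_good + n_bad
    x = max(2, round(n / (8 * math.log(n))))   # number of bins
    good = [0] * x                             # good players per bin
    for _ in range(n_good):
        good[rng.randrange(x)] += 1
    bad = bad_strategy(good, n_bad, rng)       # bad players per bin
    winner = min(range(x), key=lambda i: good[i] + bad[i])
    return bad[winner] / (good[winner] + bad[winner])

def flood_lightest(good, n_bad, rng):
    """Naive collusion: every bad player piles into the lightest bin."""
    bad = [0] * len(good)
    bad[min(range(len(good)), key=good.__getitem__)] = n_bad
    return bad

def play_honest(good, n_bad, rng):
    """Bad players behave like good ones: uniformly random bins."""
    bad = [0] * len(good)
    for _ in range(n_bad):
        bad[rng.randrange(len(good))] += 1
    return bad

# 30000 players, one third of them bad.
print(elect(20000, 10000, flood_lightest))  # 0.0: the flooded bin becomes the heaviest
avg = sum(elect(20000, 10000, play_honest, seed=s) for s in range(10)) / 10
print(round(avg, 2))                        # close to the population fraction, 1/3
```

Note how the obvious collusion (flooding one bin) backfires completely: the flooded bin is guaranteed not to be the lightest, so the elected committee contains no bad players at all.  Feige's analysis shows that no adversary strategy does much better asymptotically.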

WINE 2009
September 11, 2009, 10:00 pm
Filed under: Uncategorized

As reported here, accepted papers for WINE 2009 (Workshop on Internet & Network Economics) are now up on the workshop web page.  My paper, Fear in Mediation: Exploiting the Windfall of Malice, which I'd previously blogged about here, was accepted, so I'm looking forward to taking another trip out to Rome.  Amazingly, this will be my 3rd trip to Rome this year.  Clearly I need to stop throwing so many coins into the Trevi fountain!

Calling Bluffs and Naming Names: Myths about the Internet Topology

Tanya Berger-Wolf sent me a brave little paper recently: Mathematics and the Internet: A Source of Enormous Confusion and Great Potential by Willinger, Alderson and Doyle.  This paper does a post-mortem on the popular belief that empirical studies show that the Internet is scale-free.  Spoiler: they don't.

Willinger et al. trace this erroneous belief starting with a paper by Pansiot and Grad (who are by no means responsible), which collected data on node degrees "to get some experimental data on the shape of multicast trees one can actually obtain in [the real] Internet".  This data was collected using traceroutes, which was reasonable given that its purpose was to determine the shape of multicast trees.  In 1999, Faloutsos, Faloutsos and Faloutsos used the data from Pansiot and Grad for a purpose for which it was not intended: they used it to infer information about the degree distribution of the Internet's router-level topology.  Next, Barabasi et al., who were studying the WWW graph and observing scale-free distributions there, picked up the reports of scale-free distributions for the Internet and added it to their list of networks with such properties.

A paper by Barabasi and Albert soon followed, describing the preferential attachment model of network growth, wherein nodes are added one at a time to a network; each node has a constant number of out-links; and the destinations of these out-links are selected from the current nodes with probability proportional to each current node's number of incoming links.  Barabasi and Albert showed that this model generates a network with a scale-free degree distribution, and posited preferential attachment as a model of the growth of the Internet and the WWW network.  The preferential attachment model had actually been studied over the previous 75 years or so by Yule, Luria and Delbruck, and Simon, but the rediscovery by Albert and Barabasi, and the possible connection to modern networks, generated a huge amount of excitement and a large number of follow-up papers.  In particular, a slew of follow-up papers specifically studied properties of preferential attachment (PA) networks, i.e. networks generated via the preferential attachment model of growth.  It was shown that PA networks are very robust to random deletions, but very fragile to adversarial deletions.  One commonly accepted conclusion was that the Internet, like PA networks, had high-degree nodes that were very centrally located and whose removal would easily disconnect the network.  This idea was actually celebrated as a nice implication from theoretical network models to the real world.
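For concreteness, here is a sketch of the growth rule just described.  The parameters are my own choices, and the repeated-endpoint list is just the standard trick for degree-proportional sampling, not code from any of the cited papers:

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph one node at a time; each new node links to m
    distinct existing nodes, chosen with probability proportional to
    their current degree.  Returns the list of final degrees."""
    rng = random.Random(seed)
    degree = [0] * n
    endpoints = []               # node i appears here degree[i] times
    for i in range(1, m + 1):    # seed graph: a small clique on m+1 nodes
        for j in range(i):
            degree[i] += 1; degree[j] += 1
            endpoints += [i, j]
    for v in range(m + 1, n):
        # Sampling uniformly from `endpoints` is exactly
        # degree-proportional sampling over existing nodes.
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(endpoints))
        for u in chosen:
            degree[u] += 1; degree[v] += 1
            endpoints += [u, v]
    return degree

deg = preferential_attachment(5000, 3)
print(min(deg), max(deg))  # min stays at m; max is a hub far above it
```

Running this makes the "rich get richer" effect visible: the minimum degree stays at m (the last node to arrive never receives a link), while the earliest nodes become hubs with degree orders of magnitude larger, which is exactly the heavy tail of the scale-free distribution.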

So what's the problem?  Basically, the research community made two huge mistakes; mistakes that some people in the community still have not recovered from (where "recovered from" means "become aware of")!

Mistake 1: Empirical studies do not support the claim that the Internet is scale-free

Willinger et al. go through several reasons why traceroutes do not create accurate maps of the Internet.  There are many technical details here that are hard for a theoretician like me to understand.  However, one detail that caught my eye was the fact that when a traceroute encounters an "opaque layer-2" cloud, "it falsely 'discovers' a high-degree node that is really a logical entity  … rather than a physical node".  This causes traceroutes to report high-degree nodes in the core of the router-level Internet, even when such nodes don't exist.  This type of bias is claimed to be even worse than the well-known "over-sampling of high-degree nodes" bias that traceroutes also have.

Mistake 2: Scale-free networks are not PA networks

PA networks are scale-free, but the converse is not true.  In particular, there are scale-free networks that are robust to adversarial deletions.  Intuitively, it's possible to have a scale-free network where the high-degree nodes are not at the "core" of the network.  More generally, it's possible to have a scale-free network with good expansion properties.  Such networks are robust to adversarial deletions.  In fact, Willinger et al. suggest that the Internet is more likely to have its high-degree nodes at the "periphery" of the network, since each router can only handle a certain amount of traffic, and more edges are only possible if each edge is handling less traffic.

What Next?

Here the Willinger paper loses steam.  They suggest a first-principles approach to Internet modeling, where the network formed is one that tries to optimize a given algorithmic problem over the network.  This is great.  What is not so great is the model they propose, which assumes that the network creation problem is solved in a completely centralized manner.  Much better would be if they had used game theory as a starting point.  After all, the Internet is inherently a massive, distributed enterprise.  There is actually some really cool work in algorithmic game theory on network creation games.  For example, see Chapter 19 of The Book, or this paper by Fabrikant et al. (shout out to Elitza Maneva in the et al. for this paper, who was visiting UPC while I was there on sabbatical).  Yes, it's true that most of these game theory results show that network creation games have a small price of anarchy.  However, that does not imply that the network topology you get in a Nash equilibrium will be like the topology you get in the socially optimal solution.  I'd like to see some results on the types of topologies one might expect in a Nash equilibrium for network creation games.

Defeat the Nazis!
September 4, 2009, 11:02 pm
Filed under: Uncategorized

Do your part at Enigma@Home