Filed under: Uncategorized | Tags: expanders, networks, robustness, theory

In 1952, Von Neumann proposed the following problem. You are given a function f that can be computed with a circuit of n gates. You have to build a new circuit to compute f, but all the gates in your new circuit are unreliable. In particular, each new gate fails independently with some small probability epsilon, and when a gate fails, it computes the NOT of what it should compute. What is the smallest number of gates you need in order to compute f with a probability of error strictly less than 1/2?

Turns out that in general you need Theta(n log n) gates. The upper bound was shown by Von Neumann, and the lower bound for the exclusive-or function was first proved by Pippenger, Stamoulis and Tsitsiklis (after a previous incorrect attempt by others).

Today I want to talk about the upper bound result. You might think this is pretty easy – just duplicate each gate log n times, do majority filtering, and use Chernoff bounds to show that the majority output will be correct. Unfortunately, duplication and majority filtering alone fail, since the fraction of faulty wires grows by a constant factor with each level of the circuit.
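To see why, it helps to trace the worst-case bound on the faulty-wire fraction through the levels. A replica of a two-input gate can give a wrong answer if either of its input wires is faulty or if the gate itself fails, so (in the worst case) the faulty fraction roughly doubles and picks up an additive term at every level. Here is a tiny sketch of that recurrence, with a hypothetical value of epsilon:

```python
# Worst-case recurrence for the faulty-wire fraction with naive
# duplication and no error correction between levels: a replica's
# output can be wrong if either of its two input wires is faulty
# (up to 2x) or the gate itself fails (about 2*eps of the replicas,
# whp).  Illustrative sketch only; eps is a hypothetical value.
eps = 0.01   # per-gate failure probability
x = 0.0      # faulty-wire fraction at the inputs
for level in range(1, 8):
    x = min(1.0, 2 * x + 2 * eps)
    print(f"level {level}: faulty fraction <= {x:.2f}")
```

The bound doubles each level, so after only O(log(1/epsilon)) levels a majority of the wires can be faulty.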

To fix this, you need a **sampler** (aka a concentrator). A sampler is just a special type of bipartite graph. In the simplest case, there are the same number of nodes on the top as on the bottom and the graph is d-regular for some constant d. The very useful property of a sampler is that it *reduces error* in the following sense. If some fraction x of the nodes on the top are “bad”, then the fraction of nodes on the bottom that have a *majority* of bad neighbors is less than x/4. We can make the denominator an arbitrarily large constant by increasing d, but 4 will suffice for our purposes. For more about samplers, including how to construct them and other applications, see Section 3.2.1 of our Breaking the O(n^2) Bit Barrier paper.
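You can check the error-reduction property empirically. The sketch below uses a random bipartite graph (each bottom node picks d random top neighbors) as a stand-in for a sampler – random graphs of suitable constant degree have the property with high probability, though this is not an explicit construction. All the parameters here are hypothetical:

```python
import random

# Empirical check of the sampler property on a random bipartite
# graph: mark an x fraction of the top nodes "bad", then count how
# many bottom nodes see a majority of bad neighbors.  A sanity-check
# sketch, not an explicit sampler construction.
random.seed(0)
n, d, x = 10_000, 30, 0.10          # nodes per side, degree, bad fraction

bad_top = set(random.sample(range(n), int(x * n)))
maj_bad = 0
for _ in range(n):                   # each bottom node
    nbrs = [random.randrange(n) for _ in range(d)]
    if sum(v in bad_top for v in nbrs) > d // 2:
        maj_bad += 1

frac = maj_bad / n
print(f"bad fraction on top: {x:.2f}, majority-bad on bottom: {frac:.4f}")
```

With d = 30 and x = 0.1, the majority-bad fraction on the bottom comes out far below x/4.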

Once you’ve got a sampler, Von Neumann’s construction is pretty easy and is illustrated in the figure below. In the upper right corner of the figure is a noiseless gate G with two input wires and a single output wire. The large circuit to the left replicates this functionality with noisy gates. In the large circuit, in place of the two input wires, we have two “cables”, each consisting of Theta(log n) wires. In place of G, we have Theta(log n) replicas of G. One wire from each cable goes into each of these replicas. Below this, we have wires connected via a sampler to a bunch of simple circuits (“Maj”) that simply compute the majority of their inputs.

These majority circuits (Maj in the figure) are themselves also noisy. However, since each has only a constant number of inputs, it’s not hard to construct them so that they fail only with constant probability.
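Here is a toy simulation of one level of this construction, using an XOR gate as the example for G, a random bipartite graph standing in for the sampler, and noisy components that flip their output with probability epsilon. The gate choice and all parameters are hypothetical; this is a sketch of the idea, not Von Neumann’s exact construction:

```python
import random

# One level: an "executive organ" of noisy gate replicas followed by a
# "restoring organ" of noisy Maj circuits fed through a (random-graph
# stand-in for a) sampler.  Noisy components flip their output with
# probability eps.  Toy sketch with hypothetical parameters.
random.seed(1)
eps, k, d = 0.01, 2_000, 30          # gate error, cable width, sampler degree

def noisy(bit):
    """A noisy component: outputs the wrong bit with probability eps."""
    return bit ^ (random.random() < eps)

a, b = 1, 0                          # true values carried by the two cables
cable_a = [a] * k                    # assume the input cables are clean
cable_b = [b] * k

# Executive organ: k noisy replicas of the XOR gate, one wire per cable.
exec_out = [noisy(wa ^ wb) for wa, wb in zip(cable_a, cable_b)]

# Restoring organ: each output wire is a noisy majority of d sampled
# executive-organ wires.
out = []
for _ in range(k):
    sample = [exec_out[random.randrange(k)] for _ in range(d)]
    out.append(noisy(int(sum(sample) > d // 2)))

faulty = sum(w != (a ^ b) for w in out) / k
print(f"faulty fraction after one level: {faulty:.4f}")
```

The faulty fraction on the output cable stays near epsilon instead of doubling, which is exactly what the restoring organ is for.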

The names “Executive Organ” and “Restoring Organ” used in the figure are Von Neumann’s original terminology. A bit colorful perhaps, but who are we to change it?

The simple analysis of this circuit is illustrated below. Adjacent to each cable is now an upper bound on the fraction of wires in the cable whose output is faulty. Each of these upper bounds holds with high probability (whp), i.e. with probability of error polynomially small in n. Adjacent to each gate is the probability that the gate is faulty.

We assume inductively that the fraction of bad wires in each of the input cables is no more than \theta for some fixed constant \theta. The first observation is that the fraction of faulty wires coming out of the gates G is then no more than 2\theta + 2\epsilon. To see this, note that, whp, the fraction of gates that fail is no more than 2\epsilon by Chernoff bounds. In the worst case, each faulty gate and each faulty input wire causes its own separate faulty output, which gives us a 2\theta + 2\epsilon fraction of faulty wires.

Next notice that by the properties of the sampler, if all the Maj circuits were correct, we would have a factor of 4 **decrease** in the fraction of faulty wires leaving them, so that the fraction of faulty output wires would be only 1/2(\theta + \epsilon). But again by Chernoff bounds, whp, at most 2\epsilon of the Maj circuits will be faulty. Thus we have at most a 1/2(\theta + \epsilon) + 2\epsilon fraction of faulty wires coming out, and this is no more than \theta for \theta large enough but depending only on \epsilon (\theta = 5\epsilon suffices).
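You can verify the induction numerically. With the restoring organ in place, the per-level map on the faulty-wire bound is x -> (2x + 2\epsilon)/4 + 2\epsilon = x/2 + 5\epsilon/2, whose fixed point is \theta = 5\epsilon. A quick sketch (epsilon is again a hypothetical value):

```python
# Per-level map on the faulty-wire bound with the restoring organ:
#   x  ->  (2x + 2*eps)/4 + 2*eps  =  x/2 + 5*eps/2,
# which has fixed point theta = 5*eps.  Starting from a clean cable,
# the bound converges to theta instead of growing with depth.
eps = 0.01
x = 0.0
for level in range(1, 11):
    x = (2 * x + 2 * eps) / 4 + 2 * eps
    print(f"level {level}: faulty fraction <= {x:.5f}")
print(f"fixed point theta = 5*eps = {5 * eps}")
```

Contrast this with the uncorrected recurrence, where the bound doubles every level.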

The final thing to note is that the fraction of faulty wires on the final output cable is the same as the fraction of faulty wires on the two input cables. Thus, the fraction of faulty wires does not increase with the depth of the circuit!
