Filed under: Uncategorized | Tags: algorithms, distributed computing, theory
It’s always nice when a former student does good. Amitabh Trehan, who got his PhD with me at UNM, recently received the best paper award at ICDCN for his paper “Sublinear Bounds for Randomized Leader Election” (joint with Shay Kutten, Gopal Pandurangan, David Peleg and Peter Robinson).
Amitabh is currently a postdoc at the Technion with Profs. Shay Kutten and Ron Lavi. Rumor has it that he is looking for an academic position…
Filed under: Uncategorized | Tags: 3D printing, algorithms, distributed computing, theory
So this is where the distributed computing community can interact with the interesting new world of “3D printing”…
“Imagine that you have a big box of sand in which you bury a tiny model of a footstool. A few seconds later, you reach into the box and pull out a full-size footstool: The sand has assembled itself into a large-scale replica of the model.
That may sound like a scene from a Harry Potter novel, but it’s the vision animating a research project at the Distributed Robotics Laboratory (DRL) at MIT’s Computer Science and Artificial Intelligence Laboratory. At the IEEE International Conference on Robotics and Automation in May — the world’s premier robotics conference — DRL researchers will present a paper describing algorithms that could enable such “smart sand.” They also describe experiments in which they tested the algorithms on somewhat larger particles — cubes about 10 millimeters to an edge, with rudimentary microprocessors inside and very unusual magnets on four of their sides….
Algorithmically, the main challenge in developing smart sand is that the individual grains would have very few computational resources. “How do you develop efficient algorithms that do not waste any information at the level of communication and at the level of storage?” asks Daniela Rus, a professor of computer science and engineering at MIT and a co-author on the new paper, together with her student Kyle Gilpin. If every grain could simply store a digital map of the object to be assembled, “then I can come up with an algorithm in a very easy way,” Rus says. “But we would like to solve the problem without that requirement, because that requirement is simply unrealistic when you’re talking about modules at this scale.” Furthermore, Rus says, from one run to the next, the grains in the heap will be jumbled together in a completely different way. “We’d like to not have to know ahead of time what our block looks like,” Rus says.”
There’s a nice article on Eric Demaine’s work on origami in Nature recently, along with some beautiful pictures. I like the fact that it’s probably the only article in Nature that has ever contained the phrase “balloon animals”.
Filed under: Uncategorized | Tags: algorithms, secure multiparty computation, security, theory
Attached is a paper on a problem we’ve been working on for a while: an efficient algorithm for Secure Multiparty Computation (SMPC) against a static adversary.
In the SMPC problem, n players each have a private input, and their goal is to compute the value of an n-ary function, f, over the inputs, without revealing any information about the inputs. The problem is complicated by the fact that a hidden subset of the players is controlled by an adversary that actively tries to subvert this goal.
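To make the setup concrete, here is a minimal sketch (my illustration, not the algorithm from the paper) of additive secret sharing, the textbook building block for SMPC against a passive adversary, used here to compute a sum without any single player revealing its input:

```python
import random

PRIME = 2**61 - 1  # a large prime; all arithmetic is done mod PRIME

def share(secret, n):
    """Split `secret` into n additive shares that sum to it mod PRIME.
    Any n-1 shares together look uniformly random."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def secure_sum(inputs):
    """Each player shares its input among all players; each player sums
    the shares it holds, and the partial sums reveal only the total."""
    n = len(inputs)
    held = [[] for _ in range(n)]  # held[j]: shares sent to player j
    for x in inputs:
        for j, s in enumerate(share(x, n)):
            held[j].append(s)
    partial = [sum(h) % PRIME for h in held]  # each player's local sum
    return sum(partial) % PRIME               # revealed total

print(secure_sum([3, 1, 4, 1, 5]))  # 14
```

Of course, tolerating an adversary that actively deviates from the protocol, and doing so scalably, is exactly where the hard work in the paper lies.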
SMPC abstracts several important problems in distributed security, and so, not surprisingly, thousands of papers addressing it have been written over the last several decades. However, there is a striking barrier that prevents widespread use: current algorithms to solve SMPC are not resource efficient. In particular, if there are n players involved in the computation and the function f can be computed by a circuit with m gates, then most algorithms require each player to send a number of messages and perform a number of computations that grows at least linearly in both n and m.
We describe scalable algorithms for SMPC against a static adversary. We assume a partially synchronous message-passing communication model but, unlike most related work, we do not assume the existence of a broadcast channel. Our main result holds for the case where there are n players, of which a constant fraction are controlled by the adversary. We describe an SMPC algorithm for this model that bounds the number of messages sent and computations performed by each player in terms of n and m, where m is the size of a circuit to compute f.
Below is the last problem on a midterm for my graduate algorithms class. A few years ago, I started creating problems for this class that were based on (simplifications of) research problems. There’s a subset of the students who really like these problems and do well on them, but I worry sometimes that they hurt the students who are struggling in the class. I’m curious whether others assign these types of problems in general graduate classes.
Today I want to talk about an interesting result in Pippenger’s paper “Networks of Noisy Gates” for the von Neumann noisy gates model. Recall that in this model, each noisy gate fails independently with some fixed probability \epsilon. We are given a circuit that computes a function with m regular gates, and our goal is to compute the same function with probability greater than 1/2 using as few noisy gates as possible.
In previous posts, we showed that O(m \log m) noisy gates always suffice to simulate a circuit with m noiseless gates, and that for the exclusive-or function over n inputs, \Omega(n \log n) noisy gates are necessary, even though n-1 noiseless gates suffice. This implies that in the worst case, there is a \log n blowup in the number of gates required.
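To see where the logarithmic factor comes from, here is a small simulation (my sketch, with an assumed failure probability \epsilon = 0.1) of the basic trick behind von Neumann’s construction: repeat a noisy computation k times and take the majority, so that the error probability drops exponentially in k and k = O(log(1/\delta)) repetitions suffice for any target error \delta:

```python
import random

EPS = 0.1  # assumed: each noisy evaluation fails independently w.p. EPS

def noisy_eval(correct_bit):
    """One noisy evaluation: returns the wrong bit with probability EPS."""
    return correct_bit ^ (random.random() < EPS)

def majority_of_k(correct_bit, k):
    """Evaluate k times independently and take the majority vote (k odd)."""
    ones = sum(noisy_eval(correct_bit) for _ in range(k))
    return 1 if ones > k // 2 else 0

# The majority errs only if more than half of the k copies fail, an event
# whose probability drops exponentially in k (by a Chernoff bound).
random.seed(0)
trials = 10_000
err = sum(majority_of_k(1, 15) != 1 for _ in range(trials)) / trials
print(err)  # far below EPS
```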
There is another result by Pippenger showing that for *most* boolean functions, only a *constant* blowup is required. Today I want to focus on the first part of this result, which is to show that a multiplexer (MUX) over 2^r signals can be computed with only O(2^r) noisy gates. Recall that a MUX over 2^r signals is a circuit that takes as input both 1) 2^r possible signals; and 2) an r-bit selector. The MUX outputs exactly one of the 2^r possible signals, specifically the one specified by the r-bit selector.
For example, a MUX for r=1 has as input 1) 2 signal bits; and 2) a single selector bit. If the selector bit is 0, the MUX outputs the first signal bit, and if the selector bit is 1, the MUX outputs the second signal bit. Let G be a gate that computes the MUX for r=1. Then it’s easy to create a MUX for any r by wiring up O(2^r) copies of G (try it). In particular, a MUX for r requires at least 2^r noise-free gates.
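Here is a small sketch (my code, not Pippenger’s construction) of the recursive wiring just described, building a MUX for any r out of copies of the r=1 gate G:

```python
def mux1(sel, a, b):
    """The base gate G: output a if sel == 0, else b."""
    return b if sel else a

def mux(selector, signals):
    """MUX over 2^r signals, built recursively from copies of mux1.
    `selector` is a list of r bits, most significant bit first."""
    if not selector:
        return signals[0]
    half = len(signals) // 2
    top_bit, rest = selector[0], selector[1:]
    # Two half-size MUXes select within each half; one copy of mux1
    # picks between their outputs using the top selector bit.
    return mux1(top_bit, mux(rest, signals[:half]), mux(rest, signals[half:]))

# Gate count: T(r) = 2*T(r-1) + 1, so T(r) = 2^r - 1 copies of mux1.
print(mux([1, 0], [10, 20, 30, 40]))  # selects signals[0b10] = 30
```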
Pippenger shows that you can create a MUX for any r using only O(2^r) noisy gates. Clearly this is just a constant blowup over the number of noiseless gates needed to create such a MUX. The construction used to prove this result is sketched in the figure below. See also this pdf: pipp-mux
Note that any boolean function over x inputs can be computed using a MUX, where r=x (the computation is done essentially by table lookup). This means that we can now compute any boolean function over x inputs with O(2^x) noisy gates. It turns out that most boolean functions over x variables require 2^x/x noise-free gates. In my next post, I’ll show how we can shave off this factor of x in the noisy gate world.
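To illustrate the table-lookup idea, here is a sketch (my illustration) of computing an arbitrary boolean function by feeding its entire truth table into a MUX, with the function’s x inputs acting as the selector:

```python
def table_lookup(f, x_bits):
    """Compute an arbitrary boolean function of len(x_bits) inputs by
    building its full truth table (the 2^r MUX signals) and selecting
    one entry with the inputs (the r-bit selector)."""
    r = len(x_bits)
    truth_table = [f(*((i >> (r - 1 - j)) & 1 for j in range(r)))
                   for i in range(2 ** r)]
    index = 0
    for b in x_bits:            # interpret x_bits (MSB first) as an index
        index = (index << 1) | b
    return truth_table[index]   # the MUX selects exactly this entry

xor3 = lambda a, b, c: a ^ b ^ c
print(table_lookup(xor3, [1, 0, 1]))  # 0, since 1 ^ 0 ^ 1 = 0
```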
Roger Wattenhofer shares with me slides from his talk on “local looking” distributed algorithms that I think will be interesting to the distributed computing community. This talk was given at another workshop at Bertinoro this summer – a workshop on sublinear algorithms (blogged by Piotr Indyk here). Roger’s talk focuses in particular on lower bounds and upper bounds for many “local looking” distributed problems like Maximal Independent Set, Minimal Vertex Cover, Minimum Dominating Set and Maximal Matching. The talk discusses the known lower bounds for these problems: they are all known to require a number of rounds that is Omega(sqrt(log n)) and Omega(log Delta).
The talk also discusses upper bounds – in particular, it discusses a beautiful randomized algorithm for Maximal Independent Set that takes O(log n) rounds in expectation. (I’ve recently posted slides from one of my own recent talks on this nice randomized MIS algorithm here.)
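For readers who haven’t seen it, here is a sketch (my code, simulating the rounds sequentially) of this style of randomized MIS algorithm, in the spirit of Luby’s: in each round every active node draws a random value, joins the MIS if its value beats those of all active neighbors, and then it and its neighbors drop out:

```python
import random

def randomized_mis(adj):
    """Randomized MIS on an undirected graph given as an adjacency list.
    Each round, active nodes draw random values; local minima join the
    MIS and deactivate themselves and their neighbors. The number of
    rounds is logarithmic in expectation."""
    active = set(adj)
    mis = set()
    while active:
        r = {v: random.random() for v in active}
        winners = {v for v in active
                   if all(r[v] < r[u] for u in adj[v] if u in active)}
        mis |= winners                 # winners are pairwise non-adjacent
        active -= winners
        for v in winners:
            active -= set(adj[v])      # neighbors of MIS nodes drop out
    return mis

random.seed(1)
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}  # a 5-cycle
print(randomized_mis(cycle))
```

The set returned is always independent (two adjacent winners are impossible, since one must beat the other) and maximal (every node removed without joining had a neighbor in the MIS).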
Finally, the talk illuminates many interesting open problems in distributed algorithms for these types of problems:
- Is there a deterministic algorithm for Maximal Independent Set (MIS) that takes polylog n rounds?
- Can we close the gap for randomized algorithms for MIS between the best lower bound, Omega(sqrt(log n)), and the best upper bound, O(log n)?
- What are the problem boundaries between O(1), O(log* n), O(log n) and O(diameter)? A nice picture illustrating what is open on these boundaries is given in the last couple of slides of the talk.