Machinations


Efficient Secure Multiparty Computation
February 27, 2012, 10:31 pm
Filed under: Uncategorized

Attached is a paper on a problem we’ve been working on for a while: an efficient algorithm for Secure Multiparty Computation (SMPC) against a static adversary.

In the SMPC problem, n players each have a private input, and their goal is to compute the value of an n-ary function, f, over the inputs, without revealing any information about the inputs beyond what the output itself reveals. The problem is complicated by the fact that a hidden subset of the players is controlled by an adversary that actively tries to subvert this goal.
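To make the flavor of the problem concrete, here is a toy Python sketch of computing one particular f – the sum of the inputs – via additive secret sharing. To be clear, this is only a semi-honest illustration for readers new to SMPC: it tolerates no active adversary, and it is not the protocol from the paper; the modulus and the inputs are placeholder choices.

# Toy illustration of the SMPC setup: n players compute the sum of their
# private inputs via additive secret sharing over the integers mod P.
# Semi-honest sketch only -- it does NOT tolerate the active (Byzantine)
# adversary discussed above, and it is not the paper's protocol.
import random

P = 2**61 - 1  # an illustrative prime modulus

def share(secret, n):
    """Split secret into n additive shares that sum to secret mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def secure_sum(inputs):
    """Player i sends one share of its input to each player; each player adds
    the shares it receives, and the subtotals combine to reveal only the sum."""
    n = len(inputs)
    received = [[] for _ in range(n)]           # received[j]: shares held by player j
    for x in inputs:
        for j, s in enumerate(share(x, n)):
            received[j].append(s)
    subtotals = [sum(r) % P for r in received]  # no single subtotal reveals an input
    return sum(subtotals) % P

if __name__ == "__main__":
    inputs = [3, 14, 15, 92]
    print(secure_sum(inputs), sum(inputs) % P)  # both print 124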

SMPC abstracts several important problems in distributed security, and so, not surprisingly, thousands of papers addressing it have been written over the last several decades. However, there is a striking barrier that prevents widespread use: current algorithms for SMPC are not resource efficient. In particular, if there are n players involved in the computation and the function f can be computed by a circuit with m gates, then most algorithms require each player to send a number of messages and perform a number of computations that is \Omega(mn).

We describe scalable algorithms for SMPC against a static adversary. We assume a partially synchronous message passing communication model, but unlike most related work, we do not assume the existence of a broadcast channel. Our main result holds for the case where there are n players, of which a 1/3 - \epsilon fraction are controlled by the adversary, for \epsilon any positive constant. We describe an SMPC algorithm for this model that requires each player to send \tilde{O}(\frac{n+m}{n} + \sqrt{n}) messages and perform \tilde{O}(\frac{n+m}{n} + \sqrt{n}) computations to compute any function f, where m is the size of a circuit that computes f.
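To get a rough sense of the gap, here is a quick back-of-the-envelope comparison, plugging made-up values of n and m into the bounds above and ignoring constants and the polylogarithmic factors hidden in the \tilde{O}:

# Back-of-the-envelope comparison of per-player message costs, using
# illustrative values of n and m. Constants and polylog factors are ignored,
# so treat the output as rough orders of magnitude only.
from math import sqrt

n = 10**4        # players (made up for illustration)
m = 10**8        # gates in a circuit computing f (made up for illustration)

previous = m * n                  # Omega(mn) per player in most prior work
new      = (n + m) / n + sqrt(n)  # tilde-O((n+m)/n + sqrt(n)) per player here

print(f"prior work : ~{previous:.1e} messages per player")
print(f"this paper : ~{new:.1e} messages per player")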



BIRS Workshop on “Probabilistic versus Deterministic Techniques for Shared Memory Computation”
February 14, 2012, 3:41 am
Filed under: Uncategorized

Last week, I attended a great workshop in Banff, Canada on “Probabilistic versus Deterministic Approaches to Shared Memory Computation”.  The following is an extremely biased, incomplete and watered-down summary – also, it only includes morning talks because I was too sleepy in the afternoon to take notes – reader beware.

On Monday morning, Jennifer Welch and I gave back-to-back talks on emulating shared memory objects in “weird” communication models.  Jennifer talked about maintaining shared memory consistency under probabilistic churn in a peer-to-peer network.  In particular, she described recent work that created a shared read-write register in such a model that loses its state only with some small probability.  The technical proofs of correctness relied on careful analysis of continuous-time Markov processes.  I believe that this paper describes some of their results in this model.
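For readers who haven't seen register emulations before: work in this area typically builds on the classic majority-quorum idea of Attiya, Bar-Noy and Dolev (ABD).  The sketch below is just that textbook idea in toy, sequential form – a static set of servers, no churn, no failures actually modeled – and is definitely not the churn-tolerant algorithm from Jennifer's talk.

# A minimal, sequential simulation of the classic majority-quorum ("ABD"-style)
# emulation of a read/write register over message passing. Static servers,
# no churn, no failures modeled -- just the idea that any two majorities
# intersect, so a read always sees the latest completed write.

N_SERVERS = 5
MAJORITY = N_SERVERS // 2 + 1
servers = [{"ts": 0, "val": None} for _ in range(N_SERVERS)]

def write(value):
    # Phase 1: ask a majority for the highest timestamp seen so far.
    ts = max(s["ts"] for s in servers[:MAJORITY]) + 1
    # Phase 2: store (ts, value) at a majority.
    for s in servers[:MAJORITY]:
        if ts > s["ts"]:
            s["ts"], s["val"] = ts, value

def read():
    # Phase 1: collect (ts, value) pairs from a majority and keep the freshest.
    quorum = servers[-MAJORITY:]   # a different majority -- the overlap is what matters
    ts, val = max((s["ts"], s["val"]) for s in quorum)
    # Phase 2 (write-back): propagate the freshest pair to a majority so that
    # later reads cannot observe an older value.
    for s in servers[:MAJORITY]:
        if ts > s["ts"]:
            s["ts"], s["val"] = ts, val
    return val

write("hello")
print(read())  # "hello"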

My own talk was on emulating a single-writer, multi-reader register in a wireless communication model subject to adversarial jamming (link to slides) – this was based on work with Valerie King and Maxwell Young from last PODC.  I’ve blogged about this somewhat before, so I won’t repeat myself.  However, if you’re a fan of the golden ratio (and who isn’t?), then you should read the first half of the slides.  I had a lot of good feedback at the talk on how to extend these results, including: 1) determining what happens when there are multiple communication channels available; 2) determining what happens if one can use signal processing in the event of a jam to recover the underlying message in the case where many processors broadcast the same underlying message; and 3) determining if one can reduce the power consumption necessary to maintain the state of the shared object, perhaps expending more energy only when there is a need to change state.

On Tuesday, Keren Censor-Hillel and Danny Hendler gave excellent back-to-back talks on restricted use shared objects.  Keren started with upper bounds – in particular, how to implement a max count register in the restricted use model – and Danny followed up with lower bounds from a paper joint with Aspnes, Attiya, and Censor-Hillel.  A restricted use object is essentially one where some restriction exists on the number of operations that may be applied to the object.  There are two common types of restrictions: 1) m-limited use: at most m operations may be applied; and 2) b-bounded: the object supports at most b values.  An old result shows that Omega(n) work and space is needed for historyless primitives [Jayanti, Tan, Toueg ’00], but these lower bounds don’t apply to restricted use objects.  There have been several exciting results showing the possibility of implementing such objects with polylog step complexity (see the talk by Gilbert and Bender below for an example of cool applications of these new results).  How can we devise lower bounds for this new type of restricted use object?  Danny discussed the notion of L-perturbable objects, which intuitively can be perturbed at most L times.  He gave details of a result (joint with Aspnes, Attiya, and Keren) showing that an L-perturbable object must have step complexity Omega(min(log L, n)) and space complexity Omega(min(L, n)).  This lower bound is for deterministic objects only; randomized lower bounds are still relatively open (the best known is Omega(log log m / log log log m)).
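As a concrete example of one of these objects, here is a toy rendition (from memory, so caveat lector) of the recursive b-bounded max register construction of Aspnes, Attiya and Censor-Hillel: each operation touches only O(log b) components, which is where the polylogarithmic step complexity comes from.  The sketch is purely sequential – in the real construction the switch and the base-case objects are atomic read/write registers, and the careful ordering of steps is what makes it linearizable.

# Toy, sequential rendition of a b-bounded max register (values in
# {0, ..., b-1}), built recursively from two half-size max registers and a
# one-bit "switch".  Each operation descends one root-to-leaf path, so it
# takes O(log b) steps.  In the real construction the components are atomic
# registers; plain Python fields are used here only to show the recursion.

class MaxRegister:
    def __init__(self, b):
        self.b = b
        if b > 1:
            self.half = b // 2
            self.switch = 0                          # 1 once some value >= half is written
            self.left = MaxRegister(self.half)       # holds values in [0, half)
            self.right = MaxRegister(b - self.half)  # holds values v - half

    def write_max(self, v):
        assert 0 <= v < self.b
        if self.b == 1:
            return                      # the only possible value is 0
        if v < self.half:
            if self.switch == 0:        # otherwise a larger value is already recorded
                self.left.write_max(v)
        else:
            self.right.write_max(v - self.half)
            self.switch = 1             # set the switch only after the value is stored

    def read_max(self):
        if self.b == 1:
            return 0
        if self.switch == 1:
            return self.half + self.right.read_max()
        return self.left.read_max()

r = MaxRegister(16)
for v in [3, 9, 7]:
    r.write_max(v)
print(r.read_max())  # 9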

On Wednesday, Lisa Higham and Wojciech Golab gave talks on the notion of strong linearizability.   I will just touch on the problem they address.  When a shared object is linearizable, it intuitively means that the history of invocations and responses to that object can be ordered in time in a way that 1) the total order extends the “happens before” partial order over all the operations; and 2) the ordering obeys correctness properties for the shared object.  (I’m probably missing something important here – maybe someone will correct me in the comments).  Many (most?) people think that if operations on an object are linearizable, then everything is great: the object acts like it is atomic in the sense that it appears to the rest of the system as if operations on it occur instantaneously.

Lisa and Wojciech showed that these people are sadly mistaken.  In a recent STOC result, joint with Wojciech Golab and Philipp Woelfel, it’s shown that linearizability does not suffice in the case where processes can use randomization.  They introduce a new concept, strong linearizability, that is sufficient (and more or less necessary) to ensure correctness of randomized algorithms.  Unfortunately, as discussed by Wojciech, it seems that to ensure strong linearizability for most shared objects requires significantly higher resource costs than to ensure linearizability.  Valerie King brought up an interesting question about whether, for certain types of randomized algorithms, a somewhat weaker notion may work (for example, if the algorithm is Monte Carlo and it’s ok if the scheduler can slightly tweak the probability of “bad” events, so long as this probability never gets too large).

[Note from Wojciech: you said “it seems that to ensure strong linearizability for most shared objects requires significantly higher resource costs than to ensure linearizability.”  Is your impression based by any chance on the impossibility results I gave during my presentation?  Those actually pertain to first-step and first-update linearizability, one of which is strictly stronger than strong linearizability, and the other is incomparable.  For strong linearizability itself, there are several upper bounds that Lisa and I didn’t mention as prominently as we should have.  In particular, known universal constructions for lock-based objects and wait-free objects tend to be strongly linearizable.  Thus, the message we wanted to get across is that strong linearizability is a practical property because it’s readily attainable, and in several important cases it comes at no additional cost beyond the cost of ordinary linearizability.  (That’s in contrast to first-step and first-update linearizability.)]

Next, Hagit Attiya led an interesting discussion on how to motivate our work in distributed computing, pointing out that other CS areas, like Machine Learning, are frequently better at “selling” their results.  Maurice pointed out that many results that originated in the PODC community (like Byzantine agreement) have been “rediscovered” by the systems community, frequently without significant attribution/homage to the distributed algorithms community.  How can we more effectively advertise these results outside our own community?  Hint: it may help to have fewer models.

On Wednesday afternoon, we took a road trip to Lake Louise where we saw a great collection of ice sculptures, walked around a beautiful snow-covered lake, and learned that penny loafers aren’t the best footwear for a glacial approach.

On Thursday, Seth Gilbert and Michael Bender gave a great joint talk on their recent FOCS paper on “Mutual Exclusion with O(log^2 log n) Amortized Work”.  A recent algorithm for this problem, by Danny Hendler and Philipp Woelfel (one of the workshop organizers), showed that it was possible to break through a Theta(log n) barrier (shown by Attiya, Hendler and Woelfel); this previous algorithm achieved Theta(log n/(log log n)) work against an adaptive adversary (in the shared memory world, this essentially means an adversary in the full information communication model).  Seth and Michael’s work assumes a weaker adversary that is oblivious in that it can’t see past coin flips by the processors.  Their new algorithm is also Monte Carlo in that it ensures each processor gets into the critical section only with high probability.

Some key technical ideas behind their new result are: 1) using a dense array to store processors that are waiting to enter the critical section; and 2) creating and using clever approximate and work-efficient counters (remember, you can only afford O(log^2 log n) work per counter) in order to dynamically manage the array of waiting processors.  An interesting open problem: can we prove that an adaptive adversary (i.e. one that has full information) can force at least Omega(log n/log log n) work even if the algorithm ensures access to the critical section only with high probability; or, alternatively, can we design an algorithm in this model that does better?
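Their counters are shared-memory objects and well beyond a blog post, but to give a flavor of why approximate counting can be so much cheaper than exact counting, here is the classic (single-process) Morris counter – emphatically not the counter from their paper, just the simplest example I know of trading exactness for resources.

# Classic Morris approximate counter: store only an exponent x and increment
# it with probability 2^-x, so 2^x - 1 is an unbiased estimate of the true
# count while the state (and most of the work) stays tiny.  This is a
# single-process toy, not the shared-memory counter from the Bender-Gilbert
# construction.
import random

class MorrisCounter:
    def __init__(self):
        self.x = 0                      # only the exponent is stored

    def increment(self):
        if random.random() < 2.0 ** (-self.x):
            self.x += 1                 # most increments change nothing at all

    def estimate(self):
        return 2 ** self.x - 1          # correct in expectation

c = MorrisCounter()
for _ in range(100_000):
    c.increment()
print(c.estimate())  # in the ballpark of 100,000; a single counter has high variance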

Valerie King gave a nice talk in the afternoon, half of which was on connections between the shared memory model and the message passing model for a consensus problem.  We’re likely writing a paper on this problem so I’ll probably blog about it later here.

Friday was overcast and cold, which made it a little bit easier to say goodbye to beautiful Banff.

[Thanks to Wojciech Golab for helpful corrections]



A Swarm of Quadrotors
February 3, 2012, 3:03 am
Filed under: Uncategorized

I’m kind of geeking out on this right now.  This is really a great-looking demonstration – interesting to think about the research problems they will face if the number of quadrotors increases (and the size of each decreases).