Filed under: Uncategorized | Tags: distributed computing, game theory, proof-of-work, security, Sybil attack, theory

Above is a link to the keynote talk I gave at SIROCCO 2020. Short abstract is below:

How can we defend blockchains and peer-to-peer systems when no central authority provides admission control? Resource burning (proof-of-work, proof-of-stake, CAPTCHAs) is one of the most widely used tools for defending such systems, but it is currently poorly understood mathematically. In this talk, I survey recent research toward better understanding resource burning, in order to reduce its cost and thereby improve system security.

Keywords: Distributed algorithms, game theory, costly signaling, money burning, Sybil attack, blockchains, cryptocurrencies, peer-to-peer.

Filed under: Uncategorized | Tags: distributed algorithms, golden ratio, security, theory

I’ll be giving two talks at the upcoming SIROCCO. The first is a keynote on how to ensure security in systems where nodes can come and go at will, with no admission control. The second presents a paper that won the best paper award; it is on robust, distributed search, and makes use of the golden ratio in an interesting way.

Anyone who registers can participate in the livestream of the talks, and the organizers of SIROCCO have made registration **free** this year. Below are links to talk times and registration information.

Talk abstracts are below:

**Resource Burning for Permissionless Systems** (Monday, June 29th, 6-7pm CEST)

Proof-of-work puzzles and CAPTCHAs consume enormous amounts of energy and time. These techniques are examples of resource burning: verifiable consumption of resources solely to convey information.

Can these costs be eliminated? It seems unlikely, since resource burning shares similarities with “money burning” and “costly signaling”, which are concepts foundational to game theory, biology, and economics. Can these costs be reduced? Yes, research shows we can significantly reduce the asymptotic costs of resource burning in disparate domains.

In this talk, we survey the literature on resource burning; take positions based on predictions of how the technique is likely to evolve; and propose several open problems targeted at the theoretical distributed-computing research community.

**ANTS on a Plane** (Wednesday, July 1st, 7:00-7:30 pm CEST)

In the ANTS (Ants Nearby Treasure Search) problem, multiple searchers, starting from a central location, search for a target. The searchers cannot communicate, and they begin the search with only a few bits of initial knowledge, called advice. In this paper, we initiate the study of ANTS in the geometric plane.

Our main result is an algorithm, GoldenFA, that tolerates arbitrarily many crash failures caused by an adaptive adversary and requires no bits of advice. For a target shape of diameter D, at distance L from the central location, and N searchers, t<N of which suffer adversarial crash failures, GoldenFA finds the shape in expected time within logarithmic factors of optimal.

We complement our algorithm with a lower bound, showing that it is within logarithmic factors of optimal. Additionally, we empirically test GoldenFA and a related heuristic, and find that the heuristic is consistently faster than the state-of-the-art. Our algorithms and analysis make critical use of the golden ratio.

Filed under: Uncategorized | Tags: distributed computing, security, theory

*[The following blog report on SSS’13 was written by my student George Saad – Ed]*

The *15th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2013)* was held in the middle of this month in Osaka, Japan. The conference covers the design and development of distributed systems with fault-tolerance and self-* properties. SSS 2013 had five sessions: self-stabilization; fault-tolerance and dependability; formal methods and distributed systems; ad-hoc, sensors, mobile agents and robot networks; and P2P, social, self-organizing, autonomic and opportunistic networks.

We presented our paper, *Self-Healing of Byzantine Faults*, in the self-stabilization session. Our main result is a self-healing algorithm for communication networks that detects message corruptions made by Byzantine nodes and segregates those nodes so that eventually they can no longer corrupt messages. Our self-healing algorithm significantly reduces communication cost compared to non-self-healing algorithms.

In the same self-stabilization session, the safe-convergence property was discussed. Due to transient faults, a system may be placed in an arbitrary configuration. A distributed algorithm is *self-stabilizing* if the system recovers to a legitimate configuration, without external intervention, in finite time. *Safely converging self-stabilization* is an extension of self-stabilization in which, after the system recovers to a legitimate configuration, it further converges to an optimal legitimate configuration. Sayaka Kamei, from Hiroshima University in Japan, gave a talk on *An Asynchronous Self-Stabilizing 6-Approximation for the Minimum Connected Dominating Set with Safe Convergence*, in which she provided the first asynchronous self-stabilizing (6+ε)-approximation algorithm with safe convergence for the minimum connected dominating set (MCDS). Her algorithm has a self-stabilization phase of one round and a safe-convergence phase of O(n) rounds.
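To make the notion of self-stabilization concrete, below is a small sketch of Dijkstra's classic K-state token ring, the textbook example of a self-stabilizing algorithm (this is an illustration of the general concept, not Kamei's safe-convergence algorithm; the function names and the central-daemon scheduler that fires the lowest-index privileged node are my own choices):

```python
# Dijkstra's K-state token ring: n nodes in a ring, node 0 distinguished.
# From ANY initial configuration, the system converges to configurations
# with exactly one "privileged" node (the token), provided K >= n.

def step(states, K):
    """One central-daemon step: fire the lowest-index privileged node."""
    n = len(states)
    for i in range(n):
        if i == 0:
            if states[0] == states[-1]:          # node 0 is privileged
                states[0] = (states[0] + 1) % K
                return
        elif states[i] != states[i - 1]:          # node i>0 is privileged
            states[i] = states[i - 1]
            return

def stabilize(states, K, steps=1000):
    """Run enough steps to reach (and then stay in) a legitimate configuration."""
    for _ in range(steps):
        step(states, K)
    return states

def count_privileges(states):
    """A legitimate configuration has exactly one privileged node."""
    n = len(states)
    return (1 if states[0] == states[-1] else 0) + \
        sum(1 for i in range(1, n) if states[i] != states[i - 1])
```

Starting from an arbitrary (corrupted) configuration such as `[3, 1, 4, 1, 5]` with `K = 7`, repeated steps drive the ring to a single circulating privilege, with no external intervention.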

In the session on ad-hoc, sensor, and mobile networks, there was an interesting talk about *counting and naming nodes in anonymous unknown dynamic networks*. A network is *anonymous* when, initially, no node has an id. The network is *unknown* if the nodes have no a priori knowledge of the network, in terms of either topology or size. In the counting problem, the objective is to determine the network size. In naming, a unique identifier must be assigned to each node. Othon Michail, from the Computer Technology Institute in Greece, considered dynamic networks with broadcast. He showed interesting impossibility results for counting and naming. In particular, he showed that counting is impossible without a leader, and that naming is impossible even with a leader, and even if the network size is known to all nodes.

Two other interesting problems were discussed in the workshop: the exploration problem and the estimation of average network degree. In the exploration problem, a set of robots is required to explore a finite space. The space to explore is partitioned into a finite number of locations represented by a graph. The nodes represent the locations, and the edges represent the possible movements of robots between the nodes. In the estimation of average network degree, the objective is to estimate the average degree of nodes in a given network.

Anissa Lamani, from Universite de Picardie Jules Verne in Amiens, France, discussed the *terminating exploration problem* in her paper, *Ring Exploration by Oblivious Robots having Vision Limited to 2 or 3*. The terminating exploration problem requires the following: 1) each node is occupied by at most one robot in the initial configuration; and 2) every node must eventually be visited by at least one robot, after which all robots must stop moving. She assumed that the robots are oblivious (memoryless) and that all robots follow the same algorithm. Further, there is no communication among the robots; however, each robot has an electronic eye-sensor to see the other robots located on the nodes. She assumed that every robot is myopic, in the sense that it has limited visibility in terms of the path length from its location to any other location. In other words, if the visibility ϕ = 1, then each robot can see the robots on its own node and on its neighbors; and if ϕ = 2, then each robot's visibility extends to the neighbors of its neighbors. She studied this problem on ring networks, presenting two algorithms to solve it: one for ϕ = 2 and one for ϕ = 3. Moreover, she argued that no deterministic algorithm solves this problem if the robots are not initially close enough.

For the estimation of average network degree, Hironobu Kanzaki, from Nagoya Institute of Technology in Japan, discussed estimating the average degree of a network, particularly for dynamic networks. Naïvely, each node sends a message containing its degree to a centralized party, which calculates the average network degree; this has a message cost of O(n). Kanzaki discussed developing an approximation algorithm based on the concept of *property testing*, which reduces communication cost to o(n). In particular, his algorithm makes use of the Goldreich and Ron algorithm in the setting of distributed computing.
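The contrast between the two approaches can be sketched in a few lines. Below, the exact method corresponds to the naïve O(n)-message scheme, while the estimator queries only k sampled nodes; note this plain uniform sampling is only an illustration of the o(n) idea, not the Goldreich–Ron estimator itself, which handles skewed degree distributions much more carefully (graph representation and names are my own):

```python
import random

def avg_degree_exact(adj):
    """Naive scheme: every node reports its degree -- O(n) messages."""
    return sum(len(nbrs) for nbrs in adj.values()) / len(adj)

def avg_degree_sampled(adj, k):
    """Query only k sampled nodes -- o(n) messages when k << n.
    Uniform sampling has high variance on skewed degree distributions,
    which is what the more careful property-testing estimators address."""
    nodes = list(adj)
    return sum(len(adj[random.choice(nodes)]) for _ in range(k)) / k

# Toy graph: a 4-cycle, where every node has degree 2.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```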

Besides the scientific content, I have a few comments about the city of Osaka. Osaka is a beautiful city with many skyscrapers of around 40 floors. It is fascinating to see the view of the city from the Osaka-Sky building. A few pictures are attached here to show the beauty of Osaka and a nearby city, Kyoto.

Osaka – Japan, in the evening, from Osaka-Sky building:

Kyoto – Japan, Golden Pavilion, afternoon:

Below is an article on the Sandia Cybersecurity Research Institute. It describes (briefly) some work my research group is doing in collaboration with folks at this center.

*“You have to have good researchers. Good researchers, all of them.”* – paraphrase of a quote from Das Boot.

Sometimes writing a paper can be a challenging slog: goals move as conjectures prove wrong; facts that seem simple turn out to be slippery to prove; experiments must be rerun as the paper evolves; and notation needs to be redone for the sake of readability.

In such a situation, it’s a blessing to have coauthors who will go the extra mile to keep pushing and pushing the paper to completion. So I would now really like to thank my students Jeffrey Knockel and George Saad for their hard work on our latest paper: “Self-Healing of Byzantine Faults” (abstract below, full paper here).

Note: I wouldn’t exactly compare the research process with serving on a U-boat mission during the decline of the Third Reich. For me it’s most similar to mountaineering, whose ethos is captured by the motto of the Seattle mountaineers: “Like fun, only different!”

Paper Abstract:

*Recent years have seen significant interest in designing networks that are self-healing in the sense that they can automatically recover from adversarial attack. Previous work shows that it is possible for a network to automatically recover, even when an adversary repeatedly deletes nodes in the network. However, there have not yet been any algorithms that self-heal in the case where an adversary takes over nodes in the network. In this paper, we address this gap.*

*In particular, we describe a communication network over n nodes that ensures the following properties, even when an adversary controls up to nodes, for any positive . First, the network provides point-to-point communication with bandwidth and latency costs that are asymptotically optimal. Second, message corruptions occur in expectation, before the adversarially controlled nodes are effectively quarantined so that they cause no more corruptions. We present empirical results showing that our approach may be practical.*

Filed under: Uncategorized | Tags: algorithms, secure multiparty computation, security, theory

Attached is a paper on a problem we’ve been working on for a while: an efficient algorithm for Secure Multiparty Computation (SMPC) against a static adversary.

In the SMPC problem, n players each have a private input, and their goal is to compute the value of an n-ary function, f, over the inputs, without revealing any information about the inputs. The problem is complicated by the fact that a hidden subset of the players is controlled by an adversary that actively tries to subvert this goal.
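The flavor of the problem is easiest to see on a concrete special case: computing a sum without revealing the inputs. Below is a minimal additive secret-sharing sketch for honest-but-curious players (the field size and function names are mine, and this is a textbook warm-up, not the algorithm from our paper, which handles active adversaries and general circuits):

```python
import random

P = 2**31 - 1  # all arithmetic is modulo a prime

def share(x, n):
    """Split x into n additive shares that sum to x mod P.
    Any n-1 of the shares are uniformly random, so they reveal nothing."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def secure_sum(inputs):
    """Each player shares its input; player j publishes only the sum of
    the shares it received, and the published partial sums reconstruct
    the total without exposing any individual input."""
    n = len(inputs)
    shares = [share(x, n) for x in inputs]                 # player i sends share j to player j
    partials = [sum(s[j] for s in shares) % P for j in range(n)]
    return sum(partials) % P
```

For general functions f, the same sharing idea is extended gate-by-gate over a circuit for f, which is where the per-player message and computation costs come from.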

SMPC abstracts several important problems in distributed security, and so, not surprisingly, there have been thousands of papers written in the last several decades addressing it. However, there is a striking barrier that prevents widespread use: current algorithms for SMPC are not resource efficient. In particular, if there are n players involved in the computation and the function f can be computed by a circuit with m gates, then most algorithms require each player to send a number of messages and perform a number of computations that is prohibitively large.

We describe scalable algorithms for SMPC against a static adversary. We assume a partially synchronous message-passing communication model, but unlike most related work, we do not assume the existence of a broadcast channel. Our main result holds for the case where there are n players, a constant fraction of which are controlled by an adversary. We describe an SMPC algorithm for this model that requires each player to send messages and perform computations to compute any function f, where m is the size of a circuit to compute f.

Filed under: Uncategorized | Tags: distributed computing, secure multiparty computation, security, theory

In this post, I want to talk about a research problem that I think even our new Republican legislators can get excited about: Yao’s millionaire problem. In this problem, two millionaires want to determine who is richer, but neither wants to reveal their private net worth. Can we develop a protocol to help these millionaires? This problem is important because it kicked off the study of secure multiparty computation.

Below is a picture of the protocol originally proposed by Yao. Alice and Bob are the two millionaires and i and j are the private net worths of Alice and Bob respectively. The protocol makes use of public key cryptography: Alice is assumed to be able to generate a public encryption key E_a() along with a private decryption key D_a(). To see that the protocol reveals who is richer, note that y_j = D_a(k) = x. Thus w_j = x if j<= i and w_j = x+1 if j>i. Showing that the protocol doesn’t reveal any information beyond who is richer is more challenging and is presented in detail in the paper.
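To make the picture concrete, here is a toy end-to-end transcript of the protocol in Python. The textbook RSA keypair is deliberately tiny and utterly insecure, the helper names are my own, and Yao's additional mod-p masking step (which is what makes the protocol reveal nothing beyond the comparison) is omitted for brevity; only the y_j = D_a(k) = x bookkeeping from the description above is shown:

```python
import random

def make_keys():
    # Insecure textbook RSA with small fixed primes, for illustration only.
    p, q, e = 10007, 10009, 65537
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), (n, d)

def E_a(pub, m):   # Alice's public encryption function
    n, e = pub
    return pow(m, e, n)

def D_a(priv, c):  # Alice's private decryption function
    n, d = priv
    return pow(c, d, n)

def alice_at_least_as_rich(i, j, M=10):
    """Alice's worth is i, Bob's is j, both in 1..M. Returns True iff j <= i."""
    pub, priv = make_keys()
    x = random.randrange(2, pub[0])   # Bob's secret random value
    k = E_a(pub, x)
    msg = k - j + 1                   # Bob sends k - j + 1 to Alice
    # Alice computes y_u = D_a(k - j + u) for u = 1..M; note y_j = D_a(k) = x.
    ys = [D_a(priv, msg + u - 1) for u in range(1, M + 1)]
    # Alice sends back w_u = y_u for u <= i, and y_u + 1 for u > i.
    ws = [y if u <= i else y + 1 for u, y in enumerate(ys, start=1)]
    return ws[j - 1] == x             # Bob checks the j-th value against x
```

Bob learns only a single bit (whether w_j equals his x), which is exactly the comparison result; in this simplified sketch, however, Alice's reply leaks more than it should, which is what the omitted masking step fixes.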

I should point out that this protocol takes exponential time and that this run time has been improved in subsequent papers. However, there is a question about this problem I haven’t been able to answer. In the case where there are n players, do we need private channels or cryptographic assumptions to solve the problem? Are private channels needed even if we’re happy with a Monte Carlo solution? I’ve seen several papers that remove cryptographic assumptions, but none that seem to remove the need for private channels. Conversely, I’ve seen no papers that prove that private channels are necessary…

Filed under: Uncategorized | Tags: distributed computing, game theory, PODC, security

Accepted papers for PODC 2011 are now up on the web here. There are many interesting papers – here are a few that immediately catch my eye:

- *“Error-Free Multi-Valued Consensus with Byzantine Failures”* by Guanfeng Liang and Nitin Vaidya. This paper gives a deterministic algorithm for solving multi-valued consensus with Byzantine faults that has asymptotically optimal bandwidth when the number of bits in the input values is large. The cool thing about the paper is that it uses network coding and an interesting amortized analysis. Essentially, there are multiple runs of a single-bit consensus algorithm. The single-bit consensus algorithm has lightweight checks and is 1) very efficient when the checks detect no malicious behavior; and 2) able to “kick out” a Byzantine processor whenever the checks do detect malicious behavior.
- *“Xheal: Localized Self-Healing Using Expanders”* by Amitabh Trehan and Gopal Pandurangan. Amitabh, my former PhD student and periodic guest blogger on this blog, is now a postdoc at Technion working with Shay Kutten and Ron Lavi. This paper is still very secret, as Amitabh won’t even send **me** a copy of the submitted version. However, we can guess that the paper describes an algorithm that ensures an overlay network 1) keeps paths between nodes short; and 2) keeps degrees of nodes small. That word “localized” is very tantalizing, as it suggests that, unlike previous work, this new self-healing algorithm requires only local communication.
- *“Adaptively Secure Broadcast, Revisited”* by Juan Garay, Jonathan Katz, Ranjit Kumaresan and Hong-Sheng Zhou. Again, this paper doesn’t seem to be up on the web, and I didn’t get to read it as a member of the PC. However, the problem itself, described in the paper Adaptively Secure Broadcast by Martin Hirt and Vassilis Zikas, looks interesting: basically, the goal is secure broadcast in a scenario where an adaptive adversary can corrupt up to a 1/3 fraction of the players, *including the sender*. The original paper by Hirt and Zikas shows that past results on the secure broadcast problem are not secure under composition, and they then present a new protocol that is secure under composition, along with a new simulation-based security notion for the problem.
- *“Fault-Tolerant Spanners: Better and Simpler”* by Michael Dinitz and Robert Krauthgamer. This paper shows how to create spanners of general graphs that are tolerant to vertex failures. They give a transformation for k >= 3 that converts every k-spanner construction with at most f(n) edges into a k-spanner construction that can tolerate r faults and uses O(r^3 log n)*f(2n/r) edges. For the case k=2, they improve the approximation ratio from O(r log n) to O(log n). Bonus: this last result uses an LP relaxation and the Lovasz Local Lemma.
- There are several papers that did not make it as regular papers that I hope to see as brief announcements. Privacy concerns keep me from mentioning anything in particular here, but there were some nice lower-bound results that were invited to be resubmitted as brief announcements. I hope the authors will accept.

Question: Are there no game theory papers this year except the two that we submitted? This seems surprising.

Filed under: Uncategorized | Tags: algorithms, cryptography, game theory, PODC, security, theory

Decisions for PODC 2011 (in San Jose) have just been mailed out this morning. In a later post, I’ll talk about some of the interesting accepted papers this year – there are many. For now I wanted to mention two of my own papers that were accepted and are now up on my web page.

Conflict on a Communication Channel. This is a paper that I had previously blogged about here. I’m still very excited about the idea of maintaining security invariants while spending asymptotically less than an adversary that is trying to break them. In this paper, we were able to do this for a few problems related to jamming in sensor networks and denial of service attacks. However, the application of this kind of game theoretic analysis to security problems is still wide open. Many interesting problems remain.

Scalable Mechanisms for Rational Secret Sharing. This is a paper that I had **not** previously talked about on this blog. The paper considers the problem of secret sharing when each player is rational. In particular, we assume that each player prefers to learn the secret over not learning it, but prefers to be the only player learning the secret over being one of many players learning it. Much interesting work has been done on this problem in the last few years, with the main goal being to design a mechanism that ensures that all players learn the secret (thereby maximizing social welfare). The new problem we focus on in this paper is designing mechanisms that both reveal the secret to all players and are **scalable** in the sense that they require each player to send just a constant number of messages in expectation. Secret sharing is a subroutine for many cryptographic protocols, so one motivation is to develop a scalable tool for problems like rational multiparty computation and implementation of a mediator.
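The underlying secret-sharing primitive is simple to sketch. Below is a minimal Shamir (t, n) scheme, the standard building block on which mechanisms like the one in the paper are layered (the field size and function names are my own choices; the rational/scalable mechanism design itself is not shown):

```python
import random

P = 2**61 - 1  # prime field for all arithmetic

def share(secret, t, n):
    """Split secret into n shares: evaluate a random degree-(t-1)
    polynomial with constant term = secret at x = 1..n.
    Any t shares reconstruct the secret; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total
```

The rational-player twist is entirely about *when and how* players exchange these shares, since a selfish player would rather collect everyone else's shares and withhold its own.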

Filed under: Uncategorized | Tags: distributed algorithms, security, theory

I enjoy collecting meta-level tricks for coming up with new research problems. Two tricks I know of are “zooming out” and “zooming in”. I think that “zooming out” is the more obvious trick: find several important problems that share some fundamental similarity and formulate a single abstract problem that allows for simultaneous progress in understanding all the particular problems. Many big problems in complexity theory are of this type.

The other technique, “zooming in”, seems to be just as important, but it is used much less frequently, perhaps because it’s harder to determine when it should be applied. One of my most cited research results is on a problem that was devised using the “zooming in” technique: the problem of choosing a random peer in a large network. The story behind this paper is that we wanted to solve some hard problem on peer-to-peer networks, and one part of our solution required as a subroutine an algorithm to choose a peer uniformly at random in the network. Implementing this subroutine seemed like the easiest part of the problem, and so we saved it for the end. But then, when it came time to solve it, we became more and more annoyed by the difficulty of the problem. Finally, we did a thorough literature search on this subproblem, which at first we had thought would be the easiest part of the paper. Not only did we not find any correct solutions in the literature, we actually found several **incorrect** solutions. At this point, we realized the problem was important and interesting in its own right, and we wrote an entire paper on just this one subproblem.
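To give a flavor of why uniform peer selection is subtle: a plain random walk on the network is biased toward high-degree peers, since its stationary distribution is proportional to degree. One standard fix is a Metropolis-Hastings correction, sketched below (this is a textbook illustration of the subproblem, not the algorithm from our paper; names are my own):

```python
import random

def random_peer(adj, start, steps=2000):
    """Random walk with a Metropolis-Hastings correction: accept a
    proposed move u -> v with probability min(1, deg(u)/deg(v)).
    This makes the walk's stationary distribution uniform over peers,
    whereas a plain random walk would favor high-degree peers."""
    u = start
    for _ in range(steps):
        v = random.choice(adj[u])
        if random.random() < len(adj[u]) / len(adj[v]):
            u = v
    return u
```

Even this fix raises the questions that make the problem hard in practice: how many steps suffice on a network of unknown size and topology, and what happens when some peers are adversarial.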

Today I wanted to highlight a recent nice example of the “zooming in” technique, in an ICDCS paper by my former student Maxwell Young. This paper, “Practical Robust Communication in DHTs Tolerating a Byzantine Adversary”, by Maxwell Young, Aniket Kate, Ian Goldberg and Martin Karsten, is a systems paper that presents a clever algorithm for efficient and robust communication in a distributed hash table (DHT). A DHT is basically a distributed data structure that allows for efficient storage and lookup of content based on keywords. The nice trick they use in this paper is to formulate the research problem by “zooming in”. Instead of designing yet another DHT from scratch that is robust to adversarial control of nodes, they assume that part of the problem is already solved: there is already a DHT with the property that each “supernode” contains a majority of peers that are **not** controlled by an adversary (there are any number of papers that show how this property can be maintained). They then focus only on the problem of routing most efficiently through such a DHT. To my knowledge, this is the first paper that focuses only on this important subproblem of robust communication in a DHT. And if you take a look at the paper, you’ll see that the results they achieve are much better than those of other algorithms that have been proposed, as asides and perhaps without too much thought, in other papers for this problem. While this paper focuses just on the *empirical* side of this problem, I think there are still many interesting open problems on the theoretical side.