Machinations


Simons Institute Workshop on Securing Computation
June 20, 2015, 6:28 am
Filed under: Uncategorized

[The following report on the Securing Computation Workshop at the Simons Institute was written by my students Mahnush Movahedi and Mahdi Zamani. A polished version of this report is available HERE.]

2015 Workshop on Securing Computation

June 8-12, Simons Institute for the Theory of Computing, University of California, Berkeley


Lower Bounds for Information-Theoretic MPC, Ivan Damgård, Aarhus University

Ivan started his talk by comparing unconditional multi-party computation (MPC) protocols to cryptographic techniques such as FHE: unconditional protocols are usually much more computationally efficient than FHE protocols, but they typically require a lot of interaction, so their round complexity is large. One interesting question to ask here is: “Is it possible to design an unconditionally-secure FHE scheme that has the benefits of both?” The answer is, unfortunately, still unknown, but we have some partial answers. The goal here is to determine how many messages we need to transmit in order to securely compute a non-trivial function (such as the AND of n bits, each coming from one player). Ivan argued that with n players and t semi-honest corruptions, the protocol must send Ω(nt) messages. For example, for n=3 and t=1, we need to send at least six messages to compute the AND functionality. The intuition behind this is that every player must communicate with at least t+1 players, and must later receive another message to compute the output. So for each player we count (t+1)+2 messages sent or received, and since each message is counted at both its sender and its receiver, the total is at least n(t+3)/2 messages. One thing that I should point out here (and which my own research touches on) is that they assume all-to-all communication. Several protocols (such as [DKMS12, BGT13, DKMS14]) circumvent this lower bound by requiring each party to communicate with only a small (polylog-size) set of parties (see [SZ15] for a complete discussion of sublinear MPC results).
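
To make the counting explicit (this is just the arithmetic of the argument above, written out):

    messages counted per player ≥ (t+1) + 2 = t + 3
    total messages ≥ n(t+3)/2   (each message is counted at both endpoints)
    n = 3, t = 1:  3(1+3)/2 = 6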

Figure 1. Calvin Lab Auditorium

Bounding the communication and rounds needed in general seems hard. But it might be possible to bound these for gate-by-gate protocols, where a circuit is evaluated gate by gate: the invariant is that we secret-share the inputs to each gate and then evaluate the gate functionality over the secret-shared inputs to generate the gate output. Known protocols with this strategy have Ω(n|C|) communication complexity and Ω(|C|) round complexity, where |C| is the circuit size. But are these optimal? The bottleneck for gate-by-gate protocols is the multiplication gate, which Ivan argued requires at least t messages to evaluate. Thus, the best known results are asymptotically optimal. Ivan then finished his talk by showing that the same lower bounds hold in the case of dishonest majority, where a preprocessing phase is necessary for MPC. Thus, to really improve on the existing protocols, we need a fundamentally different approach.

Obfuscation: Past, Present, and Possible Futures, Amit Sahai, UCLA

In program obfuscation, a given program P1 is converted to another program P2 that computes the same functionality (i.e., the same I/O behavior) while an adversary who can see P2 cannot understand its logic. From a cryptographic perspective, this is like having software that can keep a secret. This leads to a notion of program obfuscation called indistinguishability obfuscation (iO), where a polynomial-time adversary cannot distinguish between the obfuscations of two equivalent programs. The first mathematical construction of an indistinguishability obfuscator was proposed by Garg-Gentry-Halevi in 2013. The main idea is to obfuscate programs using structured noise rather than just random noise: when the program is evaluated, the noise cancels out and the correct output is obtained.

To this end, a multilinear map can be used: it encodes field elements, supports specific algebraic manipulations on the encodings, and provides a zero-test that reveals whether an encoded value is zero. The program is first converted to a sequence of matrices using an oblivious matrix branching program. Then Kilian’s randomization technique is used to generate the structured noise: the matrices are multiplied by a sequence of random matrices. A key challenge for current obfuscation schemes is the input-mixing attack, so a technique is required to enforce honest behavior in some way. Amit finished his talk by arguing that multilinear maps provide a new source of hardness on the level of the Diffie-Hellman and learning-with-errors hardness assumptions, but so far they have mainly been used for obfuscation. One open question is whether multilinear maps can be used to solve other security problems.
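
To make Kilian’s randomization step concrete, here is a minimal sketch (my own toy illustration over a small prime field, not the actual obfuscation construction): conjugating each matrix of a branching program by fresh random invertible matrices re-randomizes every individual matrix while leaving the iterated product, and hence the program’s output, unchanged.

    import numpy as np

    P = 101  # small prime modulus, chosen arbitrarily for illustration

    def rand_invertible(rng):
        # Rejection-sample a random invertible 2x2 matrix over Z_P.
        while True:
            M = rng.integers(0, P, size=(2, 2))
            det = int(M[0, 0]) * int(M[1, 1]) - int(M[0, 1]) * int(M[1, 0])
            if det % P != 0:
                return M

    def inv2x2(M):
        # Inverse over Z_P via the adjugate and a Fermat inverse of det.
        a, b, c, d = int(M[0, 0]), int(M[0, 1]), int(M[1, 0]), int(M[1, 1])
        det_inv = pow((a * d - b * c) % P, P - 2, P)
        return (det_inv * np.array([[d, -b], [-c, a]])) % P

    def kilian_randomize(mats, rng):
        # B_i = R_{i-1}^{-1} A_i R_i with R_0 = R_L = I, so the product
        # of the B_i telescopes to the product of the A_i.
        L = len(mats)
        R = ([np.eye(2, dtype=np.int64)]
             + [rand_invertible(rng) for _ in range(L - 1)]
             + [np.eye(2, dtype=np.int64)])
        return [(inv2x2(R[i]) @ mats[i] @ R[i + 1]) % P for i in range(L)]

    def prod_mod(mats):
        out = np.eye(2, dtype=np.int64)
        for M in mats:
            out = (out @ M) % P
        return out

    rng = np.random.default_rng(0)
    A = [rng.integers(0, P, size=(2, 2)) for _ in range(4)]
    B = kilian_randomize(A, rng)
    assert np.array_equal(prod_mod(A), prod_mod(B))  # same functionality
    print(B[0])  # ...but each individual matrix now looks random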

Two-Round MPC via Multi-Key FHE, Daniel Wichs, Northeastern University

Daniel started his talk by asking two motivating questions: (1) Can we construct MPC with minimal round complexity? (2) Can we construct MPC directly using FHE techniques? A fully homomorphic encryption (FHE) scheme allows one to perform computation over encrypted data without decrypting it [Gen09]. Asharov et al. [AJLTVW12] show how FHE can be used to construct constant-round MPC by designing a threshold FHE (TFHE) scheme that provides Byzantine resilience and circuit-independent communication cost. All parties first encrypt their inputs under an FHE scheme (such as that of [BGV12]) and send the encrypted values to all other parties. Then each party evaluates the desired function over the encrypted inputs via homomorphism and eventually participates in a distributed decryption protocol to decrypt the output. This protocol requires a distributed key-generation algorithm for agreeing on a common public key and a secret-shared private key. While the MPC protocol of [AJLTVW12] requires three rounds of communication, Daniel described a new FHE-based MPC protocol that requires only two rounds. Both this result and [AJLTVW12] are secure in the common random string (CRS) model (an algorithm is secure in the CRS model if it assumes that all parties have access to a common random string drawn from a predetermined distribution); this assumption is required for malicious security in both works. Also, both results require a non-interactive zero-knowledge (NIZK) technique to enforce honest behavior in the malicious setting.
The idea of the new result is to remove the first round (distributed key generation) of the protocol: each party chooses its own public/secret key pair without coordinating with the other parties, encrypts its input under its own public key, and broadcasts the ciphertext. But how can we compute over ciphertexts generated under these uncoordinated keys and then decrypt the result? The key idea is to use a multi-key FHE scheme that can compute over a set of ciphertexts, each encrypted under a different key. The result is another ciphertext that can only be decrypted with a certain number of the private keys. Recently, Clear and McGoldrick [CM14] proposed a multi-key FHE scheme based on the learning with errors (LWE) hardness assumption (the LWE problem asks to recover a secret from a sequence of “approximate” random linear equations; algorithms based on the LWE assumption belong to so-called post-quantum cryptography, whose security cannot be broken in feasible time even using a quantum computer).
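
To show just the message pattern of the two-round protocol, here is a toy sketch (entirely my own illustration: the multi-key FHE is mocked by placeholder functions that provide no security whatsoever, and the NIZK proofs are omitted):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Ct:
        keys: frozenset  # which public keys this ciphertext is "under"
        value: int       # mock only: the plaintext rides along in the clear!

    def keygen(i):
        return (f"pk{i}", f"sk{i}")  # placeholder key pair

    def enc(pk, bit):
        return Ct(frozenset([pk]), bit)

    def multi_key_eval_and(cts):
        # Mocked multi-key homomorphic AND: the result is "under" the
        # union of all input keys, as in a real multi-key FHE scheme.
        keys = frozenset().union(*(c.keys for c in cts))
        return Ct(keys, int(all(c.value for c in cts)))

    def partial_dec(sk, ct):
        return (sk, ct.value)  # mocked partial-decryption share

    inputs = {1: 1, 2: 1, 3: 0}
    keys = {i: keygen(i) for i in inputs}

    # Round 1: each party independently picks keys, encrypts, broadcasts.
    round1 = {i: enc(keys[i][0], x) for i, x in inputs.items()}

    # Local step: every party homomorphically evaluates AND on its own.
    y = multi_key_eval_and(list(round1.values()))

    # Round 2: each party broadcasts a partial decryption; combining all
    # shares yields the output (trivially so, in this mock).
    shares = [partial_dec(keys[i][1], y) for i in inputs]
    print("output:", y.value if len(shares) == len(inputs) else None)  # -> 0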

Efficient Multiparty Protocols via Log-Depth Threshold Formulae, Ron Rothblum, Weizmann Institute

Ron presented a new approach for designing multi-party computation protocols with more than two parties. Their approach is motivated by the fact that the huge body of work on MPC is very complicated; there is a need for a simple and flexible approach. The basic technique they propose is called player emulation: the computational task run by a player is emulated by other players. This allows us to reduce the construction of an n-party protocol to the construction of (small) constant-party protocols, which are much easier to design. The idea consists of two steps:
  1. Design a protocol for a small number of parties (say, 3 or 4) which achieves security against a single corrupted party. Such protocols are typically easy to construct as they may employ techniques that do not scale well with the number of corrupted parties.
  2. Recursively compose the protocol with itself to obtain an efficient n-party protocol that achieves security against a constant fraction of corrupted parties.

The reduction works as follows. First, consider a trivial n-party protocol that includes a trusted party τ: the parties send their inputs to τ, who locally computes the MPC functionality and returns the output to all parties. Now, replace the trusted party with a small constant number of virtual parties, say v1, v2, v3, which jointly emulate the behavior of τ. The new protocol is secure as long as the adversary does not control a majority of the virtual parties. We proceed by replacing v1 with three new virtual parties w1, w2, w3. The new protocol is secure even if the adversary controls either one of v2, v3 and one of the w’s, or two of the w’s. Continuing this way, Ron constructed a formula as shown in Figure 2, which can be used to build a formula for the entire protocol as shown in Figure 3, where the leaves are the real parties.

Figure 3. Threshold Formula for the Entire Protocol

Two things are necessary for this approach to yield efficient MPC protocols: (1) a secure 3-party protocol, and (2) a logarithmic-depth formula that computes majority using only constant fan-in majority gates. For the first requirement we can simply use the protocol of [BGW88] with three parties. For the second, Ron proposed two techniques: a randomized majority-from-majorities algorithm (which yields statistical security), and a deterministic but only close-to-optimal solution (see his talk for more details). The approach Ron presented allows us to give new and conceptually simple proofs of classical MPC results such as [BGW88], and to design simple MPC protocols in the future.
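
To get a feel for the randomized majority-from-majorities idea, here is a toy sketch (my own, and only a caricature: it evaluates a random formula of 3-input majority gates on one noticeably biased input, whereas the real construction must also handle inputs near an even split, which is what makes it subtle):

    import random

    def maj3(a, b, c):
        return int(a + b + c >= 2)

    def random_maj_formula(bits, depth, rng):
        # A depth-d formula of 3-input majority gates whose leaves are
        # uniformly random input positions, evaluated on the fly.
        if depth == 0:
            return rng.choice(bits)
        return maj3(*(random_maj_formula(bits, depth - 1, rng)
                      for _ in range(3)))

    rng = random.Random(0)
    bits = [1] * 60 + [0] * 39   # the majority bit is 1, with visible bias
    rng.shuffle(bits)
    outputs = [random_maj_formula(bits, depth=8, rng=rng) for _ in range(20)]
    print(sum(outputs), "of 20 random depth-8 formulas output the majority bit")

Each level maps the leaf bias p to 3p² − 2p³, which is why logarithmic depth suffices once the input is biased.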

Rethinking Secure Computation – A Greedy Approach, Muthu Venkitasubramaniam, University of Rochester

The general framework for secure computation consists of two steps: (1) compile the functionality into one of several standard representations, such as a circuit (Boolean or arithmetic) or an oblivious RAM; (2) use a generic scheme such as [Yao82, GMW87, BGW88, CCD88] to securely evaluate the functionality. However, for certain problems such as private set intersection, problem-specific protocols can be significantly more efficient. Inspired by this, Muthu presented a new algorithmic approach for designing secure two-party computation protocols. In their model (joint work with Abhi Shelat from UVA), they show that several problems, such as convex hull, minimum spanning tree, unit job scheduling, and single-source shortest path, can be securely computed using only a secure comparison primitive.

Figure 4. Abhi and Muthu’s Results

The underlying idea is that each of these problems has a known greedy algorithm. Accordingly, Muthu described a new framework called the Greedy-Millionaire framework, where a function f is said to be secure greedy compatible if it has the following three properties:

  1. Unique Solution: Given inputs U and V of Alice and Bob, f(U,V) is unique;
  2. Unique Order: There is a unique order in which the output is presented in the greedy algorithm;
  3. Local Updatability: The greedy step can be computed using a comparison between values that each party computes locally from its own input.
Figure 4 shows the message complexity of their protocols for the listed problems using the Greedy-Millionaire framework. Their model currently supports secure computation in the semi-honest and covert settings. Besides extending the model to the malicious setting, one open problem here is: “Can we adapt the framework to model secure computation using primitives other than secure comparison, for example, primitives that work for graph algorithms?”
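
To see the flavor of “greedy plus secure comparison,” here is a plain (insecure) sketch: Kruskal’s minimum spanning tree algorithm in which every decision that touches the inputs goes through a single compare() call, the one operation the framework would replace with a secure two-party comparison. The graph and weights are made up for illustration.

    import functools

    def compare(w1, w2):
        # Stand-in for the secure comparison primitive: True iff w1 < w2.
        return w1 < w2

    def find(parent, x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def greedy_mst(num_nodes, edges):
        # Kruskal's algorithm; the only weight-dependent step is the
        # sort, which uses compare() exclusively.
        order = sorted(edges, key=functools.cmp_to_key(
            lambda e, f: -1 if compare(e[2], f[2]) else 1))
        parent = list(range(num_nodes))
        tree = []
        for u, v, _ in order:
            ru, rv = find(parent, u), find(parent, v)
            if ru != rv:
                parent[ru] = rv
                tree.append((u, v))
        return tree

    edges = [(0, 1, 4), (1, 2, 1), (0, 2, 3), (2, 3, 2)]  # (u, v, weight)
    print(greedy_mst(4, edges))  # -> [(1, 2), (2, 3), (0, 2)]

The unique-solution and unique-order properties above are exactly what make this greedy trace well defined, so both parties agree on it while the weights themselves stay behind the comparison oracle.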

Cryptocurrencies and Smart Contracts, Elaine Shi, University of Maryland

Bitcoin is a decentralized cryptocurrency that can be used to perform secure computation using the concept of a blockchain. A blockchain is a public ledger of all Bitcoin transactions that have ever been executed; it grows constantly as “completed” blocks with new sets of records are added to it (read more about blockchains here). The blockchain used in Bitcoin is guaranteed to always be correct, but it does not guarantee the privacy of users. Elaine described a universally composable model of the Bitcoin protocol, where the functionality F_blockchain exposes the internal state of the blockchain to everyone, and the security proof is conducted in the F_blockchain-hybrid model.
Next, Elaine introduced Ethereum, a new cryptocurrency that consists of a ledger and user-defined smart contracts. The contracts run transactions (e.g., transfer amount x from y) over the ledger, and the ledger ensures consensus among all users. A system called Hawk provides privacy-preserving smart contracts. Hawk implements contracts such as auctions: each auction receives bids from the users, and the auctioneer closes the auction at some point. One challenging problem here is that all bids are visible to the public, so, for example, a strategic user can wait for the others and then submit its bid. In their model, the auctioneer (also called the manager) is assumed to be semi-trusted; it would be standard to implement the manager using MPC, but for efficiency reasons this is not done in their model. Hawk supports financial fairness by punishing bad behavior.
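
As a toy illustration of the visible-bid problem (my own simplification, not Hawk’s construction: Hawk uses zero-knowledge techniques so that bids stay private, whereas the commit-reveal sketch below merely delays disclosure until the auction closes):

    import hashlib
    import os

    def commit(bid):
        # Commitment = H(nonce || bid); the random nonce hides low-entropy bids.
        nonce = os.urandom(16)
        digest = hashlib.sha256(nonce + bid.to_bytes(8, "big")).digest()
        return digest, nonce

    def verify(digest, nonce, bid):
        return hashlib.sha256(nonce + bid.to_bytes(8, "big")).digest() == digest

    bids = {"alice": 30, "bob": 42}

    # Bidding phase: only commitments are posted, so no bidder can wait
    # and tailor a bid to what the others have submitted.
    posted = {name: commit(b) for name, b in bids.items()}

    # Reveal phase: after the auction closes, bidders open their commitments.
    for name, bid in bids.items():
        digest, nonce = posted[name]
        assert verify(digest, nonce, bid)
    print("winner:", max(bids, key=bids.get))  # -> bob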

References

[AJLTVW12] Gilad Asharov, Abhishek Jain, Adriana López-Alt, Eran Tromer, Vinod Vaikuntanathan, Daniel Wichs. Multiparty Computation with Low Communication, Computation and Interaction via Threshold FHE. In Advances in Cryptology — EUROCRYPT 2012. Springer Berlin Heidelberg, 2012.

[BGT13] Elette Boyle, Shafi Goldwasser, Stefano Tessaro. Communication locality in secure multi-party computation: how to run sublinear algorithms in a distributed setting. Proceedings of the 10th theory of cryptography conference on Theory of Cryptography, pp. 356—376, 2013.

[BGV12] Zvika Brakerski, Craig Gentry, Vinod Vaikuntanathan. Fully Homomorphic Encryption Without Bootstrapping. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, pp. 309—325, 2012.

[CCD88] David Chaum, Claude Crépeau, Ivan Damgård. Multiparty unconditionally secure protocols. Proceedings of the twentieth annual ACM symposium on Theory of computing, pp. 11—19, 1988.

[DKMS12] Varsha Dani, Valerie King, Mahnush Movahedi, Jared Saia. Brief announcement: breaking the O(nm) bit barrier, secure multiparty computation with a static adversary. Proceedings of the 2012 ACM Symposium on Principles of Distributed Computing, pp. 227—228, 2012.

[DKMS14] Varsha Dani, Valerie King, Mahnush Movahedi, Jared Saia. Quorums Quicken Queries: Efficient Asynchronous Secure Multiparty Computation. In Distributed Computing and Networking . Springer Berlin Heidelberg, 2014.

[Gen09] Craig Gentry. Fully homomorphic encryption using ideal lattices. Proceedings of the 41st annual ACM symposium on Theory of computing, pp. 169—178, 2009.

[BGW88] Michael Ben-Or, Shafi Goldwasser, Avi Wigderson. Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computing. Proceedings of the Twentieth ACM Symposium on the Theory of Computing (STOC), pp. 1—10, 1988.

[CM14] Michael Clear, Ciarán McGoldrick: Multi-Identity and Multi-Key Leveled FHE from Learning with Errors. Cryptology ePrint Archive, Report 2014/798, 2014.

[GMW87] Oded Goldreich, Silvio Micali, Avi Wigderson. How to play ANY mental game. Proceedings of the nineteenth annual ACM symposium on Theory of computing, pp. 218—229, 1987.

[SZ15] Jared Saia, Mahdi Zamani. Recent Results in Scalable Multi-Party Computation. In SOFSEM 2015: Theory and Practice of Computer Science (Giuseppe F. Italiano, Tiziana Margaria-Steffen, Jaroslav Pokorný, Jean-Jacques Quisquater, Roger Wattenhofer, eds.). Springer Berlin Heidelberg, 2015. URL: http://dx.doi.org/10.1007/978-3-662-46078-8_3.

[Yao82] Andrew C. Yao. Protocols for secure computations. Proceedings of the 23rd Annual Symposium on Foundations of Computer Science, pp. 160—164, 1982.



The Coolest Problem You’ve Never Heard Of
April 20, 2015, 6:32 pm
Filed under: Uncategorized

How do you compute over a noisy channel?  Say Alice and Bob want to run a distributed algorithm, but they can only communicate over a channel where each bit is flipped either stochastically or adversarially.  What is the smallest number of bits they must send in order to obtain the correct output of the algorithm, with high probability?  This general problem is called Interactive Communication, or sometimes Interactive Coding/Computation – let’s use IC for short.  It doesn’t take much imagination to think of applications of IC: enabling sensor nodes to compute a function on their inputs in a noisy environment; designing reliable natural algorithms [1]; and designing attack-resistant algorithms [2], just to name a few.

Wait, you may be thinking, isn’t this problem solved by error-correcting codes?  No – the bits sent in the distributed algorithm may be determined only one at a time, in a dynamic manner.  Even for stochastic noise, if we naively encode each bit sent, then in order to get a union bound ensuring the probability of any decoding error is small, we’ll need logarithmic blowup for each bit.  We can do better.
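
To spell out that union-bound arithmetic (a standard calculation, not specific to this post): suppose the algorithm sends T bits and we encode each one as a b-bit block whose decoding fails with probability at most 2^{-cb} for some constant c > 0. Then

    Pr[any block decodes incorrectly] ≤ T · 2^{-cb},

and driving this below a constant forces b = Ω(log T), i.e., total communication Ω(T log T) rather than O(T).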

In fact, in the first paper on this problem, Coding for Interactive Communication, Leonard Schulman described an algorithm that requires only constant blowup, even when a constant fraction of the bits on the channel are flipped adversarially.  Schulman’s algorithm was based on a new type of code, tree codes, designed specifically for interactive communication. After this paper was published in 1996, subsequent work  focused on improving the error tolerance of Schulman’s algorithm (i.e. tolerating a larger constant fraction of errors); tuning the communication blowup to the (known) error rate; and considering the situation where the computation occurs over more than 2 parties.  See the survey paper Coding for interactive computation: progress and challenges (especially Sections 3 and 4) for details.

I’ll note here that in a sense all of information theory can be thought of as a subset of IC. In particular, information theory considers “only” the class of distributed algorithms where Alice has an input that she wants to send to Bob.  In IC, we need to handle all possible distributed algorithms, and so in a way, it’s pretty amazing that a constant adversarial noise rate can even be tolerated.

A very recent breakthrough in IC occurred in FOCS 2014, where Bernhard Haeupler in the paper Interactive Channel Capacity Revisited, made huge progress on the problem of tuning the communication blowup to a known error rate.  In particular, he described an algorithm that given a fixed and known adversarial noise rate \epsilon, achieves a communication blowup as a function of \epsilon that is conjectured to be optimal.

So what if \epsilon is not known in advance?  I’m going to end this post with a short advertisement for a recent paper with some collaborators of mine that begins to address that problem.  The paper is Interactive Communication with Unknown Noise Rate, with Varsha Dani, Mahnush Movahedi, and Maxwell Young, and it will appear in the upcoming ICALP.  This is the first paper I’ve written about IC, and I hope it won’t be the last.  In fact, I’d like to see more people in the distributed computing community consider the problem.  In particular, it’d be nice to make progress on the (currently very hard) problem of somehow generalizing results to more than 2 parties.

[1]  For example, see this paper for a closely related problem: Breathe before Speaking: Efficient Information Dissemination Despite Noisy, Limited and Anonymous Communication

[2] This last domain is particularly interesting to me.  An obvious connection is attacks on communication channels.  A less-obvious connection is attacks on nodes.  If communication from each node is bounded, as is increasingly common for algorithms on large networks, there may be a fairly tight connection between attacks on communication channels and attacks on nodes.  Thus, there may be connections to attacks like Byzantine faults.



Reliable and Scalable Anonymity
April 18, 2015, 4:04 pm
Filed under: Uncategorized

If you’re in New Haven next week, you could do worse than checking out this talk at Yale’s CS Colloquium by my stellar PhD student Mahdi Zamani.



Congratulations!
April 7, 2015, 3:31 pm
Filed under: Uncategorized

Congratulations to Dr. George Saad, who just defended his dissertation (with distinction) yesterday on “Selfishness and Malice in Distributed Systems”.  George’s dissertation focused on 1) designing algorithms for self-healing in networks with Byzantine nodes and 2) designing mediators to improve social welfare in a type of El Farol game with both positive and negative network effects.  Here are slides from his excellent talk.

George will be taking a position at the Silicon Valley startup Jasper Technologies next month.  On his committee were Maxwell Young (Drexel University), Tom Hayes and Dorian Arnold.

Congratulations George!



Truth, Lies, and Random Bits
February 3, 2015, 11:57 pm
Filed under: Uncategorized

I gave a talk last week at SOFSEM 2015.  The talk covers some of our recent work that uses spectral analysis to solve a classic Byzantine agreement problem.  Here are the slides.



ACM Fellows
January 10, 2015, 12:29 am
Filed under: Uncategorized

Congratulations to the 2014 ACM Fellows, especially my friend and frequent collaborator Valerie King.  As usual, there are many theoreticians, including Allan Borodin, Faith Ellen, Michael Mitzenmacher, and Aravind Srinivasan.  The (somewhat) surprising thing is that all of these theoreticians have done significant work in distributed algorithms or networks.



BDA 2014
October 22, 2014, 10:31 pm
Filed under: Uncategorized

[The following report on BDA’14 was written by my student Mahdi Zamani]

[A more polished version of this report is available HERE]

Recently, Mahnush and I attended a workshop on Biological Distributed Algorithms co-located with DISC 2014. The workshop consisted of 20 talks spread over two days and focused on the relationships between distributed computing and distributed biological systems, and in particular on analyses and case studies that combine the two. The following is a summary of some of the talks we found interesting.

Insect Colonies and Distributed Algorithms; Insect Colony Researcher Viewpoint, Anna Dornhaus, University of Arizona

Anna talked about several open questions in modeling the behavior of different insect colonies. Insect colonies go through many changes over time in response to their changing needs and environment.

Figure 1. BDA 2014

Most changes happen via complex collective behaviors such as task allocation, foraging (finding food), nest building, and load transport. One interesting aspect of insect colonies is that unreliable individual behaviors result in complex group behaviors that are reliable. Individuals use various methods of communication such as pheromone trails, versatile signals, visual cues, substrate vibration, and the waggle dance, a sophisticated method of communication among honeybees that indicates a resource location by its angle from the sun. Biologists are generally interested in computer models that show how individual behaviors impact group behaviors. In particular, they want to understand how positive feedback (a process A produces more of another process B, which in turn produces more of A, and so on) leads to significant consequences such as symmetry breaking. For example, ants tend to choose a single food source even if there are multiple similar sources around them, and larger colonies exhibit more symmetry-breaking behavior. This motivates the following questions: How does the size of a colony affect collective behavior? Why is the workload distribution so uneven in some biological systems?

Distributed Algorithms and Biological Systems, Nancy Lynch, MIT

Nancy started by describing similarities between biological and distributed systems. Both often have components that perform local communication using message passing and local broadcasting. In bio systems, there are components with simple states that follow simple rules. To model a bio system as a distributed algorithm, the first step is to define the problem, the platform (the physical capabilities of the system, such as memory), and the strategies (rules).
Nancy then talked about two important distributed problems: leader election and maximal independent set (MIS). In leader election, there is a ring of processes that can communicate with their neighbors, and the goal is to pick a leader process. If the processes are all identical and their behaviors are deterministic, then solving this problem is impossible due to symmetry (all processes look the same). On the other hand, if the processes are not identical (i.e., each has a unique ID), then finding a leader is possible. Interestingly, in a setting with identical processes that are allowed to make random choices, the problem can be solved using a biased coin: each process flips a coin that comes up heads with probability 1/n, and announces itself the leader on heads.
In MIS, the goal is to find a maximal set of vertices of a graph such that no two neighbors (i.e., vertices connected directly by an edge) are both in the set. There is a Las Vegas algorithm for solving this problem: in each of several rounds, each party flips a biased coin and informs its neighbors that it is in the MIS if it has not received a similar message from a neighbor. Each party stops once either it is in the MIS or one of its neighbors is in the MIS. Nancy finally talked about three ant-colony problems that her research group has recently been working on: foraging, house hunting, and task allocation.
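
Here is a toy rendering of that Las Vegas MIS algorithm (my own sketch of the round structure Nancy described; the particular coin bias of 1/(2·degree) is an assumption on my part, borrowed from the classic Luby-style analysis):

    import random

    def randomized_mis(adj, rng):
        # adj: dict mapping each node to its set of neighbors. Each round,
        # every live node flips a biased coin; a node that flips heads and
        # hears no heads from a live neighbor joins the MIS, after which
        # it and its neighbors drop out.
        live = set(adj)
        mis = set()
        while live:
            heads = {v for v in live
                     if rng.random() < 1.0 / (2 * max(1, len(adj[v] & live)))}
            for v in heads:
                if not (adj[v] & heads):  # no neighbor also flipped heads
                    mis.add(v)
            live -= {v for v in live if v in mis or adj[v] & mis}
        return mis

    rng = random.Random(1)
    adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}  # a 6-cycle
    mis = randomized_mis(adj, rng)
    assert all(not (adj[v] & mis) for v in mis)         # independence
    assert all(v in mis or adj[v] & mis for v in adj)   # maximality
    print(sorted(mis))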

Modeling Slime Mold Networks, Saket Navlakha, CMU

Saket started his talk by describing an experiment with slime mold, where the mold’s food was placed at locations matching the stations of the Tokyo rail system. The researchers observed that the mold grew into a network similar to the rail system. This is very interesting because the engineers could, in effect, have asked a slime mold to design the rail system instead of spending many hours on the design.

Saket then continued by describing their model of slime mold behavior for finding food sources. For simplicity, they assume the network starts as a complete graph; then, by calculating the flow over the edges (tubes), some of the edges are disconnected. They then measured and compared the cost, efficiency, and fault tolerance of the network generated by their model against the Tokyo rail system: their model is as efficient as the rail system! Brain development shows a similar behavior: the brain generates a dense neural network at first and then prunes it over time. This is called the synaptic pruning algorithm.

Figure 2. Yellow slime mold growth (courtesy of ScienceDaily)

The human brain starts with a very dense network of neurons, and each edge keeps track of the number of times it is used to route information, with the traffic drawn from some predetermined distribution. The network is then pruned based on this flow information. Saket finally talked about a similar distributed model for bacterial foraging (E. coli) in complicated terrains.
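
A toy rendering of that use-it-or-lose-it pruning loop (entirely my own illustration: the skewed traffic distribution, the direct-edge routing, and the keep-12-edges cutoff are all invented for the example):

    import itertools
    import random

    nodes = list(range(8))
    weights = [1.0 / (v + 1) for v in nodes]  # skewed demand: low IDs are "hubs"
    rng = random.Random(0)

    # Phase 1: start from the complete graph and count how often each
    # edge carries a message (here, messages use the direct edge).
    usage = {frozenset(e): 0 for e in itertools.combinations(nodes, 2)}
    for _ in range(5000):
        s, t = rng.choices(nodes, weights=weights, k=2)
        if s != t:
            usage[frozenset((s, t))] += 1

    # Phase 2 ("synaptic pruning"): keep only the most-used edges.
    kept = sorted(usage, key=usage.get, reverse=True)[:12]
    print(sorted(tuple(sorted(e)) for e in kept))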

Collective Load Transport in Ants, Ofer Feinerman, Weizmann Institute

Ever seen a group of ants carrying a large food item together? Ofer talked about collective load retrieval, the process in which a large number of ants cooperate to carry a large food item to the nest. Ofer’s research team tracked a group of ants and their load over distances of about 1000 ant lengths and used image analysis to obtain highly detailed information. They showed that the collective motion is highly cooperative and guided by temporary leaders who know the correct direction home. Ofer finally presented a theoretical model suggesting that the ant-load system is poised at a critical point between random and ballistic motion, which renders it highly susceptible to a knowledgeable leader. He played a video showing a group of ants carrying their load in the wrong direction; then one ant joined the group as the leader and corrected the direction.

Distributed Information Processing by Insect Societies, Stephen Pratt, Arizona State University

Stephen talked about a collective model of optimal house-hunting in the rock ant Temnothorax albipennis. Each colony of T. albipennis has a single queen and hundreds of scouts (workers). In the process of house-hunting, scouts first discover new nests and assess them according to criteria such as size and darkness. Then they recruit other ants to the new nest using tandem running, in which an informed ant leads a second ant to her destination to get a second opinion about the nest (see Figure 3).

Figure 3. Ants tandem run (courtesy of Stephen Pratt)

When the number of ants in the new nest reaches a threshold, scouts begin rapid transport of the rest of the colony by carrying nest-mates. At each time step, each ant is in one of three states: explore, tandem, and transport. Transitions between these states happen based on the ant’s evaluation of the quality of the nest sites and the population of ants at those sites. Stephen defines optimality in terms of the speed and accuracy of the decision-making process. He asked: does a colony have a greater cognitive capacity than an individual? For the house-hunting process, recent lab experiments show that when the number of nests (choices) increases, a colony performs much better at choosing the good options, while lone ants visit more nests. He then asked: do colonies make more precise discriminations than individuals? To answer this, Stephen’s team ran experiments measuring how well individuals and colonies can distinguish a dark nest from nests of various brightness levels. Interestingly, they observed that colonies choose the darker nest with significantly more accuracy than individuals. They also showed that even two ants perform significantly better than one.

Cells, Termites, and Robot Collectives, Radhika Nagpal, Harvard University

Radhika talked about biological systems from an engineering viewpoint. Collective behaviors often result in self-repairing and self-organizing properties, which are crucial for building robust systems. In bio systems, these properties are achieved through the cooperation of vast numbers of unreliable agents.

Figure 4. Termite-inspired robots
(courtesy of Harvard University)

Radhika described a bio-inspired distributed robot system that can perform group tasks such as collective construction, collective transport, and pattern/shape formation. The robots achieve a desired global goal in a completely decentralized fashion by performing local interactions with other robots. In particular, her group modeled a large population of termites building complex structures (see Figure 4).

Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups, Amos Korman, CNRS and University of Paris Diderot

Amos started his talk by defining two methods of communication that exist in biological systems: passive (indirect) and active (direct) communication. Passive communication transfers information with no direct intention of signaling, i.e., cues from the behavior of one animal are indirectly perceived by others. For example, it has been shown that animals align their movements with those of their neighbors. In active communication, an animal communicates directly with others by sending parts of its internal state via, for example, pheromone trails, cell signaling, etc. Amos then argued that a notion of confidence exists among animals: they are known to become more responsive as their certainty drops. For example, crickets increase their speed when they are more confident about their intention, and this confidence is propagated from one cricket to others via passive and active communication. By sharing their confidence, agents improve their unreliable individual estimates. Amos described an algorithm in which each agent compresses all the information it has gathered into a single parameter that represents its confidence in its behavior; this yields a very simple and near-optimal algorithm. The algorithm continuously updates an agent’s confidence level based on the interactions it has with other agents. Unfortunately, if agents are subject to bandwidth and computational restrictions, the performance of this algorithm decreases significantly. Also, the algorithm assumes that any two agents who exchange confidence information have disjoint exchange histories.
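
One natural way to picture “compress everything into a single confidence parameter” (my own guess at a concrete instantiation, not Amos’s actual model) is precision-weighted averaging: each agent keeps an estimate and a confidence weight, and a meeting merges the two. The disjoint-history assumption is what lets the confidences simply add.

    from dataclasses import dataclass

    @dataclass
    class Agent:
        estimate: float    # e.g., believed direction to a target
        confidence: float  # inverse-variance-style weight

    def merge(a, b):
        # Precision-weighted average: the merged estimate leans toward
        # the more confident agent, and confidence accumulates.
        c = a.confidence + b.confidence
        est = (a.estimate * a.confidence + b.estimate * b.confidence) / c
        return Agent(est, c)

    a = Agent(estimate=10.0, confidence=1.0)  # noisy, unsure
    b = Agent(estimate=0.0, confidence=4.0)   # more reliable
    print(merge(a, b))  # -> Agent(estimate=2.0, confidence=5.0)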

Task Allocation in Ant Colonies, Alex Cornejo, Harvard University

Alex presented a general mathematical model for task allocation in ant colonies that can be used to achieve an optimal division of labor. Based on this model, Alex described an efficient distributed randomized algorithm for task allocation that imposes minimal assumptions on the capabilities of individual ants. The algorithm requires only a constant amount of memory per ant. Moreover, it assumes the ants have a primitive binary feedback function for sensing the current labor allocation, which determines whether a particular task requires more workers. The algorithm also assumes that individual workers do not differ in their ability to perform tasks. The algorithm converges to a near-optimal task allocation with high probability, in time logarithmic in the size of the colony.
Compared to previous work on task allocation, the proposed algorithm differs in the following respects: it uses constant memory per ant, it works only when the number of tasks to allocate is a small constant, the allocation is of proportions of workers (not of individual workers), and all workers are assumed identical. Once workers differ in their ability to perform tasks, the task allocation problem becomes NP-hard. One drawback, pointed out by one of the participants, is that the model does not strictly adhere to real ants’ behaviors.
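
Here is a toy version of such dynamics (my own guess at a concrete rule, not Alex’s algorithm: each ant remembers only its current task, and each round it sees one bit of feedback saying whether that task is currently overloaded):

    import random

    def allocate(num_ants, demand, rounds, rng):
        # demand[t]: target fraction of workers for task t (sums to 1).
        tasks = list(range(len(demand)))
        assign = [rng.choice(tasks) for _ in range(num_ants)]
        for _ in range(rounds):
            counts = [assign.count(t) for t in tasks]
            for i, t in enumerate(assign):
                # Binary feedback: does my task have too many workers?
                if counts[t] > demand[t] * num_ants and rng.random() < 0.5:
                    assign[i] = rng.choice(tasks)
        return [round(assign.count(t) / num_ants, 2) for t in tasks]

    rng = random.Random(0)
    print(allocate(200, [0.5, 0.3, 0.2], rounds=50, rng=rng))
    # -> approximately [0.5, 0.3, 0.2]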


