Machinations


BDA 2014
October 22, 2014, 10:31 pm
Filed under: Uncategorized | Tags: , ,

[The following report on BDA’14 was written by my student Mahdi Zamani]

[A more polished version of this report is available HERE]

Recently, Mahnush and I attended a workshop on Biological Distributed Algorithms, co-located with DISC 2014. The workshop consisted of 20 talks spread over two days and focused on the relationships between distributed computing and distributed biological systems, and in particular on analyses and case studies that combine the two. The following is a summary of some of the talks we found interesting.

Insect Colonies and Distributed Algorithms; Insect Colony Researcher Viewpoint, Anna Dornhaus, University of Arizona

Anna talked about several open questions in modeling the behavior of different insect colonies. Insect colonies go through many changes over time in response to their changing needs and environment.

Figure 1. BDA 2014

Most changes happen via complex collective behaviors such as task allocation, foraging (finding food), nest building, and load transport. One interesting aspect of insect colonies is that unreliable individual behaviors combine into complex group behaviors that are reliable. Individuals use various methods of communication such as pheromone trails, versatile signals, visual cues, substrate vibration, and the waggle dance. The waggle dance is a sophisticated method of communication among honeybees that indicates the location of a resource by encoding its angle relative to the sun. Biologists are generally interested in computer models that explain how individual behaviors produce group behaviors. In particular, they want to understand how positive feedback (a process A produces more of another process B, which in turn produces more of A, and so on) leads to significant consequences such as symmetry breaking. For example, ants tend to exploit just one food source even when multiple similar sources are available, and larger colonies show more symmetry-breaking behavior. This motivates the following questions: How does the size of a colony affect collective behavior? Why is the workload distribution so uneven in some biological systems?
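To see how positive feedback can break symmetry, here is a toy simulation (my own illustration, not a model from the talk; the superlinear exponent alpha is an assumption, in the spirit of classic trail-choice models):

```python
import random

def forage(trips=5000, sources=2, alpha=2.0):
    """Toy positive-feedback model: each ant picks a food source with
    probability proportional to (pheromone level)^alpha. With alpha > 1
    the reinforcement is superlinear and one source ends up dominating,
    even though both sources are identical -- symmetry breaking."""
    pheromone = [1.0] * sources  # start perfectly symmetric
    for _ in range(trips):
        weights = [p ** alpha for p in pheromone]
        total = sum(weights)
        r, acc = random.random() * total, 0.0
        for i, w in enumerate(weights):
            acc += w
            if r <= acc:
                pheromone[i] += 1.0  # this ant reinforces the chosen trail
                break
    return [p / sum(pheromone) for p in pheromone]

print(forage())  # e.g. [0.98, 0.02]: almost all traffic on one source
```

With alpha > 1, one trail almost always captures nearly all of the traffic, mirroring the single-food-source behavior described above.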

Distributed Algorithms and Biological Systems, Nancy Lynch, MIT

Nancy started by describing similarities between biological and distributed systems. Both systems often have components that perform local communications using message passing and local broadcasting. In bio systems, there are components with simple states that follow simple rules. To model a bio system using a distributed algorithm, the first step is to define the problem, the platform (physical capabilities of the system such as memory), and the strategies (rules).
Nancy then talked about two important distributed problems: leader election and maximal independent sets (MIS). In leader election, there is a ring of processes that can communicate with their neighbors, and the goal is to pick a leader process. If the processes are all identical and their behaviors are deterministic, then solving this problem is impossible due to symmetry (all processes behave identically forever). On the other hand, if the processes are not identical (i.e., each has a unique ID), then finding a leader is possible. Interestingly, in a setting with identical processes that are allowed to make random choices, this problem can be solved using a biased coin: each process announces itself as the leader with probability 1/n, and a round in which exactly one process announces elects that process.
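Here is a minimal sketch of that biased-coin idea, written as a global simulation of the local protocol (the retry-until-exactly-one-announcement rule is my reading of the argument):

```python
import random

def elect_leader(n, rng=random.random):
    """Anonymous synchronous ring: each round, every process announces
    itself with probability 1/n. A round with exactly one announcement
    elects that process; otherwise everyone retries. A round succeeds
    with probability n*(1/n)*(1-1/n)^(n-1) ~ 1/e, so the expected
    number of rounds is a small constant (about e)."""
    round_no = 0
    while True:
        round_no += 1
        candidates = [i for i in range(n) if rng() < 1.0 / n]
        if len(candidates) == 1:
            return candidates[0], round_no

leader, rounds = elect_leader(100)
print(f"process {leader} elected after {rounds} round(s)")
```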
In MIS, the goal is to find a maximal subset of the vertices of a graph such that no two neighbors (i.e., vertices connected directly via an edge) are both in the subset; maximal means no further vertex can be added without violating independence. There is a well-known Las Vegas algorithm for solving this problem: in each of several rounds, each vertex flips a biased coin and informs its neighbors that it is in the MIS if it has not received a similar message from them. Each vertex stops once either it is in the MIS or one of its neighbors is. Nancy finally talked about three ant colony problems that her research group has recently been working on: ant foraging, house hunting, and task allocation.
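A sketch of an MIS algorithm in this spirit, in the style of Luby's algorithm (my reconstruction; the degree-based marking probability and the tie-breaking by ID are assumptions, not details from the talk):

```python
import random

def luby_mis(adj):
    """Las Vegas MIS in the spirit of Luby's algorithm. adj maps each
    vertex to its neighbor set. Each round, every live vertex marks
    itself with probability 1/(2*deg); a marked vertex joins the MIS if
    it beats all marked live neighbors on (degree, id). MIS vertices
    and their neighbors then drop out."""
    live = set(adj)
    mis = set()
    while live:
        marked = {v for v in live
                  if random.random() < 1.0 / (2 * max(1, len(adj[v] & live)))}
        for v in marked:
            deg = len(adj[v] & live)
            # keep the mark only if no marked live neighbor outranks v
            if all((len(adj[u] & live), u) < (deg, v)
                   for u in adj[v] & live & marked):
                mis.add(v)
        removed = set()
        for v in mis & live:
            removed.add(v)
            removed |= adj[v] & live   # neighbors of MIS vertices stop too
        live -= removed
    return mis

graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(luby_mis(graph))  # e.g. {1, 3}: independent and maximal
```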

Modeling Slime Mold Networks, Saket Navlakha, CMU

Saket started his talk by explaining an experiment on slime mold, where the mold's food was placed at locations matching the stations of the Tokyo rail system. They observed that the mold grew into a network remarkably similar to the actual rail network. This is very interesting: the Japanese engineers could, in effect, have asked a slime mold to design the rail system instead of spending many hours on the design.

Saket then continued by describing their model of the slime mold's food-finding behavior. For simplicity, they assume a complete graph at first; then, by computing the flow over the edges (tubes), some of the edges are disconnected. They then measured and compared the cost, efficiency, and fault tolerance of the network generated by their model against the Tokyo rail system: their model is as efficient as the rail system! Brain development behaves similarly: the brain generates a dense neural network at first and then prunes it over time. This is called the synaptic pruning algorithm.

Figure 2. Yellow slime mold growth (courtesy of ScienceDaily)

The human brain starts with a very dense network of neurons, and each edge keeps track of the number of times it is used to route information drawn from some pre-determined distribution. The network is then pruned based on this flow information. Saket finally talked about a similar distributed model for bacterial foraging (E. coli) in complicated terrains.
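A toy version of this use-it-or-lose-it pruning idea (my own illustration, not the paper's algorithm; the skewed traffic distribution and the keep fraction are assumptions):

```python
import random
import itertools

def synaptic_prune(n=30, signals=5000, keep_fraction=0.15):
    """Toy pruning: start from a complete graph, route random
    source->target signals drawn from a skewed (Zipf-like) popularity
    distribution, count how often each edge is used, then keep only
    the most-used edges."""
    popularity = [1.0 / (i + 1) for i in range(n)]            # skewed traffic
    usage = {e: 0 for e in itertools.combinations(range(n), 2)}
    for _ in range(signals):
        s, t = random.choices(range(n), weights=popularity, k=2)
        if s != t:
            usage[(min(s, t), max(s, t))] += 1                # direct edge used
    ranked = sorted(usage, key=usage.get, reverse=True)
    return ranked[:int(len(ranked) * keep_fraction)]          # pruned network

print(synaptic_prune()[:10])  # surviving edges cluster around popular nodes
```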

Collective Load Transport in Ants, Ofer Feinerman, Weizmann Institute

Ever seen a group of ants carrying a large food item together? Ofer talked about collective load retrieval, the process in which a large number of ants cooperate to carry a large food item to the nest. Ofer’s research team tracked a group of ants and their load over distances of about 1000 ant lengths and used image analysis to obtain highly detailed information. They showed that the collective motion is highly cooperative and guided by temporary leaders that are knowledgeable about the correct direction home. Ofer finally presented a theoretical model suggesting that the ant-load system is poised at a critical point between random and ballistic motion, which renders it highly susceptible to a knowledgeable leader. He played a video showing a group of ants carrying their load in the wrong direction; then one ant joined the group as the leader and corrected the direction.

Distributed Information Processing by Insect Societies, Stephen Pratt, Arizona State University

Stephen talked about a collective model of optimal house-hunting in the rock ant Temnothorax albipennis. Each colony of T. albipennis has a single queen and hundreds of scouts (workers). In the process of house-hunting, scouts first discover new nests and assess them according to criteria such as size and darkness. Then, they recruit other ants to the new nest using tandem running, where an informed ant leads a second ant to her destination to get a second opinion about the nest (see Figure 3).

Figure 3. Ants tandem run (courtesy of Stephen Pratt)

When the number of ants in the new nest reaches a threshold, scouts begin rapid transport of the rest of the colony by carrying nest-mates. At each time step, each ant is in one of three states: explore, tandem, and transport. Transitions between these states happen based on the ant’s evaluation of the quality of the nest sites and the population of ants at those sites. Stephen defined optimality in terms of the speed and accuracy of the decision-making process. He asked: does a colony have a greater cognitive capacity than an individual? For the house-hunting process, recent lab experiments show that when the number of nests (choices) increases, colonies perform much better at choosing good nests, while lone ants visit more nests. He then asked: do colonies make more precise discriminations than individuals? To answer this, Stephen’s team ran experiments measuring how well individuals and colonies can distinguish a dark nest from nests of various brightness levels. Interestingly, they observed that colonies choose darker nests with significantly more accuracy than individuals. They also showed that even two ants perform significantly better than one.
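A minimal sketch of that three-state scout model as I understood it (the quorum constant and the quality-proportional acceptance rule are my assumptions, not values from the talk):

```python
import random

QUORUM = 10          # assumed threshold; the real value is site-specific

def step(state, site_quality, site_population):
    """One transition of the (assumed) three-state scout model:
    explore -> tandem once the ant accepts a site, and
    tandem -> transport once a quorum of nest-mates is sensed there."""
    if state == "explore":
        # accept a site with probability increasing in its quality (0..1)
        return "tandem" if random.random() < site_quality else "explore"
    if state == "tandem":
        # slow tandem runs continue until enough ants have gathered
        return "transport" if site_population >= QUORUM else "tandem"
    return "transport"   # rapid transport continues until the move is done

print(step("tandem", site_quality=0.8, site_population=12))  # -> transport
```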

Cells, Termites, and Robot Collectives, Radhika Nagpal, Harvard University

Radhika talked about biological systems from an engineering viewpoint. Collective behaviors often result in self-repairing and self-organizing properties, which are crucial for building robust systems. In bio systems, these properties emerge from the cooperation of vast numbers of unreliable agents.

Figure 4. Termite-inspired robots
(courtesy of Harvard University)

Radhika described a bio-inspired distributed robot system that can perform group tasks such as collective construction, collective transport, and pattern/shape formation. The robots achieve a desired global goal in a completely decentralized fashion by performing local interactions with other robots. In particular, they model a large population of termites for building complex structures (see Figure 4).

Confidence Sharing: An Economic Strategy for Efficient Information Flows in Animal Groups, Amos Korman, CNRS and University of Paris Diderot

Amos started his talk by defining two methods of communication in biological systems: passive (indirect) and active (direct). Passive communication transfers information with no direct intention of signaling, i.e., cues from the behavior of one animal are indirectly perceived by others; for example, animals are known to align their movements with those of their neighbors. In active communication, an animal communicates directly with others by transmitting part of its internal state via, for example, pheromone trails or cell signaling. Amos then argued that a notion of confidence exists among animals: they become more responsive as their certainty drops. For example, crickets increase their speed when they are more confident about their intention. This confidence propagates from one cricket to others via both passive and active communication. By sharing their confidence, agents improve their unreliable individual estimates. Amos described an algorithm in which each agent compresses all the information it has gathered into a single parameter that represents its confidence in its behavior. This gives a very simple and near-optimal algorithm, which continuously updates each agent’s confidence level based on its interactions with other agents. Unfortunately, if agents face bandwidth and computational restrictions, the performance of this algorithm decreases significantly. The algorithm also assumes that any two agents who exchange confidence information have disjoint exchange histories.
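The "single confidence parameter" idea is easy to sketch if one reads confidence as an inverse variance (my interpretation, not necessarily Amos's exact update rule):

```python
def merge(est_a, conf_a, est_b, conf_b):
    """Confidence-weighted fusion of two agents' estimates. Treating
    confidence as inverse variance, the optimal combination of two
    independent estimates is the confidence-weighted average, and the
    confidences add. The disjoint-history assumption is what makes the
    two estimates independent, so nothing is double-counted."""
    conf = conf_a + conf_b
    est = (conf_a * est_a + conf_b * est_b) / conf
    return est, conf

# A confident agent barely moves; an unsure one adopts its partner's view.
print(merge(10.0, 9.0, 20.0, 1.0))   # -> (11.0, 10.0)
```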

Task Allocation in Ant Colonies, Alex Cornejo, Harvard University

Alex presented a general mathematical model for task allocation in ant colonies that can be used to achieve an optimal division of labor. Based on this model, Alex described an efficient distributed randomized algorithm for task allocation that imposes minimal assumptions on the capabilities of individual ants. The algorithm requires only a constant amount of memory and assumes the ants have a primitive binary feedback function for sensing the current labor allocation, i.e., for determining whether a particular task requires more workers. It also assumes that individual workers do not differ in their ability to perform tasks. The algorithm converges to a near-optimal task allocation with high probability, in time logarithmic in the size of the colony.
Compared to previous work on task allocation, the proposed algorithm differs in the following respects: it uses constant memory per ant, works only when the number of tasks to allocate is a small constant, allocates proportions of workers (rather than individual workers), and assumes all workers are identical. Once workers differ in their ability to perform tasks, the task allocation problem becomes NP-hard. One drawback pointed out by a participant is that the model does not strictly adhere to real ants’ behavior.
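A rough sketch of a binary-feedback reallocation loop (my own illustration, not the paper's algorithm; the jump-to-a-random-needy-task rule is an assumption):

```python
import random

def reallocate(assignment, demand, tasks):
    """One round of a binary-feedback reallocation sketch: each ant
    queries the feedback bit of its current task; if the task is
    oversubscribed ('fewer'), it jumps to a random task whose feedback
    still says 'more'."""
    counts = {t: 0 for t in tasks}
    for t in assignment.values():
        counts[t] += 1
    needy = [t for t in tasks if counts[t] < demand[t]]      # feedback: 'more'
    for ant, t in list(assignment.items()):
        if counts[t] > demand[t] and needy:                  # feedback: 'fewer'
            new = random.choice(needy)
            counts[t] -= 1
            counts[new] += 1
            assignment[ant] = new
            needy = [t for t in tasks if counts[t] < demand[t]]
    return assignment

ants = {f"ant{i}": "forage" for i in range(10)}
demand = {"forage": 4, "build": 3, "brood": 3}
print(reallocate(ants, demand, list(demand)))   # workers spread across tasks
```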


SSS’14 Conference
October 8, 2014, 5:53 pm
Filed under: Uncategorized | Tags: ,

[The following blog report on SSS’14 was written by my student George Saad]


The 16th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2014) was held in Paderborn, Germany, from September 28 to October 1, 2014. The conference covers the design and development of distributed systems with fault tolerance and self-* properties.

SSS 2014 had seven sessions: Self-Stabilization I/II/III; Dependable Systems; Formal Methods, Safety, and Security; Ad-hoc, Sensor and Mobile Networks and Cyberphysical Systems; and Cloud Computing, P2P, Self-organizing and Autonomous Systems.


On the first day, a tutorial day on self-organizing physical systems, Marco Dorigo, a professor at Université Libre de Bruxelles and the University of Paderborn, gave the fourth talk, on self-organizing swarms. Dorigo showed how ants can find the shortest path between a pair of nodes in a network using artificial pheromones: each ant chooses one path from a set of paths stochastically, with probability depending on the amount of pheromone deposited by previous ants on each path. Note that other strategies, such as separation, alignment, and cohesion, have also been considered for this problem. Interestingly, finding shortest paths this way can be used to compute arg min f(x).
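A minimal ant-colony sketch of this path choice (my own illustration; the evaporation rate and the deposit rule, inversely proportional to path length, are assumptions):

```python
import random

def aco_shortest(paths, ants=500, evaporation=0.05, q=1.0):
    """Toy ant-colony optimization over candidate paths between one
    source-destination pair. Each ant picks a path with probability
    proportional to its pheromone; shorter paths receive more pheromone
    per trip (q / length), so they get reinforced faster."""
    tau = [1.0] * len(paths)                          # pheromone per path
    for _ in range(ants):
        total = sum(tau)
        r, acc, pick = random.random() * total, 0.0, 0
        for i, t in enumerate(tau):
            acc += t
            if r <= acc:
                pick = i
                break
        tau = [t * (1 - evaporation) for t in tau]    # pheromone evaporates
        tau[pick] += q / paths[pick]                  # shorter path, bigger deposit
    return paths[tau.index(max(tau))]

print(aco_shortest([7.0, 3.0, 5.0]))   # converges on the length-3 path
```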

However, artificial pheromones are not practical in many situations, so goal search and path formation have also been studied in their absence. In particular, robots position themselves as waypoints to form a path to the goal, which all other robots then follow.

Homogeneous robots can cooperate to perform tasks that no individual robot can do alone, such as crossing large holes and climbing steep hills. Adaptive rotation is one strategy that enables a group of robots to climb steep hills. The robots also flash lights to synchronize so that they can cooperate properly.

Heterogeneous robots were also considered; they are heterogeneous in the sense that they have different capabilities, and they work together to combine those capabilities and perform harder tasks. For instance, consider three types of robots: eye-robots, arm-robots, and foot-robots. In a popular scenario, they cooperate to find and fetch a book. First, an eye-robot flies around seeking the book. Once the book is located, a set of foot-robots is notified and moves to that location carrying an arm-robot. The arm-robot then grabs the book, and all the robots return to the starting point.


In the Self-Stabilization I session, Volker Turau, a professor at Hamburg University of Technology, presented his paper, “A Self-stabilizing Algorithm for the Edge Monitoring Problem”.

In wireless sensor networks, nodes sense the environment and transmit (or forward) data through the network. In the presence of an adversary, a set of compromised nodes may disrupt the network by corrupting or dropping messages. Thus, a set of nodes is chosen to monitor all communication on edges using k-hop knowledge: a monitoring node x can monitor the communication on any edge whose endpoints are reachable within at most k hops from x. The challenging task is to find a minimum set of monitoring nodes in a network. This problem is NP-complete even for 1-hop knowledge.

Two distributed approximation algorithms were known from previous work, both for the synchronous model with no transient faults. In this paper, the authors provide a self-stabilizing algorithm that finds a minimum set of monitoring nodes in the asynchronous model in the presence of transient faults. In this algorithm, each node has a state which determines whether it is in or out of the monitoring set, and each node maintains a set of monitoring nodes for each of its adjacent edges. In each step of the algorithm, the state of a node changes only after it has the permission of all neighboring nodes.


We presented our paper, Self-Healing Computation, in the Self-Stabilization II session. Our contribution is a self-healing algorithm for computation networks which 1) detects computation corruptions made by Byzantine nodes; and 2) segregates such nodes, so that eventually no more corruptions occur.

We show that our self-healing algorithm asymptotically reduces the message cost compared to non-self-healing algorithms. Moreover, our experimental results show that the message cost is reduced by a factor of 425 compared to the naïve computation for a network of size 8k.

The paper also contains an interesting result: informally, given any tree of size n in which each node survives independently with constant probability, the probability that there is a subtree of surviving nodes of size Ω(log n) is at most ½.


In the Self-Stabilization III session, Toshimitsu Masuzawa, a professor at Osaka University, presented the paper “Edge Coloring Despite Transient and Permanent Faults”. The authors provide a self-stabilizing algorithm that colors the edges of an arbitrary graph, in the presence of a Byzantine adversary, so that no node has two incident edges of the same color. The basic idea is that coloring proceeds in steps, where in each step one node x proposes colors for its adjacent edges. After these colors are set, if a neighboring node y has two incident edges of the same color, then y proposes a different color for the edge (x, y). However, such color proposals may never terminate in the presence of a Byzantine node. To overcome this problem, the authors use a rotating priority procedure, where each node has a priority for proposing a color in case of conflict, and these priorities change in round-robin fashion. Unfortunately, this algorithm does not color the graph with the minimum number of colors. My question is: will their algorithm color the graph properly and terminate when a good node is surrounded by two Byzantine nodes in a ring topology?

Also, Hung Tran-The, a graduate student at the Universidade de Lisboa, presented his paper, “Tight Bounds for Stabilizing Uniform Consensus in Mobile Networks”. He provided a self-stabilizing algorithm in which the nodes reach agreement in the presence of crashes, send-omissions, or general omissions. However, the authors claim that there is no self-stabilizing algorithm in the presence of a Byzantine adversary. This is arguable given the existence of Byzantine agreement protocols. My question is: can we not implement Byzantine agreement as a self-stabilizing algorithm?


Besides the scientific content, I have a few comments about the city of Paderborn. Paderborn is a beautiful and quiet city in Germany. The mayor of Paderborn invited us to the Paderborn town hall, where he gave us a presentation showing how beautiful the city is.


He told us that the “Pader” in “Paderborn” means water spring, as Paderborn has many springs.



We also visited a wonderful palace, Schloss Corvey, which is 57 km away from Paderborn.



Afterwards, we spent a good time at a restaurant near the palace.




FOMC 2014
August 23, 2014, 3:03 am
Filed under: Uncategorized | Tags: , ,

Foundations of Mobile Computing (FOMC) was one of the most fun workshops I’ve organized. Many thanks to my co-organizer Maxwell Young.

Slides from talks are available at the link above.




Workshop on Modern Massive Data Sets (MMDS)
July 16, 2014, 9:20 pm
Filed under: Uncategorized | Tags: , ,

[The following blog report on MMDS’14 was written by my student Mahdi Zamani – Ed]

Recently, I attended the MMDS workshop hosted by UC Berkeley. The workshop consisted of 40 talks spread over four days, plus a poster session. It focused on algorithmic and statistical problems in large-scale data analysis and brought together researchers from several areas, including computer science, mathematics, statistics, and Big Data practice. On the third day of the workshop, I presented a poster about our recent work on multi-party sorting of large data sets. The following is a brief summary of a few talks relevant to my current research projects. The workshop program is available here.

Dan Suciu from the University of Washington talked about running queries on very large databases. He argued that traditionally database queries were measured in terms of disk I/O, but in Big Data query processing, since the database is stored on distributed clusters, communication is the bottleneck. The problem is now to run a query on p servers in the Massively Parallel Communication (MPC) model, where the goal is to minimize the load on each server. In this model, the database records are partitioned among the p servers, each getting M/p records, where M is the total number of records. The database operations (like SQL joins) are then performed on the partitions. As an example, Dan mentioned a popular operation in Big Data analysis called the triangle query, defined over three relations: Q(x,y,z) = R(x,y) join S(y,z) join T(z,x). Using a naive approach, this query can be computed in two communication rounds, doing two separate joins. But, surprisingly, it can be computed in a single communication round with reasonable load on each server using a simple technique described by Dan in [3].
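The flavor of that one-round technique can be sketched as follows (my reconstruction of a HyperCube-style partitioning, not Dan's exact scheme; the hash function and the cube side c are assumptions):

```python
import itertools

def hypercube_triangles(R, S, T, c=4):
    """One-round triangle query Q(x,y,z) = R(x,y) join S(y,z) join T(z,x),
    HyperCube style: p = c^3 (virtual) servers indexed by (h(x), h(y), h(z)).
    Each R-tuple is replicated along the z axis, each S-tuple along x,
    each T-tuple along y; every server then joins its local fragments.
    Expected load is O(M / p^(2/3)) tuples per relation per server."""
    h = lambda v: hash(v) % c
    servers = {idx: ([], [], []) for idx in itertools.product(range(c), repeat=3)}
    for x, y in R:
        for k in range(c):
            servers[(h(x), h(y), k)][0].append((x, y))
    for y, z in S:
        for i in range(c):
            servers[(i, h(y), h(z))][1].append((y, z))
    for z, x in T:
        for j in range(c):
            servers[(h(x), j, h(z))][2].append((z, x))
    out = set()
    for r, s, t in servers.values():          # local join on each server
        ts = set(t)
        for x, y in r:
            for y2, z in s:
                if y2 == y and (z, x) in ts:
                    out.add((x, y, z))
    return out

R = [(1, 2), (2, 3)]; S = [(2, 3), (3, 1)]; T = [(3, 1), (1, 2)]
print(hypercube_triangles(R, S, T))  # the triangle {1,2,3}, once per rotation in the data
```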

Stanley Hall Auditorium

Bill Howe from the University of Washington talked about Myria, a web-based framework written in Java for Big Data query processing. He is motivated by the fact that we, as scientists, spend more than 90% of our time handling data for our experiments rather than designing new techniques and algorithms (I think by scientists he means practitioners here; I believe theoreticians spend much less time working with data :). Bill explained that although current Hadoop optimizations are more than 100 times faster than Hadoop itself, we still need faster and simpler frameworks that can be used by non-expert scientists. The main idea behind Myria is to blur the distinction between relational algebra and linear algebra: every query in the relational domain is translated to simple linear-algebraic operations that can be computed very fast. Myria provides an imperative-declarative language that runs queries in parallel.

Yiannis Koutis from the University of Puerto Rico talked about spectral algorithms for mining large graphs. Spectral analysis is an important tool for graph partitioning, where the set of vertices of a large graph is divided into smaller components such that the number of edges running between separated components is small. This is often an important subproblem for complexity reduction or parallelization. A cut that minimizes the number of such edges is called the sparsest cut; such a cut bisects the graph into its most important components. Given the adjacency matrix A of a graph G and the degree matrix D of A, the Laplacian matrix of G is L = D - A. The general idea behind spectral graph partitioning is that the eigenvector associated with the second-smallest eigenvalue of L yields a good approximation of the sparsest cut of G (by splitting the vertices according to the signs of their entries in this eigenvector). Since finding the exact eigenvalues of the Laplacian is often computationally intensive, Yiannis argued that one can find a good (small) sparse cut by approximating the eigenvalues. However, it remains open whether one can find better cuts using other techniques with similar computational costs. At the end of the talk, I asked Yiannis whether probabilistic methods (such as random walks) are slower than spectral methods. He answered that probabilistic methods are usually local, and global random walks are often much slower than global spectral analysis.
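A dense-matrix sketch of spectral bisection (illustration only; Yiannis's point is precisely that at large scale you would approximate this eigenvector rather than compute it exactly):

```python
import numpy as np

def fiedler_partition(A):
    """Spectral bisection: split a graph by the signs of the entries of
    the eigenvector for the second-smallest eigenvalue of L = D - A."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    vals, vecs = np.linalg.eigh(L)   # eigh: L is symmetric, eigenvalues ascending
    fiedler = vecs[:, 1]             # eigenvector of the second-smallest eigenvalue
    return fiedler >= 0              # boolean side assignment per vertex

# Two triangles joined by a single edge: the cut should separate them.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(fiedler_partition(A))  # e.g. [False False False  True  True  True]
```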

David Woodruff from IBM Almaden Research talked about an optimal CUR matrix decomposition technique. Given a matrix A, the goal is to find three matrices C, U, and R such that A ≈ CUR, where C consists of actual columns of A and R of actual rows. A solution to this problem can be used in many applications such as recommendation systems. For example, Netflix is interested in fast techniques for decomposing its huge users-by-movies matrix into a matrix of the most important users and a matrix of the most important movies. Since the exact decomposition is often very computationally expensive, an approximation is calculated instead. While there are fast randomized algorithms for this problem (e.g., [2]), David presented an asymptotically optimal deterministic algorithm, published recently in STOC 2014 [1].
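A minimal randomized CUR sketch (uniform sampling for simplicity; the actual guarantees in [1, 2] rely on careful column/row sampling, so treat this as illustration only):

```python
import numpy as np

def cur(A, k, seed=0):
    """Randomized CUR sketch: sample k columns into C and k rows into R
    (uniformly here; leverage-score sampling gives real guarantees),
    then set U = C^+ A R^+ so that C @ U @ R approximates A."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    cols = rng.choice(n, size=k, replace=False)
    rows = rng.choice(m, size=k, replace=False)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

A = np.outer(np.arange(1, 9), np.arange(1, 7)).astype(float)  # a rank-1 matrix
C, U, R = cur(A, k=2)
print(np.allclose(C @ U @ R, A))  # True: the rank-1 A is recovered exactly
```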

Xavier Amatriain from Netflix talked about his company’s current Big Data challenges. Until 2007, the company had over 100 million movie ratings; it now holds about 5 billion ratings. The company currently has 44 million users around the world and holds more than 5 billion hours of movies. Users play about 50 million movies and submit 5 million ratings per day, contributing about 32% of daily US downstream traffic. From this huge pile of data, Netflix needs to extract ranking and similarity information about the movies, and its main approach is to employ distributed machine learning algorithms. Xavier argued that in today’s world more data sometimes beats better algorithms, backing up his claim with a quote from Peter Norvig, Director of Research at Google: “Google does not have better algorithms, only more data” (see this). Xavier continued by arguing that training algorithms can run in parallel because training data are often independent of each other. He also described how they have scaled their algorithms by distributing work at three different levels: across regions or subsets of the population, at the hyperparameter optimization stage, and at the model training level.

[A polished version of this report is available here – Ed]

References

[1] Christos Boutsidis, David P. Woodruff: “Optimal CUR Matrix Decompositions”, Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pp. 353–362, 2014. URL: http://doi.acm.org/10.1145/2591796.2591819.

[2] Michael Mahoney, Petros Drineas: “CUR Matrix Decompositions for Improved Data Analysis”, Proceedings of the National Academy of Sciences, pp. 697–702, 2009.



Game Theory and Evolution
June 24, 2014, 8:14 pm
Filed under: Uncategorized | Tags: , , ,

The Game Theory of Life is a nice writeup of a recent PNAS paper, Algorithms, Games and Evolution, by Chastain, Livnat, Papadimitriou and Vazirani. As a CS researcher with an amateur’s interest in biology, I really enjoyed this paper.

The paper shows a certain equivalence between two algorithms. The first algorithm I’ll call “evolution”; it is an approximation (using replicator dynamics) of what happens in real Evolution in nature. The second is the multiplicative weights update algorithm (MWUA), which we all know and love in CS. MWUA is also known as weighted majority or winnow to many people in CS theory.
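For concreteness, here is a minimal MWUA sketch (my own illustration; the learning rate eta and the exponential-update variant are my parametrization, not taken from the paper):

```python
import math

def mwua(payoffs, eta=0.1):
    """Multiplicative weights over a set of 'experts' (read: alleles or
    strategies). Each round, every weight is scaled by exp(eta * payoff)
    and the weights are renormalized -- the same multiplicative update
    the paper matches to replicator dynamics."""
    w = [1.0] * len(payoffs[0])
    for round_payoffs in payoffs:
        w = [wi * math.exp(eta * g) for wi, g in zip(w, round_payoffs)]
        total = sum(w)
        w = [wi / total for wi in w]   # a distribution, like allele frequencies
    return w

# Three strategies; the second earns slightly more every round.
rounds = [[1.0, 1.2, 0.9]] * 50
print(mwua(rounds))  # most weight on strategy 2, but entropy decays only slowly
```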

The paper proves that the outputs of “evolution” are equivalent to those of MWUA. They then show this implies that “evolution” is equivalent to a process that tries to maximize utility *plus* entropy (Equation 3 in the paper). The utility part is not surprising, since we’d certainly expect Evolution to be concerned with it. The “plus entropy” part is surprising, and perhaps explains the surprising diversity of life. Essentially, sexual replication in the “evolution” algorithm is what gives rise to this additional entropy term.

I think it’s a pretty interesting paper. Intuitively, it proves that a simple kind of genetic algorithm (i.e., “evolution”) gives results equivalent to MWUA (see the equivalence of Figures 1d and 2d in the paper). A nice result from the CS perspective. However, there are a few “rah rah computer science” comments in the paper that may turn off some biologists.

I think the key thing that would really impress biologists would be making the “evolution” algorithm more realistic. Doing so would require improving Nagylaki’s theorem, which the paper uses to analyze the “evolution” algorithm. The authors claim Nagylaki’s theorem can already handle diploidy and partial recombination; mutation is mentioned as an area for future work, and modeling the interaction of organisms seems much harder. It’ll be interesting to see the follow-up work on this paper.




Secure Multiparty Computation Workshop
June 22, 2014, 10:48 pm
Filed under: Uncategorized | Tags: , ,

Last month I went to a workshop on secure multiparty computation organized by IARPA. I’ve had mixed luck with workshops organized by funding agencies, but this one was pretty good, thanks to the timeliness of the topic, well-prepared talks, and participants who included some of the big names in security and MPC.

This was also one of the first times I’ve visited and given a talk at MIT, which was a lot of fun. Many of the talk slides are included in the workshop link above.




Latest SIGACT Distributed Computing Column
May 29, 2014, 3:04 pm
Filed under: Uncategorized | Tags: ,

The June 2014 edition of the Distributed Computing Column in SIGACT News is devoted to a review article on Transactional Memory by Gokarna Sharma and Costas Busch. An archive of the column is available online at:

https://parasol.tamu.edu/~welch/sigactNews/

Also, if you, like me, are having difficulty obtaining the full version of the March 2014 column, which reviewed a Dagstuhl seminar on Consistency in Distributed Systems by Bettina Kemme, G. Ramalingam, Andre Schiper, and Marc Shapiro, you can get it at the archive.

Enjoy!

– Jennifer Welch



