Machinations


Worm vs Alert (Part 2)
June 22, 2010, 6:19 pm
Filed under: Uncategorized

This is the final post of a two-part series. Guest post by Navin Rustagi.

If a faulty detector node misfires and sends a false alert, that alert will propagate through the whole network, unnecessarily congesting it. In our recent paper we address this issue to some extent by adding a Time To Live (ttl) field to the alert messages and the nodes. The system has a parameter τ, which initializes the ttl field of a detector node when it is alerted for the first time. Thereafter, each alerted node decrements its ttl field by one in each successive time step and sends out its alert messages with that value. A newly alerted node initializes its ttl field with the value carried by the alert message that alerted it, and if it later receives an alert message with a higher ttl value than its own, it updates its ttl to the new value. Once its ttl reaches zero, an alerted node stops propagating alerts but remains alerted throughout. Observe that this ttl mechanism ensures that if τ = O(log log n), the influence of false alerts is limited to at most a polylogarithmic number of nodes.
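To make the ttl bookkeeping concrete, here is a minimal sketch in Python of the update rules just described. The Node class and function names are mine, not from the paper; the sketch only restates the logic above.

```python
class Node:
    def __init__(self, is_detector):
        self.is_detector = is_detector
        self.state = "virgin"      # "virgin", "infected", or "alerted"
        self.ttl = 0

def detector_first_alerted(node, tau):
    # a detector alerted for the first time starts with ttl = tau
    if node.state == "virgin" and node.is_detector:
        node.state, node.ttl = "alerted", tau

def receive_alert(node, message_ttl):
    # a newly alerted node copies the message's ttl; an already-alerted node
    # adopts the incoming ttl only if it is higher than its own
    if node.state != "infected":
        node.state = "alerted"
        node.ttl = max(node.ttl, message_ttl)

def end_of_time_step(node):
    # each alerted node decrements its ttl once per time step; once it hits
    # zero, the node stays alerted but stops sending alert messages
    if node.state == "alerted" and node.ttl > 0:
        node.ttl -= 1
```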

In the paper we submitted to SSS, we show that for worms designed to spread for at most O(log n) time steps, there are conditions on the parameters under which a slight adaptation of the alert strategy from our OPODIS paper ensures that all but o(n) nodes in the network will be alerted w.h.p.

Let p be the lower bound on the expected rate of growth of alerts and q the upper bound on the expected rate of growth of worms. We choose the values of α, β and γ such that p ≥ 2q². For these values of the parameters, we show that w.h.p. all but o(n) nodes are alerted if the worm is constrained to spread in at most O(log n) time steps. In this paper, we assume that the overlay network has degree O(log n) (as opposed to a constant degree in the previous paper), so that we can make use of the resulting high vertex and edge expansion in our analysis. The growth rate of alerts characterized by p only provably holds until O(n/log n) nodes are alerted; after that, the lower bound on the expected rate of growth of alerts, until the number of alerted nodes exceeds n/3, is p₁ = (1 + ε/4).

In our paper in OPODIS, as long as p was greater than q we gained a significant advantage over the worms in terms of the number of nodes alerted. Since an alerted node continues to send alert messages throughout the game, the worm gains no ground by slowing down the infection process. But in this paper, due to the constraints imposed by the ttl mechanism, alerted nodes have to stop propagating once their ttl field expires, so the worms can wait for the ttl to expire before resuming the infection. We show in our analysis that we can avoid worrying about the number of nodes which stop propagating alerts, by alerting enough detector nodes in one round that in τ more rounds they would alert all but o(n) nodes in the network. To do that, we define a small step round to be a round in which the number of virgin (neither infected nor alerted) nodes that receive worm messages is no more than 1/(log_{p₁} n) times the number of infected nodes at the end of the previous round. A round which is not a small step round is called a large step round.
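In code, the small-step test is just the following check (names are mine; the logarithm is taken to the base p₁):

```python
import math

def is_small_step(virgin_hit_by_worms, infected_prev_round, p1, n):
    # small step round: at most infected_prev_round / log_{p1}(n) virgin nodes
    # (neither infected nor alerted) receive worm messages this round
    return virgin_hit_by_worms <= infected_prev_round / math.log(n, p1)
```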

We observe that for a worm to take over the network in O(log n) rounds, there has to be a large step round after n/(log_{p₁} n)^{2r+1} nodes have been infected and before κ₀ n/(log_{p₁} n)^{2r+1} nodes have been infected, where κ₀ is a fixed constant. We make use of the high edge and vertex expansion of the underlying network, coupled with the fact that p ≥ 2q², to show that the detector nodes alerted by the end of this large step round are sufficient to alert all but o(n) nodes in τ more rounds.

A simulation implementing our algorithm on a network of 500 nodes with degree 10 and α = 4, γ = 0.15, β = 1 and τ = 3 is given here.
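The simulation itself is linked above rather than reproduced. The following rough sketch (in Python with networkx) shows the general shape of such a simulation with the same parameters; it is not our actual code, all names are illustrative, and the worm here just picks β uniformly random targets per infected node each round, whereas in the model the worm may follow any adversarial strategy.

```python
import random
import networkx as nx  # used only to build the random regular overlay

def run_game(n=500, d=10, alpha=4, gamma=0.15, beta=1, tau=3, rounds=50):
    overlay = nx.random_regular_graph(d, n)
    is_detector = [random.random() < gamma for _ in range(n)]
    state = ["virgin"] * n            # "virgin", "infected", or "alerted"
    ttl = [0] * n
    state[random.randrange(n)] = "infected"   # the game starts with one infection

    for _ in range(rounds):
        worm_targets, alert_msgs = [], []
        for v in range(n):
            if state[v] == "infected":
                # placeholder worm strategy: beta uniformly random targets
                worm_targets.extend(random.sample(range(n), beta))
            elif state[v] == "alerted" and ttl[v] > 0:
                nbrs = list(overlay[v])
                for u in random.sample(nbrs, min(alpha, len(nbrs))):
                    alert_msgs.append((u, ttl[v]))

        for u in worm_targets:
            if state[u] == "virgin":
                if is_detector[u]:
                    state[u], ttl[u] = "alerted", tau   # first alert: ttl = tau
                else:
                    state[u] = "infected"

        for u, t in alert_msgs:
            if state[u] != "infected":
                state[u] = "alerted"
                ttl[u] = max(ttl[u], t)   # keep the higher ttl

        # alerted nodes count down their ttl each time step
        # (the ordering of sending vs. decrementing is simplified here)
        for v in range(n):
            if state[v] == "alerted" and ttl[v] > 0:
                ttl[v] -= 1

    return state.count("infected"), state.count("alerted")

if __name__ == "__main__":
    print(run_game())
```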



Worm Vs Alert (Part 1)
June 17, 2010, 7:10 pm
Filed under: Uncategorized

This post is the first part of a two-part series, by Jared's student Navin.

Jared and I submitted a paper to SSS 2010. This blog entry will describe the results and some of the ideas used in that paper. The problem is as follows.

Consider the following game between a worm and an alert over a network of n nodes. Initially, no nodes are infected or alerted, and each node in the network is a special detector node independently with small but constant probability γ. A detector node is dedicated to protecting itself against a worst case worm attack and to generating a trace of that attack called a Self-Certifying Alert (SCA). Any node which has an SCA can run that trace in a sandboxed environment to verify the vulnerability and take the necessary measures to protect itself.

The game starts with a single node becoming infected. In every round thereafter, every infected node sends out β worms to other nodes in the population for some constant β; in addition, every alerted node sends out α alerts for some constant α. Nodes in the network change state according to the following three rules: 1) If a worm is received by a node that is not a detector and is not alerted, that node becomes infected; 2) If a worm is received by a node that is a detector, that node becomes alerted; 3) If an alert is received by a node that is not infected, that node becomes alerted.
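These three rules can be written as a short state-transition function. The following sketch uses my own names and is only meant to restate the rules in code:

```python
# States: "virgin" (neither infected nor alerted), "infected", "alerted".

def on_worm(state, is_detector):
    # Rules 1 and 2: a worm infects a non-detector, non-alerted node,
    # but alerts a detector; infected and alerted nodes are unchanged.
    if state == "virgin":
        return "alerted" if is_detector else "infected"
    return state

def on_alert(state):
    # Rule 3: an alert alerts any node that is not already infected.
    return state if state == "infected" else "alerted"
```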

We allow an infected node to send worm messages to any other node in the network but, in contrast, only allow alerts to be sent over a special precomputed overlay network of degree d. This network is a random d-regular graph. We assume that the infected nodes collaborate with each other and know everything except which nodes are detectors and the alerted nodes' random coin flips.

So, is there a strategy the alerted nodes can follow to ensure that all but o(n) nodes in the network will be alerted? Are there properties of the underlying overlay network that are necessary for any successful strategy by the alerted nodes?

Consider the following strategy for propagating alerts: each alerted node chooses α neighbors uniformly at random from its neighbors to send alerts to in the next time step. In our paper titled "Worm Versus Alert: Who Wins in a Battle for Control of a Large-Scale Network?", which appeared in OPODIS 2007, we showed that this strategy for the alerted nodes ensures that only a vanishingly small fraction of the nodes become infected w.h.p., no matter what strategy is used by the infected nodes. In particular, we prove that if d ≥ α and (α / β(1 – γ)) > 2d/c, where c is the expansion constant, then w.h.p. all but o(n) nodes are alerted. We also prove in this paper that there is a worm strategy to take over the network if the underlying network does not have good expansion.
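In code, this propagation strategy is a one-line choice per alerted node per round (a sketch with illustrative names):

```python
import random

def choose_alert_targets(overlay_neighbors, alpha):
    # each alerted node picks alpha of its overlay neighbors uniformly at
    # random (without replacement) to send alerts to in the next time step
    return random.sample(overlay_neighbors, min(alpha, len(overlay_neighbors)))
```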

There is a run of this game for 600 nodes given in the figure.  Time moves from left to right, top to bottom in the figure and the red nodes are infected, green nodes are alerted and blue nodes are neither.

So far this post has described the problem statement and previous contributions. I will describe the most recent contributions in the next blog post.

Navin Rustagi



Kinesthetic Learning for a Distributed Computing Class
June 10, 2010, 3:15 pm
Filed under: Uncategorized

Suresh Venkatasubramanian at geomblog recently blogged about active learning modules for graduate algorithms. I’ve frequently used active learning in undergraduate classes but somehow have found it less effective for graduate classes.  However, just recently, I ran across a nice paper by Sivilotti and Pike that describes kinesthetic learning activities for a distributed algorithms class.  The basic idea is to have each student simulate a processor that is running a distributed algorithm.  Examples they discuss are non-deterministic sorting, parallel garbage collection, a stabilizing token ring, and stabilizing leader election.  I also think the ideas could be applied to more advanced/theoretical distributed algorithms.  A great thing about distributed computing is that it lends itself nicely to this sort of classroom activity.

I have not yet used these ideas in a graduate class.  However, I frequently use active learning in undergrad classes and the students seem to appreciate it.  I’ll probably try out some of the ideas in the next academic year.  I’m teaching a game theory class in the spring where I can use these types of activities – perhaps even for “hard” problems like rational secret sharing, auctions, etc.



June Column of SIGACT News
June 9, 2010, 3:50 pm
Filed under: Uncategorized

is now available at MIT and Technion.

This month’s column, by Zvi Lotker and David Peleg, describes the signal to interference and noise ratio (SINR) model for communication in wireless networks.  The SINR model is in contrast to the popular unit disc graph (UDG) model of communication where two nodes in a wireless network have a communication link if they are within distance one of each other.  The SINR model is much more realistic than UDG. It assumes the energy of a signal fades with the distance to the power of some parameter alpha.  Then, a message is assumed to be received correctly by a listener if the energy of the signal divided by the energy of other interferences is greater than some parameter beta (where one of the interferences is always regular background radiation, which has a fixed energy level).  While the SINR model is more realistic, many (theory) researchers choose to use the UDG model because it is easier to work with algorithmically.
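For concreteness, the reception condition in the SINR model is usually written as follows; this is the standard form of the constraint, and the notation here is mine rather than necessarily the column's:

```latex
% Receiver v successfully decodes a transmission from sender u iff
\[
  \mathrm{SINR}(u,v) \;=\;
  \frac{P_u / d(u,v)^{\alpha}}
       {N + \sum_{w \in S \setminus \{u\}} P_w / d(w,v)^{\alpha}}
  \;\ge\; \beta ,
\]
% where S is the set of nodes transmitting simultaneously, P_x is the
% transmission power of node x, d(\cdot,\cdot) is distance, \alpha is the
% path-loss exponent, N is the fixed background noise, and \beta is the
% reception threshold.
```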

In the column, Lotker and Peleg discuss similarities and differences between the two models, and describe the types of research problems that need to be solved in order to make it easier to design algorithms for the more realistic SINR model.  One of the main problems is the study of SINR “reception diagrams”, a problem that seems to have an interesting computational geometry flavor to it.