Machinations


SODA 2014
January 15, 2014, 2:55 pm
Filed under: Uncategorized

The following is a conference report on SODA 2014 by my student Mahnush Movahedi. [Jared]

SODA Conference Notes

This report is for those of you who could not attend SODA 2014 and are eager to know what happened there. There were many exciting talks at SODA, spread over three parallel sessions and ranging over algorithms, complexity, approximation algorithms, discrete optimization, combinatorics, graph theory, and game theory.

Herbert Edelsbrunner was the invited plenary speaker for the first day. He described his journey into the fascinating world of geometric forms in his talk titled “Shape, Homology, Persistence, and Stability”. Assume we are given a set S of points in the plane; the alpha shape is one way to make precise the notion of the shape formed by these points (it is a generalization of the convex hull). The alpha shape generalizes from the 2-dimensional setting to higher dimensions, in particular the 3-dimensional case. In his talk he also discussed homology and persistence. Persistent homology is an algebraic method for measuring topological features of shapes. Stability of persistence diagrams under perturbations of the input was a key point in his talk.
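As a rough illustration of the alpha shape idea, here is a toy sketch (my own simplification, not Edelsbrunner's construction; the function name and the use of scipy are assumptions for illustration) that approximates the 2-dimensional alpha complex by keeping exactly those Delaunay triangles whose circumradius is at most alpha:

    import numpy as np
    from scipy.spatial import Delaunay

    def alpha_complex_triangles(points, alpha):
        # Keep the Delaunay triangles whose circumradius is at most alpha;
        # their union is a rough 2-D stand-in for the alpha shape.
        tri = Delaunay(points)
        kept = []
        for a, b, c in points[tri.simplices]:
            # circumradius R = (|ab| * |bc| * |ca|) / (4 * area)
            la = np.linalg.norm(b - c)
            lb = np.linalg.norm(a - c)
            lc = np.linalg.norm(a - b)
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
            if area > 0 and la * lb * lc / (4.0 * area) <= alpha:
                kept.append((a, b, c))
        return kept

    points = np.random.rand(200, 2)
    triangles = alpha_complex_triangles(points, alpha=0.1)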

Day two at SODA, I was especially intrigued by a session on distributed algorithms and self-organizing particle systems. David Doty gave a very interesting talk on the paper “Timing in chemical reaction networks”. He started the talk by showing a movie in which a white blood cell chases a bacterium by the power of chemical reactions. Chemical Reaction Networks (CRNs) attempt to formally model the behavior of chemical systems. CRNs can also be viewed as an implementable programming language for the design of artificial molecular control circuitry.

It has been shown that CRNs can simulate a bounded-space Turing machine with an arbitrarily small positive probability of error. A+B→C+D is an example of a reaction in the CRN model, where A, B, C, and D are molecular species.  The “program” for a CRN consists of a set of reactions of this type, along with an initial population of molecular species.  The “computation” occurs according to a stochastic process that repeatedly chooses one reaction from the set and applies it to the appropriate chemical species.
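To make the model concrete, here is a minimal stochastic simulation sketch of a CRN (my own toy code, not the paper's model; in particular the propensity calculation is simplified): at each step one applicable reaction is chosen with probability proportional to the product of its reactant counts, and the counts are updated.

    import random

    def simulate_crn(counts, reactions, steps):
        # counts: dict mapping species name -> molecule count
        # reactions: list of (reactants, products) tuples, e.g. (('A','B'), ('C','D'))
        for _ in range(steps):
            # propensity of a reaction: product of its reactant counts
            props = []
            for reactants, _products in reactions:
                p = 1
                for s in reactants:
                    p *= counts[s]
                props.append(p)
            total = sum(props)
            if total == 0:
                break  # no reaction can fire any more
            r = random.uniform(0, total)
            for (reactants, products), p in zip(reactions, props):
                r -= p
                if r <= 0:
                    for s in reactants:
                        counts[s] -= 1
                    for s in products:
                        counts[s] = counts.get(s, 0) + 1
                    break
        return counts

    # The reaction A + B -> C + D from the text, starting from 50 copies each of A and B.
    print(simulate_crn({'A': 50, 'B': 50, 'C': 0, 'D': 0},
                       [(('A', 'B'), ('C', 'D'))], steps=1000))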

Angluin, Aspnes, and Eisenstat, in their paper “Fast computation by population protocols with a leader”, relate CRN reactions to a leader election protocol. Starting from a uniform initial configuration of n copies of L, when two candidate leaders encounter each other, one drops out (by the reaction L+L→L+N). However, electing a leader in this manner incurs an exponential slowdown, and there is no way for the leader to know when it has been elected. These problems motivate the need for a timer CRN. In the search for a timer CRN, David shows that if a CRN respects finite density (at most O(n) additional molecules can be produced from n initial molecules), then starting from any dense initial configuration (all molecular species initially present have initial count Theta(n), where n is the initial molecular count and volume), every producible species is produced in constant time with high probability. This implies that no CRN obeying the stated constraints can function as a timer. I had a chance to have lunch with David and discuss this work further. He is a fun and knowledgeable person who believes that implementing a timer is a core issue behind any population protocol.
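Here is a tiny simulation of that leader election rule (a sketch of the dynamics described above, not the paper's analysis): starting from n candidate leaders, random pairs meet, and whenever two leaders meet one becomes a non-leader.

    import random

    def elect_leader(n):
        # Start with n candidate leaders and count pairwise interactions
        # until only one leader remains (the rule L + L -> L + N).
        agents = ['L'] * n
        leaders = n
        interactions = 0
        while leaders > 1:
            i, j = random.sample(range(n), 2)  # a random pair of agents meets
            interactions += 1
            if agents[i] == 'L' and agents[j] == 'L':
                agents[j] = 'N'                # one of the two drops out
                leaders -= 1
        return interactions

    print(elect_leader(1000))  # typically on the order of n^2 interactions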

Jared Saia talked about work with Valerie King on “Faster Agreement via a Spectral Method for Detecting Malicious Behavior”. To understand the work, let’s start with the definition of Byzantine agreement: 1) bring n processors, each with a private bit, to agreement on a single common bit; and 2) ensure that this bit is the same as the input of at least one good processor. This problem is more difficult in the presence of a strong adversary, who has full information, can actively control the behavior of a constant fraction of the processors, and can control the scheduling of the delivery of messages. Jared presented an algorithm that solves Byzantine agreement for n processors in expected O(n^3) communication time, with expected polynomial computation time per processor and an expected polynomial number of bits of communication. The algorithm is based on repeatedly generating a fair global coin, which is built from coin flips contributed by individual processors. The adversary can corrupt this process by generating biased individual coin flips. Spectral analysis is used in this work to detect such adversarial behavior. Each processor maintains a matrix containing the sums of the coin flips received in each iteration. The top right eigenvector of this matrix is used to detect adversarial processors, since entries with high absolute value in this eigenvector correspond to processors that are trying to bias the global coin. Overall, a very nice paper.
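The following toy sketch illustrates the spectral idea (this is my own simplification using the SVD, not the paper's actual algorithm or parameters; the matrix layout and numbers are assumptions): each row holds the per-epoch sums of coin flips reported by each processor, and processors with large entries in the top right singular vector are flagged.

    import numpy as np

    rng = np.random.default_rng(0)
    epochs, n, flips_per_epoch = 50, 20, 100

    # M[i, j] = sum of the +/-1 coin flips reported by processor j in epoch i.
    # Honest sums concentrate near 0; a cheater pushing the coin does not.
    M = rng.choice([-1, 1], size=(epochs, n, flips_per_epoch)).sum(axis=2).astype(float)
    M[:, 3] = flips_per_epoch  # processor 3 always reports all-heads

    # Entries of the top right singular vector with large absolute value
    # point at processors trying to bias the global coin.
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    suspects = np.argsort(-np.abs(vt[0]))
    print(suspects[0])  # very likely 3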

I would also like to mention another talk that I found interesting: “Intrinsic Universality in Tile Self-assembly Requires Cooperation”, presented by Andrew Winslow, which proves that Winfree’s abstract Tile Assembly Model is not intrinsically universal when restricted to noncooperative (temperature 1) tile binding. The Tile Assembly Model is a simple model whose fundamental components are translatable but un-rotatable square or cube tiles. Tiles’ sides are labeled with glue colors, each with an integer strength. Two tiles placed next to each other interact if the glue colors on their abutting sides match, and a tile binds if the strengths of its matching glues sum to at least a certain (integer) temperature. Temperature 1 binding is where a tile attaches whenever it matches on at least one side; temperature 2 and higher are defined similarly. Interestingly, the Tile Assembly Model is indeed intrinsically universal when temperature 2 binding is used, but it is not intrinsically universal when restricted to temperature 1 binding.
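A minimal sketch of that binding rule (the data layout and function name are my own, for illustration only): a tile can attach at a position if the strengths of its matching glues on occupied neighboring sides sum to at least the temperature.

    def can_attach(tile_glues, neighbor_glues, temperature):
        # tile_glues: side ('N','E','S','W') -> (color, strength) on the new tile
        # neighbor_glues: side -> (color, strength) facing the new tile on occupied sides
        total = 0
        for side, glue in neighbor_glues.items():
            if tile_glues.get(side) == glue:  # glues interact only if they match
                total += glue[1]
        return total >= temperature

    tile = {'N': ('a', 1), 'W': ('b', 1)}
    # Temperature 2: one strength-1 match is not enough ...
    print(can_attach(tile, {'N': ('a', 1)}, 2))                 # False
    # ... but two cooperating strength-1 matches are.
    print(can_attach(tile, {'N': ('a', 1), 'W': ('b', 1)}, 2))  # True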

I also enjoyed the game theory session, which was full of new ideas. “Constrained Signaling in Auction Design” by Shaddin Dughmi, Nicole Immorlica, and Aaron Roth is a well-written paper that focuses on auctions where the auctioneer does not have the capacity to describe to the buyers the exact identity of the good. Instead, the auctioneer describes the good using signals from a constrained set. Dimitris Paparas talked about “The Complexity of Optimal Multidimensional Pricing”, which studies the problem of computing a revenue-optimal pricing. The authors show that the decision version of this problem is NP-complete for distributions of support size 3, as well as for identical distributions. In his talk on “The Complexity of Optimal Mechanism Design”, Christos Tzamos showed that optimal mechanism design is complex, and that a revenue-optimal auction in multi-item settings cannot be found and implemented in a computationally efficient way.

Finally, I would like to mention the Abstract Talk website as another good source for learning more about SODA.

Copyright (C) 2014 Mahnush Movahedi
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that new copies bear this notice and the full citation on the first page. Abstracting with credit is permitted. This work may be published or be a basis for publication in the future. Copyright may be transferred without further notice and this version may no longer be accessible.


Map for SODA
January 13, 2012, 6:45 pm
Filed under: Uncategorized

Below is a map of some restaurants, cafes, shrines, museums, etc. that are around the Westin Miyako Hotel, where SODA will be held this year.  This map was made by Shinobu Taniguchi, a very helpful Japanese friend of mine who lives in Osaka, Japan.

See you in Kyoto!

Map of Kyoto around the Miyako Hotel

 



SODA accepted papers
September 14, 2011, 3:14 am
Filed under: Uncategorized

As has already been pointed out on several blogs, accepted papers for SODA are now up on the web.  I’m on the program committee this year, so I can tell war stories about reviewing 53 papers over several weeks, but that’s another blog post.  For now, I wanted to list some of the papers that I think will be of interest to the distributed computing community:

Gathering despite mischief
Yoann Dieudonné, Andrzej Pelc and David Peleg

Networks Cannot Compute Their Diameter in Sublinear Time
Silvio Frischknecht, Stephan Holzer and Roger Wattenhofer

Information Dissemination via Random Walks in d-Dimensional Space
Henry Lam, Zhenming Liu, Michael Mitzenmacher, Xiaorui Sun and Yajun Wang

Rumor Spreading and Vertex Expansion
George Giakkoupis and Thomas Sauerwald

Towards Robust and Efficient Computation in Dynamic Peer-to-Peer Networks
John Augustine, Gopal Pandurangan, Peter Robinson and Eli Upfal

Lower Bounds for Number-in-Hand Multiparty Communication Complexity, Made Easy
Jeff Phillips, Elad Verbin and Qin Zhang

SINR Diagram with Interference Cancellation
Chen Avin, Asaf Cohen, Yoram Haddad, Erez Kantor, Zvi Lotker, Merav Parter and David Peleg

Ultra-Fast Rumor Spreading in Social Networks
Nikolaos Fountoulakis, Konstantinos Panagiotou and Thomas Sauerwald

A Little Advice Can Be Very Helpful
Arkadev Chattopadhyay, Jeff Edmonds, Faith Ellen and Toniann Pitassi

Physarum Can Compute Shortest Paths
Vincenzo Bonifaci, Kurt Mehlhorn and Girish Varma

Parallelism and Time in Hierarchical Self-Assembly
Ho-Lin Chen and David Doty


Day 2 at SODA
January 19, 2010, 10:54 pm
Filed under: Uncategorized

Editor: Guest Post again by Amitabh Trehan

I am at the business meeting: surprisingly, it’s a lot of fun, maybe due to the free drinks and the humorous presentations by the contenders for SODA 2012, freed, it seems, from the betas, thetas, and hidden constants! Ok then, after multi-round voting by a show of hands and a large-scale game of non-musical chairs (to help get the count correct), Kyoto has narrowly triumphed in its bid for 2012 over Barcelona; the other contenders (in order of votes received) were Prague, Rome, and New York. There were some wild celebrations and tennis-style fist-pumping!

There were many exciting talks today, ranging from algorithmic game theory to distributed computing to graph algorithms, and some indescribable topics like those at the business meeting!

Noam Nisan gave a very interesting talk on Google’s auction for TV ads.  I think I had seen this article mentioned somewhere on the blogosphere but avoided reading it (maybe I thought it was a Google advert :). First, it was indeed a surprise that Google markets television ads, but as it turns out, they have good reason to do so. Google brings a new system for marketing advertisement time that is different from the traditional office-room deals, and which brings nice algorithmic challenges too. For the advertiser, Google offers a web-based interface, precise targeting of content, and excellent feedback. The auctions are run every day (for the next day’s advertisements, I think). Informally, the auction has slots for sale, bidders bid with their values for particular slots and their total budgets, and the auction outputs the set of slots won by each bidder and the price charged for each slot. The problem model is fairly straightforward with some simple constraints. The algorithm tries to optimize revenue, efficiency, and fairness.

The auction is based on a simultaneous ascending auction, and roughly works as follows: each ad slot has an associated price that keeps increasing throughout the auction. Prices start at low reserve prices and rise whenever there is “over-demand” for an ad slot — i.e. a slot that is currently held by one bidder is desired by another. Such small price increases keep going until there is no “over-demand”, at which point the auction closes. The basic step in the auction is the calculation of the “demand” of a bidder at current prices — i.e. which set of slots this bidder would desire to acquire assuming that slots are priced as given. In its basic form, the calculation of this demand is done by a greedy algorithm that chooses slots in order of decreasing bid-to-price ratio. Under certain theoretical assumptions (“gross substitutes”) it is known that this procedure ends in a “Walrasian market equilibrium”, and under even stricter assumptions these prices are incentive compatible — i.e. they give no bidder any strategic reason to under- or over-bid.  In reality, some slots may go unsold, and they are put through the wringer again until Google is sure nobody wants them. At that point Google puts up public service ads on them, such as those asking people to desist from taking drugs. As Noam said, it’s not that Google doesn’t want you to take drugs, it’s just that they have empty slots :). So, a betting hunch I have from this talk is that if you see too many public service ads coming on the television, maybe the slots are going cheap and it’s time to buy them!!!
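Here is a small sketch of that greedy demand calculation (my own toy version; the data layout, budget handling, and tie-breaking are assumptions, not Google's implementation): slots are considered in decreasing bid-to-price ratio and taken while they fit in the budget.

    def greedy_demand(bids, prices, budget):
        # bids: slot -> this bidder's value; prices: slot -> current ascending price
        desired, spent = [], 0
        for slot in sorted(bids, key=lambda s: bids[s] / prices[s], reverse=True):
            if bids[slot] >= prices[slot] and spent + prices[slot] <= budget:
                desired.append(slot)
                spent += prices[slot]
        return desired

    print(greedy_demand({'slot1': 10, 'slot2': 6, 'slot3': 8},
                        {'slot1': 4, 'slot2': 5, 'slot3': 2},
                        budget=6))  # ['slot3', 'slot1'] with these made-up numbers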

I would like to briefly mention a few more talks here. “Monotonicity in Bargaining Networks” by Yossi Azar et al. is about bargaining networks.  In these networks, the nodes are players, and each edge has a certain amount of money on it that can be split among the endpoints of that edge only if the endpoints reach an agreement on how to split it.  However, each node is allowed to split money with at most one neighbor.  There are many real-world situations where such networks arise, e.g. among buyers and sellers, where the edge weight can be thought of as the difference in value of the object being sold, or in dating and marriage, where the edge weight would relate to the benefit of the relationship. A stable solution is related to the fractional min cover and thereby to maximum matching.  The authors show that several solution concepts (i.e. stable outcomes) for this game are monotonic on bipartite graphs, where monotonic means that adding an edge between nodes x and y can only increase the profit of x and y in the bargaining network. Interesting questions remain: What are the mechanisms of bargaining? Are there simple distributed algorithms for these solutions? Lower bounds?
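As a concrete illustration of stability in this setting (a toy sketch under my own simplifying assumptions: unmatched nodes earn 0, and the data layout is invented for this example), an outcome is stable if matched pairs split exactly the money on their edge and no edge offers its two endpoints more than they currently earn together:

    def is_stable(weights, profit, matching):
        # weights: (x, y) -> money on edge; profit: node -> what it earns;
        # matching: list of matched pairs (each node in at most one pair).
        for (x, y), w in weights.items():
            if profit[x] + profit[y] < w:
                return False  # x and y would both rather bargain with each other
        for x, y in matching:
            if abs(profit[x] + profit[y] - weights[(x, y)]) > 1e-9:
                return False  # matched pairs must split exactly what is on their edge
        return True

    weights = {('a', 'b'): 10, ('b', 'c'): 6}
    print(is_stable(weights, {'a': 4, 'b': 6, 'c': 0}, [('a', 'b')]))  # True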

The evening talks on a model of computation for MapReduce and on deterministic algorithms for the Lovász local lemma drew much interest from the audience. I also enjoyed the talks on highway dimension, shortest paths, and provably efficient algorithms, and on max flows in planar graphs.



SODA at Austin has a good fizz.
January 18, 2010, 1:17 pm
Filed under: Uncategorized
Austin downtown at night

Room with a view: Austin night © Amitabh Trehan

Editor’s note: This is another guest post from Amitabh Trehan.

Hello from SODA at Austin. First, a few questions as a warmup exercise (it’s a theory conference!):
– If I blog about the conference (my second conference reporting), should I not get a Press card and full financial support from the organizers? 🙂
– If the proceedings are on disk only, should there be more (or less) electric points in the seminar rooms?
– A European friend asked me, ‘Is a continental breakfast supposed to be just a bowl of fruit and coffee? Which continent are they referring to?’
– How did Austin land up in the middle of Texas?

So, on to some reports now (before my editor cuts my writing for being mathematically imprecise!). Kudos to Austin: on Saturday, I got to watch a wonderful performance by the Austin Symphony Orchestra and violinist Nadja Salerno-Sonnenberg, sitting in the very first row on a $5 student ticket, and today live blues at a downtown bar. The conference itself is big: there are 3 parallel sessions throughout the day from 9 am to almost 7 pm, besides a plenary session and the best paper presentation. Of course, I could attend at most (n-2)/3 + 2 talks, where n was the total number of talks.

The best paper award was given to Asadpour, Goemans, Madry, Oveis Gharan, and Saberi’s paper “An O(log n / log log n)-approximation Algorithm for the Asymmetric Traveling Salesman Problem”. This paper breaks the 28-year-old O(log n) bound set by Frieze et al. for the classic Asymmetric Traveling Salesman Problem (ATSP).  To remind the reader, ATSP is the asymmetric version of the Traveling Salesman Problem (TSP): given a set V of n vertices and a cost function c: V x V -> R+, find the minimum-cost tour visiting each vertex at least once; in ATSP, c(u,v) need not equal c(v,u). The technique they use is similar to Christofides’ algorithm for the symmetric version, which gives a factor 3/2 approximation. At a very high level, the idea is to construct a special spanning tree, find a minimum-cost Eulerian augmentation of this tree, and shortcut the resulting Eulerian walk to get the tour. The spanning tree is special in the sense that it is a thin tree. The algorithm uses the Held-Karp Linear Programming (LP) relaxation, and a thin tree is roughly defined, with respect to the optimum solution x* of the relaxation, as a spanning tree that, for every cut, contains at most a small multiple of the corresponding value of x* across that cut. Shayan Oveis Gharan, presenting the paper, highlighted as important the ideas of thinness and of maximum entropy: explicitly fix the desired combinatorial structure, and then maximize the randomness. Overall, a very nice paper.
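The final shortcutting step is easy to state in code; here is a tiny sketch (only this one step, under the triangle inequality, and not the authors' full algorithm): walk along the Eulerian walk and skip any vertex already visited.

    def shortcut(eulerian_walk):
        # eulerian_walk: closed walk as a list of vertices (first == last).
        # Skipping repeat visits can only shorten the tour when costs obey
        # the (directed) triangle inequality.
        seen, tour = set(), []
        for v in eulerian_walk:
            if v not in seen:
                seen.add(v)
                tour.append(v)
        tour.append(tour[0])  # close the tour
        return tour

    print(shortcut([0, 1, 2, 1, 3, 1, 0]))  # [0, 1, 2, 3, 0]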

The other talk I will briefly discuss is the plenary session talk by Cynthia Dwork on ‘Differential Privacy’. She called it ‘postmodern privacy data analysis’. From what little I know (which is not much) of postmodernism, this seems an apt analogy: the author is supposed to be much less important (or almost anonymous) compared to the work of the author. The aim of differential privacy is not only important but urgent in our time: how to publicly release statistical information about a set of people without compromising the privacy of any individual. Identity theft and the ease with which one can obtain personal information are now alarming. After giving a brief history of the problem and explaining the concepts of adjacent databases, epsilon-differential privacy, linkage attacks, etc., she presented new extensions of their work, using the H1N1 epidemic as an example: continual observation and pan-privacy.  Continual observation asks how one can analyze aggregate user information to monitor, say, regional health conditions, while maintaining differential privacy. Pan-privacy maintains the privacy of the database “inside and out”, completely hiding the pattern or appearance, and protecting against both announced (legal action, subpoenas, etc.) and unannounced intrusions. The pan-private density estimator, as it is called, is based on an idea from the social sciences called randomized response, introduced by Warner in 1965. The idea is to toss a coin while recording a respondent’s answer in a survey and, depending on the outcome of the toss, record either the true answer or a random one, so that any individual response remains deniable. As Cynthia showed in her last slide, there are many animals now in the differential privacy zoo!
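Here is a small sketch of Warner's randomized response idea as described above (parameter choices and function names are mine): each respondent answers truthfully with probability p and otherwise answers with a fresh fair coin, and the true rate can still be estimated from the noisy reports.

    import random

    def randomized_response(truth, p=0.5):
        # With probability p report the truth, otherwise report a fair coin flip,
        # so any single reported answer is deniable.
        if random.random() < p:
            return truth
        return random.random() < 0.5

    def estimate_fraction(reports, p=0.5):
        # E[report] = p * f + (1 - p) * 0.5, where f is the true 'yes' rate,
        # so invert that relation to estimate f.
        observed = sum(reports) / len(reports)
        return (observed - (1 - p) * 0.5) / p

    truths = [random.random() < 0.3 for _ in range(100000)]  # true rate 0.3
    reports = [randomized_response(t) for t in truths]
    print(estimate_fraction(reports))  # close to 0.3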

Besides these, I attended some interesting talks during the day, ranging from property testing to graph theory (minors, etc.) and algorithms about clique-width and kernels. There was also what one participant called ‘the Kawarabayashi session’, which had four papers by Ken-ichi Kawarabayashi, one less than last year (as far as I know).