Machinations


Crowd Sourcing
March 25, 2011, 8:38 pm

Those who know me know that I’m a sucker for crowdsourcing.  I think crowdsourcing is one of the more interesting new trends in computer science in recent years, and I can see it opening doors into whole new worlds of research in security, machine learning, algorithms, scientific computing (e.g. protein folding), etc.

There’s a new blog post by Panos Ipeirotis on this topic titled Crowdsourcing goes professional: The rise of the verticals.  The post describes several new companies that are building businesses on top of old crowdsourcing systems like Mechanical Turk.  These companies offer services like audio transcription, translation, proof-reading, and OCR.

The list of applications is interesting and the trend is exciting, but I’m still wondering what the killer app(s) will be.  Somehow it doesn’t seem very imaginative to use crowdsourcing for the same tasks that computers can already perform (albeit less effectively).  On a more whimsical note, maybe this project is moving in the right direction.



Postdoc Opportunities
March 11, 2011, 3:10 pm

A quick post from Gopal Pandurangan, who has two postdoc positions at NTU in Singapore:

I just recently got a grant from Singapore govt. for research in distributed algorithms. I am looking for postdocs (2 slots are there).  May I please request you to publicize in your blog the contents of the following link: http://www.ntu.edu.sg/home/gopal/postdoc.html [quoted below]

Multiple postdoctoral positions (official title: “Research Fellows”) are available, starting April 2011, in the Division of Mathematical Sciences, Nanyang Technological University, Singapore. All recent Ph.D.s (or those close to finishing) who work in theoretical computer science with a focus on one or more of the following areas are encouraged to apply: randomized/probabilistic algorithms, distributed algorithms/computing, approximation algorithms, network algorithms, communication complexity, and other related areas in algorithms. Prior work in the algorithms area, as evidenced by publications in top algorithms and theory conferences, is desired. All candidates with a strong publication record in theory and algorithms (e.g., SODA, FOCS, STOC, PODC, SPAA, DISC) are encouraged to apply.

Initial appointment will be for one year with the possibility of renewal for an additional year based on performance and availability of funds. Salaries are competitive and are determined according to the successful applicants’ accomplishments, experience and qualifications. Travel funding is available.

The Mathematical Sciences Division at NTU has a vibrant research group in theoretical computer science, with several faculty, post-doctoral research fellows, and Ph.D. students from all over the world.

The candidates should send their CV, a brief research statement, and two letters of recommendation to Prof. Gopal Pandurangan (gopalpandurangan@gmail.com).

Singapore is a highly developed country and is well situated and connected. The quality of living is very high (with per capita income among the top 5 in the world), while the cost of living compared to developed countries is relatively low.

 



PODC 2011 Accepted papers
March 9, 2011, 6:54 pm

Accepted papers for PODC 2011 are now up on the web here.  There are many interesting papers – here are a few that immediately catch my eye:

  • “Error-Free Multi-Valued Consensus with Byzantine Failures” by Guanfeng Liang and Nitin Vaidya.  This paper gives a deterministic algorithm for solving multi-valued consensus with Byzantine faults that has asymptotically optimal bandwidth when the number of bits in the input values is large.  The cool thing about the paper is that it uses network coding and an interesting amortized analysis.  Essentially, there are multiple runs of a single-bit consensus algorithm.  The single-bit consensus algorithm has lightweight checks and is 1) very efficient when the checks detect no malicious behavior; and 2) able to “kick out” a Byzantine processor whenever the checks do detect malicious behavior.  (I sketch this amortization pattern in code after the list.)
  • “Xheal: Localized Self-Healing Using Expanders” by Amitabh Trehan and Gopal Pandurangan.  Amitabh, my former PhD student and a periodic guest blogger on this blog, is now a postdoc at the Technion working with Shay Kutten and Ron Lavi.  This paper is still very secret, as Amitabh won’t even send me a copy of the submitted version.  However, we can guess that the paper describes an algorithm that ensures an overlay network 1) keeps paths between nodes short; and 2) keeps degrees of nodes small.  That word “localized” is very tantalizing, as it suggests that, unlike previous work, this new self-healing algorithm requires only local communication.
  • “Adaptively Secure Broadcast, Revisited” by Juan Garay, Jonathan Katz, Ranjit Kumaresan and Hong-Sheng Zhou.  Again, this paper doesn’t seem to be up on the web, and I didn’t get to read it as a member of the PC.  However, the problem itself, described in the paper Adaptively Secure Broadcast by Martin Hirt and Vassilis Zikas, looks interesting: basically, the goal is secure broadcast in a scenario where an adaptive adversary can corrupt up to a 1/3 fraction of the players, including the sender.  The original paper by Hirt and Zikas shows that past results on the secure broadcast problem are not secure under composition, and it then presents a new protocol that is secure under composition, along with a new simulation-based security notion for the problem.
  • “Fault-Tolerant Spanners: Better and Simpler” by Michael Dinitz and Robert Krauthgamer.  This paper shows how to create spanners of general graphs that are tolerant to vertex failures.  They give a transformation for k >= 3 that converts every k-spanner construction with at most f(n) edges into a k-spanner construction that can tolerate r vertex faults and uses O(r^3 log n) * f(2n/r) edges.  For the case k = 2, they improve the approximation ratio from O(r log n) to O(log n).  Bonus: this last result uses an LP relaxation and the Lovász Local Lemma.  (A sketch of the generic sampling approach to fault tolerance appears after this list.)
  • There are several papers that did not make it in as regular papers but that I hope to see as brief announcements.  Privacy concerns keep me from mentioning anything in particular here, but there were some nice lower-bound results that were invited to be resubmitted as brief announcements.  I hope the authors will accept.
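
Since the amortization trick in the Liang and Vaidya paper is described only at a high level above, here is a toy Python sketch of how I imagine the driver loop might look.  To be clear, this is just my guess at the structure, not the actual protocol: Outcome, cheap_bit_consensus, and the toy misbehaving processor below are all hypothetical stand-ins.

```python
from collections import namedtuple

# Outcome of one run of the (hypothetical) lightweight single-bit protocol:
# either the checks passed and we have an agreed bit, or the checks
# implicated a processor that can safely be excluded.
Outcome = namedtuple("Outcome", ["ok", "value", "culprit"])

def multi_valued_consensus(bits, processors, cheap_bit_consensus):
    """Agree on a long bit string by running a cheap single-bit protocol per
    bit.  Whenever the lightweight checks fail, exclude one implicated
    processor and retry the same bit.  With fewer than n/3 Byzantine
    processors, the expensive branch fires at most O(n) times in total, so
    its cost is amortized away on long inputs."""
    active = set(processors)
    decided, i = [], 0
    while i < len(bits):
        out = cheap_bit_consensus(bits[i], active)
        if out.ok:
            decided.append(out.value)    # cheap path: checks passed
            i += 1
        else:
            active.discard(out.culprit)  # pay once to remove a cheater, retry
    return decided

# Toy stand-in for the cheap protocol: processor "p2" misbehaves exactly once.
def toy_protocol(bit, active, _state={"caught": False}):
    if "p2" in active and not _state["caught"]:
        _state["caught"] = True
        return Outcome(False, None, "p2")
    return Outcome(True, bit, None)

print(multi_valued_consensus([1, 0, 1, 1], {"p0", "p1", "p2", "p3"}, toy_protocol))
```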
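
For the fault-tolerant spanner result, here is a minimal Python sketch of the generic “sample random vertex subsets, build a k-spanner on each, take the union” approach to vertex fault tolerance.  I haven’t read the paper, so this is only meant to convey the flavor of such a transformation; the sampling probability and repetition count are chosen to roughly match the O(r^3 log n) * f(2n/r) bound quoted above, and the greedy spanner merely stands in for an arbitrary base construction.

```python
import itertools
import random
from collections import defaultdict

def bfs_dist(adj, s, t, limit):
    """Distance from s to t in the graph `adj`, or infinity if it exceeds `limit`."""
    if s == t:
        return 0
    frontier, seen, d = {s}, {s}, 0
    while frontier and d < limit:
        d += 1
        nxt = set()
        for x in frontier:
            for y in adj[x]:
                if y == t:
                    return d
                if y not in seen:
                    seen.add(y)
                    nxt.add(y)
        frontier = nxt
    return float("inf")

def greedy_spanner(edges, k):
    """Standard greedy k-spanner of an unweighted graph: keep an edge only if
    its endpoints are currently more than k apart in the spanner."""
    adj, spanner = defaultdict(set), []
    for (u, v) in edges:
        if bfs_dist(adj, u, v, limit=k) > k:
            spanner.append((u, v))
            adj[u].add(v)
            adj[v].add(u)
    return spanner

def fault_tolerant_spanner(vertices, edges, k, r, reps):
    """Union of k-spanners built on random vertex subsets.  Keeping each vertex
    with probability about 2/(r+2) makes the expected subset size about 2n/r,
    and with reps = Theta(r^3 log n) repetitions, every pair u, v is kept while
    every fault set of size <= r is excluded in some repetition, w.h.p."""
    p = 2.0 / (r + 2)
    result = set()
    for _ in range(reps):
        keep = {v for v in vertices if random.random() < p}
        sub_edges = [(u, v) for (u, v) in edges if u in keep and v in keep]
        result.update(greedy_spanner(sub_edges, k))
    return result

# Tiny demo on a random graph.
n, k, r = 30, 3, 2
vertices = list(range(n))
edges = [e for e in itertools.combinations(vertices, 2) if random.random() < 0.3]
ft = fault_tolerant_spanner(vertices, edges, k, r, reps=8 * r ** 3)
print(len(edges), "edges in G ->", len(ft), "edges in the fault-tolerant spanner")
```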

Question: Are there no game theory papers this year except the two that we submitted?  This seems surprising.



PODC 2011
March 7, 2011, 5:39 pm

Decisions for PODC 2011 (in San Jose) have just been mailed out this morning. In a later post, I’ll talk about some of the interesting accepted papers this year – there are many.  For now I wanted to mention two of my own papers that were accepted and are now up on my web page.

Conflict on a Communication Channel.  This is a paper that I had previously blogged about here.  I’m still very excited about the idea of maintaining security invariants while spending asymptotically less than an adversary that is trying to break them.  In this paper, we were able to do this for a few problems related to jamming in sensor networks and denial of service attacks.  However, the application of this kind of game theoretic analysis to security problems is still wide open.  Many interesting problems remain.

Scalable Mechanisms for Rational Secret Sharing.  This is a paper that I had not previously talked about on this blog.  The paper considers the problem of secret sharing when each player is rational.  In particular, we assume that each player prefers to learn the secret over not learning it, but prefers to be the only player learning the secret over being one of many players learning it.  Much interesting work has been done on this problem in the last few years, with the main goal being to design a mechanism that ensures that all players learn the secret (thereby maximizing social welfare).  The new problem we focus on in this paper is designing mechanisms that both reveal the secret to all players and are scalable in the sense that they require each player to send just a constant number of messages in expectation.  Secret sharing is a subroutine for many cryptographic protocols, so one motivation is to develop a scalable tool for problems like rational multiparty computation and implementation of a mediator.
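
For readers who haven’t seen the underlying primitive, here is a minimal Python sketch of classical (t, n) Shamir secret sharing.  This is just the textbook subroutine that the rational-secret-sharing literature builds on, not the mechanism from our paper; the prime and parameters below are toy choices.

```python
import random

P = 2**127 - 1  # a Mersenne prime; large enough for a toy example

def share(secret, t, n):
    """Split `secret` into n shares so that any t of them reconstruct it:
    evaluate a random degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P), using exactly t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * (-xj)) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 shares recover the secret
```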



The Decline Effect
March 3, 2011, 11:15 pm

In the most recent issue of Nature there is a short piece on the decline effect, which was discussed in a longer article in the New Yorker a few months back [1].  Succinctly, the decline effect is the phenomenon that the empirical support for certain scientific hypotheses declines over time.  The New Yorker article gives several examples of this effect.  One important example is the decline in empirical support for the effectiveness of certain medical drugs.  For example, a study mentioned in the article shows that the demonstrated effectiveness of anti-depressants has decreased by as much as three-fold over the past few decades.  A more frivolous example is the decline in empirical support for E.S.P. – this peaked in the 1930s, with several research papers showing empirical support, but declined markedly in the next decade [2].

Several explanations have been proposed for the effect, and there are even scientists who are now doing research specifically on the decline effect!  To me, the most reasonable explanation proposed consists of two parts: 1) scientists like hypotheses that are surprising but true; and 2) the criterion for the “truth” of a hypothesis (in many publication venues) is empirical support with 95% confidence.  The second fact suggests that, when empirically testing a hypothesis that, a priori, has a 50% chance of being true, there is a 2.5% chance of getting a false positive (a 50% chance the hypothesis is false, times a 5% chance that a false hypothesis passes the test).  The first fact suggests that many hypotheses that are tested have an a priori chance of being true that is not much more than 50%.  After all, if a hypothesis has, a priori, close to a 100% chance of being true, then it’s not very surprising, is it?  Thus, these two facts together suggest there will be a non-negligible fraction of hypotheses that scientists investigate for which empirical testing will give false positives.
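
To put numbers on this, here is a small Python calculation.  The 2.5% figure is the unconditional chance that a test of a 50/50 hypothesis produces a false positive; the fraction of published positive results that are false is larger, and it grows as the hypotheses being tested get more surprising.  The 80% statistical power below is my own assumption, not a number from either article.

```python
def false_positive_fraction(prior_true, alpha=0.05, power=0.8):
    """Among hypotheses that pass the significance test, the fraction that
    are actually false (a one-line Bayes calculation)."""
    true_pos = prior_true * power           # true hypotheses that test positive
    false_pos = (1 - prior_true) * alpha    # false hypotheses that test positive
    return false_pos / (true_pos + false_pos)

for prior in (0.5, 0.2, 0.05):
    print(f"prior {prior:.2f}: {false_positive_fraction(prior):.1%} of positive results are false")
```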

One solution to the decline effect, proposed in both articles, is the creation of a kind of “Journal of Negative Results”, which would publish empirical results that contradict certain hypotheses.  I heard this idea bandied about a lot during my grad student days, but I don’t think it will ever go very far.  Where’s the glory in finding facts that are boring but true?  Of course, there’s also the possibility of either 1) raising the 95% empirical-support threshold necessary for publication; or 2) requiring empirical verification by many independent studies before allowing publication.  Both of these ideas seem reasonable, but could slow down the scientific process.

Another idea (that was not proposed in either of the articles) is to raise the a priori support for a hypothesis.  In Computer Science, we have a mathematical methodology that frequently allows us to prove at least parts of a hypothesis that we want to show is true.  I think the decline effect is a nice justification for including at least some theoretical analysis in any paper in Computer Science.  Sometimes experiments let us reach further than a purely theoretical study can.  However, it seems important, in almost all cases, to provide at least some theoretical support for a hypothesis.

[1] The New Yorker rarely publishes articles on science, but when they do, the articles generally seem to be among the very best science writing out there.

[2] Believe it or not, according to the New York Times, a paper in “one of psychology’s most respected journals” recently claimed support for the existence of a certain type of ESP in college students.  It remains to be seen whether or not there will be a decline effect in the support for this new hypothesis (hint: if I were a betting man, I wouldn’t bet on the hypothesis!)