Machinations


Homework and the Internet
January 29, 2010, 8:56 pm
Filed under: Uncategorized

This is what I have been reduced to:

“Let P(x), Q(x), R(x), S(x, y) be the predicates “x is a true dungeon master”, “x has sex appeal”, “x is a wood-elf”, and “x is a friend to y”. Translate the following statements into predicate logic.

• A true dungeon master is a friend to all wood-elves
• Only true dungeon masters have sex appeal
• Bob is not a friend to some wood-elf

Prove from the statements that Bob does not have sex appeal.”
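
For what it is worth, here is one standard way to write the three statements (using b for Bob is my own shorthand); any logically equivalent form is of course fine:

```latex
% One standard translation (writing b for Bob):
\begin{align*}
  &\forall x\,\bigl(P(x) \rightarrow \forall y\,(R(y) \rightarrow S(x,y))\bigr) \\
  &\forall x\,\bigl(Q(x) \rightarrow P(x)\bigr) \\
  &\exists y\,\bigl(R(y) \wedge \neg S(b,y)\bigr)
\end{align*}
```

From these, assuming Q(b) gives P(b) by the second statement, hence S(b,y) for every wood-elf y by the first, contradicting the third; so ¬Q(b).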

I’m not sure when it happened, but for at least several years now, solutions to the problems in any good textbook have been findable somewhere on the Internet.  When I first started teaching, my homework sets were usually a combination of easier problems from the textbook and harder problems that I made up.  These days, I can’t really assign anything from a textbook, even the easy problems.

The ubiquity of solutions to written homework problems definitely has its downsides.  It has made my job a little harder, but that hasn’t really been a big problem.  The major downside is that there are some very good problems that students are missing out on.  For example, I recently covered the proof that the square root of 2 is irrational in my mathematical foundations of computer science class.  A great homework problem to ask after students see this proof is to prove that the square root of 3 is irrational.  However, I really don’t feel it is fair to assign this problem in a large class where certainly at least one student will look up the solution on the Internet.
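
For readers who haven’t seen it, here is a rough one-line sketch of the classic argument covered in class (the square-root-of-3 version swaps divisibility by 3 for parity):

```latex
% Sketch: suppose $\sqrt{2} = a/b$ with $a, b$ integers sharing no common factor. Then
\[
  a^2 = 2b^2 \;\Rightarrow\; 2 \mid a \;\Rightarrow\; a = 2k
  \;\Rightarrow\; 4k^2 = 2b^2 \;\Rightarrow\; b^2 = 2k^2 \;\Rightarrow\; 2 \mid b,
\]
% contradicting the choice of $a/b$ in lowest terms.
```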

I’m curious how other lecturers are dealing with this issue, and how students feel about it.  Should we be trying harder to hide the solutions to good homework problems?  Should we plant “fake” solutions on the web?  What happens when you google “dungeon master” and “sex appeal”? (P.S.  Hello to my CS261 students!)




Day 2 at SODA
January 19, 2010, 10:54 pm
Filed under: Uncategorized

Editor’s note: Another guest post by Amitabh Trehan.

I am in the business meeting: surprisingly, it’s a lot of fun, maybe due to the free drinks and the humorous presentations by the contenders for SODA 2012, freed, it seems, from the betas, thetas and hidden constants! OK then: after multi-round voting by a show of hands and a large-scale game of non-musical chairs (to help get the count correct), Kyoto has narrowly triumphed in its bid for 2012 over Barcelona; the other contenders (in order of votes received) were Prague, Rome and New York. There were some wild celebrations and tennis-style fist-pumping!

There were many exciting talks today, ranging from algorithmic game theory to distributed computing to graph algorithms, and some indescribable topics like those at the business meeting!

Noam Nisan gave a very interesting talk on Google’s auction for TV ads.  I think I had seen this article mentioned somewhere on the blogosphere but avoided reading it (maybe I thought it was a Google advert :). First, it was indeed a surprise that Google markets television ads, but as it turns out, they have good reason to do so. Google brings a new system for marketing advertisement time that is different from the traditional office-room deals, and that brings nice algorithmic challenges too. For the advertiser, Google offers a web-based interface, precise targeting of content, and excellent feedback. The auctions are run every day (for the next day’s advertisements, I think). Informally, the auction has slots for sale; bidders bid with their values for particular slots and a total budget; and the auction outputs the set of slots won by each bidder and the price charged for each slot.  The problem model is fairly straightforward with some simple constraints. The algorithm tries to optimize revenue, efficiency, and fairness.

The auction is based on a simultaneous ascending auction and basically works as follows: each ad slot has an associated price that keeps increasing throughout the auction. Prices start at low reserve prices and rise whenever there is “over-demand” for an ad slot (i.e., a slot that is currently held by one bidder is desired by another). Such small price increases keep going until there is no “over-demand”, at which point the auction closes. The basic step in the auction is the calculation of the “demand” of a bidder at the current prices, i.e., which set of slots this bidder would desire to acquire assuming that slots are priced as given. In its basic form, the calculation of this demand is done by a greedy algorithm that chooses slots in order of decreasing bid-to-price ratio. Under certain theoretical assumptions (“gross substitutes”) it is known that this procedure ends in a “Walrasian market equilibrium”, and under even stricter assumptions these prices are incentive compatible, i.e., they give no bidder any strategic reason to under- or over-bid.  In reality, some slots may go unsold, though; they are put through the wringer again until Google is sure nobody wants them. At that point Google fills them with public service ads, such as those asking people to desist from taking drugs. As Noam said, it’s not that Google doesn’t want you to take drugs, it’s just that they have empty slots :). So, a betting hunch I have from this talk: if you see too many public service ads on television, maybe the slots are going cheap and it’s time to buy them!
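
To make the mechanics concrete, here is a toy sketch of such a simultaneous ascending auction with greedy, budget-limited demand. This is my own illustration, not Google’s system: the bidder names, bids, budgets, reserve prices and the price increment are all made up.

```python
# Toy simultaneous ascending auction with greedy, budget-limited demand.
# Purely illustrative: all names and numbers below are hypothetical.

EPSILON = 1.0  # price increment applied to an over-demanded slot


def demand(bids, budget, prices):
    """Greedy demand at current prices: take slots in order of decreasing
    bid-to-price ratio while the bid covers the price and the budget lasts."""
    wanted, spent = set(), 0.0
    for slot in sorted(bids, key=lambda s: bids[s] / prices[s], reverse=True):
        if bids[slot] >= prices[slot] and spent + prices[slot] <= budget:
            wanted.add(slot)
            spent += prices[slot]
    return wanted


def ascending_auction(bidders, reserve):
    """bidders: {name: (bids_per_slot, budget)};  reserve: {slot: reserve_price}.
    Returns the final assignment of slots to bidders and the closing prices."""
    prices = dict(reserve)
    holder = {slot: None for slot in reserve}
    while True:
        over_demanded = False
        for name, (bids, budget) in bidders.items():
            for slot in demand(bids, budget, prices):
                if holder[slot] is None:
                    holder[slot] = name        # free slot: take it at the current price
                elif holder[slot] != name:
                    prices[slot] += EPSILON    # over-demand: price rises, slot changes hands
                    holder[slot] = name
                    over_demanded = True
        if not over_demanded:                  # no over-demand left: the auction closes
            return holder, prices


# Hypothetical example: two bidders competing for three slots.
bidders = {
    "acme":   ({"s1": 10.0, "s2": 6.0, "s3": 4.0}, 12.0),
    "globex": ({"s1": 8.0,  "s2": 7.0, "s3": 5.0}, 10.0),
}
print(ascending_auction(bidders, {"s1": 1.0, "s2": 1.0, "s3": 1.0}))
```

The loop terminates because every round with over-demand raises some price by a fixed increment, and a slot whose price exceeds every bid is demanded by nobody.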

I would like to briefly mention a few more talks here. Monotonicity in Bargaining Networks by Yossi Azar et al. talked about bargaining networks.  In these networks the nodes are players, and each edge has a certain amount of money on it that can be split between the endpoints of that edge only if the endpoints reach an agreement on how to split it.  However, each node is allowed to split money with at most one neighbor.  There are many real-world situations where such networks exist, e.g., among buyers and sellers, where the edge weight can be thought of as the difference in value of the object being sold, and in dating or marriage, where the edge weight would relate to the benefit of a relationship. A stable solution is related to the fractional minimum vertex cover and thereby to maximum matching.  The authors show that several solution concepts (i.e., stable outcomes) for this game are monotonic on bipartite graphs, where monotonic means that adding an edge between nodes x and y can only increase the profit of x and y in the bargaining network. Interesting questions remain: What are the mechanisms of bargaining? Are there simple distributed algorithms for these solutions? Lower bounds?
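
For readers unfamiliar with the model, here is the standard formulation as I understand it (my paraphrase, not the paper’s exact notation): an outcome is a matching together with a split of each matched edge’s money, and stability says no pair of endpoints could both do better by defecting to each other.

```latex
% Bargaining network: a graph $G=(V,E)$ with edge weights $w_e \ge 0$.
% An outcome is a matching $M \subseteq E$ and an allocation $x : V \to \mathbb{R}_{\ge 0}$ with
%   $x_u + x_v = w_{uv}$ for every $(u,v) \in M$, and $x_v = 0$ for every unmatched $v$.
% The outcome is stable if no edge offers both endpoints a better deal:
\[
  x_u + x_v \;\ge\; w_{uv} \qquad \text{for every edge } (u,v) \in E .
\]
% Such an $x$ acts as a fractional-cover-style certificate, which is where the connection
% to fractional vertex covers and, by LP duality, to maximum matching comes from.
```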

The evening talks on A Model of Computation for MapReduce and Deterministic Algorithms for the Lovász Local Lemma drew much interest from the audience. I also enjoyed the talk Highway Dimension, Shortest Paths, and Provably Efficient Algorithms, and the one on max flows in planar graphs.



SODA at Austin has a good fizz.
January 18, 2010, 1:17 pm
Filed under: Uncategorized

Room with a view: Austin night © Amitabh Trehan

Editor’s note: This is another guest post from  Amitabh Trehan.

Hello from SODA at Austin. First, a few questions as a warmup exercise (it’s a theory conference!):
– If I blog about the conference (my second time reporting on a conference), should I not get a press card and full financial support from the organizers? 🙂
– If the proceedings are on disk only, should there be more (or fewer) power outlets in the seminar rooms?
– A European friend asked me, ‘Is a continental breakfast supposed to be just a bowl of fruit and coffee? Which continent are they referring to?’
– How did Austin end up in the middle of Texas?

So, on to some reports now (before my editor cuts my writing for being mathematically imprecise!). Kudos to Austin: on Saturday I got to watch a wonderful performance by the Austin Symphony Orchestra and violinist Nadja Salerno-Sonnenberg from the very first row on a $5 student ticket, and today, live blues at a downtown bar. The conference itself is big: there are 3 parallel sessions throughout the day from 9 am to almost 7 pm, besides a plenary session and the best paper presentation. Of course, I could attend at most (n-2)/3 + 2 talks, where n was the total number of talks.

The best paper award has been given to Asadpour, Goemans, Madry, Oveis Gharan and Saberi’s paper An O(log n / log log n)-approximation Algorithm for the Asymmetric Traveling Salesman Problem. This paper breaks the 28-year-old Θ(log n) bound set by Frieze et al. for the classic Asymmetric Traveling Salesman Problem (ATSP).  To remind the reader: given a set V of n vertices and a cost function c: V × V → R+, the Traveling Salesman Problem (TSP) asks for a minimum-cost tour visiting each vertex at least once; ATSP is the asymmetric version, where c(u,v) need not equal c(v,u). The technique they use is similar to Christofides’ algorithm for the symmetric version, which gives a factor 3/2 approximation. At a very high level, the idea is to construct a special spanning tree, find a minimum-cost Eulerian augmentation of this tree, and shortcut the resulting Eulerian walk to get the tour. The spanning tree is special in the sense that it is a thin tree. The algorithm uses the Held-Karp linear programming (LP) relaxation, and a thin tree is roughly defined, with respect to the optimal solution x* of the relaxation, as a spanning tree that for every cut contains only a small multiple of the corresponding x* value across that cut. Shayan Oveis Gharan, presenting the paper, highlighted as important the idea of ‘thinness’ and that of maximum entropy: explicitly fix the desired combinatorial structure, and then maximize the randomness. Overall, a very nice paper.
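
For concreteness, here is a rough rendering of the Held-Karp relaxation for ATSP and of the thinness condition, written from memory of the talk, so the notation (not the substance) is mine:

```latex
% Held-Karp LP relaxation for ATSP on the arc set $A = \{(u,v) : u \neq v\}$:
\begin{align*}
  \min \; & \sum_{a \in A} c(a)\, x_a \\
  \text{s.t. } & x\bigl(\delta^{+}(U)\bigr) \ge 1 && \text{for all } \emptyset \neq U \subsetneq V, \\
               & x\bigl(\delta^{+}(v)\bigr) = x\bigl(\delta^{-}(v)\bigr) = 1 && \text{for all } v \in V, \\
               & x_a \ge 0 && \text{for all } a \in A.
\end{align*}
% Thinness: letting $z^{*}$ be the symmetrized optimum $x^{*}$, a spanning tree $T$ of the
% underlying undirected graph is $\alpha$-thin if
% $|T \cap \delta(U)| \le \alpha \cdot z^{*}(\delta(U))$ for every cut $\delta(U)$.
```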

The other talk I will briefly discuss is the plenary session talk by Cynthia Dwork on ‘Differential Privacy’. She called it ‘postmodern private data analysis’. From what little I know (which is not much) of postmodernism, this seems an apt analogy: the author is supposed to be much less important (or almost anonymous) compared to the work itself. The aim of differential privacy is not only important but urgent in our time: how to publicly release statistical information about a set of people without compromising the privacy of any individual. Identity theft and the ease with which one can obtain personal information are getting alarming. After giving a brief history of the problem and explaining the concepts of adjacent databases, epsilon-differential privacy, linkage attacks, etc., she presented the new extensions of their work, using the H1N1 epidemic as an example: continual observation and pan-privacy.  Continual observation asks how one can analyze aggregate user information to monitor, say, regional health conditions, while maintaining differential privacy. Pan-privacy maintains the privacy of the database “inside and out”, completely hiding the pattern of the data, and protects against both announced (legal action, subpoenas, etc.) and unannounced intrusions. The pan-private density estimator, as it is called, is based on an idea from the social sciences called randomized response, given by Warner in 1965. The idea is to toss a coin while recording a survey respondent’s answer, and to record either the true answer or a random one depending on the result of the toss. As Cynthia showed in her last slide, there are many animals now in the differential privacy zoo!
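
As a concrete illustration of randomized response (my own toy sketch, not anything from the talk): with a fair coin, each respondent reports the truth with probability 3/4 and the opposite with probability 1/4, which protects any single answer (the likelihood ratio is at most 3, i.e., ln 3-differential privacy) while still letting us estimate the population fraction.

```python
# Toy sketch of Warner-style randomized response (illustrative only; the
# numbers and the hypothetical survey are made up, not code from the talk).
import random


def randomized_response(true_answer: bool) -> bool:
    """Flip a coin: heads -> report the truth; tails -> flip again and
    report that second coin instead of the truth."""
    if random.random() < 0.5:
        return true_answer
    return random.random() < 0.5


def estimate_fraction(reports):
    """Unbiased estimate of the true 'yes' fraction p:
    Pr[report yes] = p/2 + 1/4, so p is about 2*(observed yes rate) - 1/2."""
    yes_rate = sum(reports) / len(reports)
    return 2 * yes_rate - 0.5


# Hypothetical population where 30% would truthfully answer "yes".
random.seed(0)
truths = [random.random() < 0.3 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]
print(f"estimated fraction: {estimate_fraction(reports):.3f}")  # close to 0.30
```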

Besides these, I attended some interesting talks during the day, ranging from property testing to graph theory (minors, etc.) and algorithms concerning clique-width and kernels. There was also what one participant called ‘the Kawarabayashi session’, which had four papers by Ken-ichi Kawarabayashi, one fewer than last year (as far as I know).



Another Application of Game Theory
January 7, 2010, 5:43 pm
Filed under: Uncategorized

Well, it took me a trip out to Rome to find out about it, but David Kempe (at WINE) told me about another application of game theory developed in CS that is actually used in the real world.  Milind Tambe at USC has created a system, based on game-theoretic concepts, that is currently used for scheduling security activities at the LA airport (and soon the Pittsburgh airport).  You can read about it here and in more detail in the research papers on Milind’s web page.

The basic idea seems to be the following.  They analyze the problem as a Stackelberg game in which the airport first decides on a probability distribution over which security activities will be scheduled that day (e.g., at which gates complete searches of luggage will occur, where and when drug-sniffing dogs will be deployed, etc.), and then the “bad guys” get to decide on their actions, e.g., which areas of the airport they will try to smuggle contraband through.  The goal of airport security is of course to maximize the probability of catching illegal activity and to minimize the expected damage from illegal activity.  To do this, there needs to be some unpredictability on the part of the airport, since the bad guys get to see the history of security activities.
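
To give a flavor of the computation, here is my own toy sketch, not Milind Tambe’s system: in the zero-sum special case the defender’s commit-first Stackelberg strategy coincides with the maximin strategy, so it can be found with a single linear program. The targets and payoff numbers below are invented.

```python
# Toy sketch: randomized coverage for a zero-sum security game, where the
# Stackelberg (commit-first) strategy equals the maximin strategy and can be
# computed with one LP. All payoff numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

# Defender's payoff at each target if it is covered vs. left uncovered.
u_cov = np.array([ 0.0,  0.0,  0.0])   # attack on a covered target: attacker caught
u_unc = np.array([-5.0, -3.0, -8.0])   # attack on an uncovered target: damage varies

n = len(u_cov)
# Variables: p_1..p_n (probability each target is covered) and v (guaranteed payoff).
# Maximize v  <=>  minimize -v.
c = np.zeros(n + 1)
c[-1] = -1.0

# If the attacker hits target t, the defender gets p_t*u_cov[t] + (1-p_t)*u_unc[t] >= v,
# i.e.  v - p_t*(u_cov[t] - u_unc[t]) <= u_unc[t].
A_ub = np.zeros((n, n + 1))
for t in range(n):
    A_ub[t, t] = -(u_cov[t] - u_unc[t])
    A_ub[t, -1] = 1.0
b_ub = u_unc.copy()

# One security resource: coverage probabilities sum to 1.
A_eq = np.zeros((1, n + 1))
A_eq[0, :n] = 1.0
b_eq = np.array([1.0])

bounds = [(0.0, 1.0)] * n + [(None, None)]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("coverage distribution over targets:", np.round(res.x[:n], 3))
print("defender's guaranteed expected payoff:", round(res.x[-1], 3))
```

The general-sum setting used in practice is harder; as far as I understand, it is solved with more involved optimization, but the commit-to-a-distribution idea is the same.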

This is really very cool research – David tells me that after implementing the system, LAX has been able to detect more illegal activity.