FOCI Travel Report
August 21, 2013, 7:58 pm

Recently my student Mahdi Zamani presented a paper, Towards Provably-Secure Scalable Anonymous Broadcast, at FOCI ’13, the third USENIX Workshop on Free and Open Communications on the Internet. This is his report on the workshop. – Jared

Last week, I attended the third USENIX Workshop on Free and Open Communications on the Internet (FOCI ’13) in Washington, DC. The workshop consisted of three main sessions of peer-reviewed full papers and a rump session of quick presentations. The first session focused on state-level censorship, discussing various methods used by governments around the world to suppress freedom of speech. The second session centered on methods for detecting and preventing censorship on the Internet, including techniques for monitoring censorship, analyzing captured data, and designing efficient infrastructures for censorship prevention. The third session focused on technical methods for anonymous communication, an important approach to protecting against surveillance and censorship. The main sessions were followed by a one-hour rump session aimed at briefly presenting new challenges and ongoing work in this area. In the following, I give a summary of a few interesting papers presented at the workshop.

In the first session, John Verkamp from Indiana University explained how Twitter spam was used by the governments of China, Syria, Russia, and Mexico during five political events from 2011 to 2012 to inhibit political speech. Such spam appears as a large number of tweets that share a hashtag but whose content is irrelevant to the subject, effectively rendering a politically charged hashtag useless. The paper gives several interesting patterns that can be used to easily distinguish spam tweets from innocent tweets. For example, spam tweets usually use fewer stop words than non-spam tweets, and in Mexico, spammers used account names that all had exactly 15 characters. These findings seem extremely helpful for guiding defense mechanisms against political spam in social networks.
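To make the flavor of these patterns concrete, here is a toy sketch of the two heuristics mentioned above. The stop-word list, threshold, and function names are my own illustrative assumptions, not the paper's actual classifier.

```python
# Toy heuristics inspired by the patterns described in the paper:
# (1) spam tweets tend to use fewer stop words than normal tweets;
# (2) in the Mexico case, spam accounts all had 15-character names.
# The word list and threshold below are illustrative assumptions.

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "for"}

def stop_word_ratio(text):
    """Fraction of words in the tweet that are common stop words."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in STOP_WORDS for w in words) / len(words)

def looks_like_spam(tweet_text, account_name, ratio_threshold=0.1):
    """Flag a tweet as suspicious if it uses almost no stop words
    or comes from an exactly-15-character account name."""
    return (stop_word_ratio(tweet_text) < ratio_threshold
            or len(account_name) == 15)
```

A real system would of course combine many such weak signals rather than rely on any single one.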

Mobin Javed from UC Berkeley presented a paper by Zubair Nabi from Pakistan, where access to the Internet is mostly controlled by the state and is censored in a centralized way at the Internet exchange level. The paper describes several techniques used by the Pakistani government since 2006 to filter Internet content, including DNS lookup blocking, IP blacklisting, HTTP filtering, and URL keyword filtering. An interesting aspect of this research is that it is the first study of the causes, effects, and mechanisms of web censorship in Pakistan.
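One simple way to probe for the DNS-level blocking described above is to compare the answers a local (possibly censored) resolver returns against those of a trusted reference resolver. The sketch below shows only that comparison logic; the blockpage address is a placeholder, and real measurement tools need more evidence than a bare mismatch, since CDNs legitimately return different IPs in different locations.

```python
# Minimal sketch of DNS-tampering checks, assuming we already hold
# answer sets from a local resolver and a trusted reference resolver.

def dns_answers_differ(local_answers, reference_answers):
    """Crude tamper signal: the local resolver's answers share no IPs
    with the reference resolver's answers. (CDNs can legitimately
    produce disjoint sets, so this is suggestive, not conclusive.)"""
    return not (set(local_answers) & set(reference_answers))

# Censors often redirect blocked names to a known blockpage or
# sinkhole address, which can be checked directly. The entry below
# is an illustrative placeholder, not a curated list.
BLOCKPAGE_IPS = {"203.0.113.1"}

def points_to_blockpage(answers):
    """True if any returned IP is a known blockpage address."""
    return any(ip in BLOCKPAGE_IPS for ip in answers)
```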

In the second session, Philipp Winter from the Tor Project and Karlstad University presented an analyzer tool that provides usage statistics in order to see how Tor is blocked in various countries and to possibly improve Tor access. One challenging problem in Tor analysis is finding a node (called a shell) inside the censoring network to capture and debug Tor traces while respecting users' privacy. With their tool, users are expected to run it voluntarily, and I think one issue is how to incentivize people to do so. The paper does not provide any cost analysis of the tool, so it is unclear how much bandwidth and computation power it needs. Another open question is whether a strong adversary could use information from the tool to attack anonymity.

Shaddi Hasan from UC Berkeley talked about an infrastructure for future censorship-resistant networks (called dissent networks). He motivated the need for such an infrastructure with examples of recent Internet blackouts during the Arab Spring in Egypt, Libya, and Syria. The authors suggest four essential requirements for dissent networks: resilience against disruption, meaningful scalability, innocuous components, and resistance to tracking. They argue, finally, that anonymity is a necessary property of dissent networks, although it can be abused to send false messages, impersonate other users, or mount Sybil attacks. These downsides of anonymity stem mainly from the lack of reputation or accountability.

As part of a technical session at the USENIX Security Symposium, Henry Corrigan-Gibbs from Yale University presented an anonymous messaging system called Verdict, which is built on top of their previous protocol, Dissent. Both protocols are based on Dining Cryptographers networks (DC-Nets), which provide anonymity with resistance to traffic-analysis attacks. DC-Nets suffer from poor scalability and vulnerability to jamming attacks. Dissent addresses the jamming problem of DC-Nets retroactively, while Verdict addresses the poor scalability and detects disruptions proactively. Unlike traditional DC-Nets, Verdict is cryptographic in nature and assumes a client-server architecture in which at least one server is honest. Each client attaches non-interactive zero-knowledge proofs to its ciphertext to prove correctness. To reduce the large bandwidth cost of these proofs, the authors optimize them to make them smaller; in addition, the honest server defers checking the proofs until after a disruption occurs. Although the paper reports several empirical results, it remains unclear how the protocol scales. Moreover, it is not clear from the empirical results how many bits each party must send to transmit one anonymous message.
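For readers unfamiliar with DC-Nets, the following toy sketch shows the classic XOR-based primitive underlying both Dissent and Verdict: each pair of participants shares a secret pad, everyone publishes the XOR of their pads (the sender additionally XORs in the message), and combining all outputs cancels every pad and reveals the message without revealing who sent it. Verdict's actual construction replaces the XOR scheme with verifiable ciphertexts and the zero-knowledge proofs mentioned above; none of that machinery is shown here.

```python
# Toy one-slot DC-Net round: pads cancel pairwise, so XOR-ing all
# outputs reveals the sender's message but not the sender's identity.
import secrets

def dc_net_round(n, sender, message, msg_len=16):
    # Pairwise shared secrets: pads[i][j] == pads[j][i].
    pads = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            s = secrets.token_bytes(msg_len)
            pads[i][j] = pads[j][i] = s

    # Each participant publishes the XOR of its pads; the sender
    # also XORs in the message.
    outputs = []
    for i in range(n):
        out = bytearray(msg_len)
        for j in range(n):
            if j != i:
                for k in range(msg_len):
                    out[k] ^= pads[i][j][k]
        if i == sender:
            for k in range(len(message)):
                out[k] ^= message[k]
        outputs.append(bytes(out))

    # Combining all outputs cancels every pad, leaving the message.
    combined = bytearray(msg_len)
    for out in outputs:
        for k in range(msg_len):
            combined[k] ^= out[k]
    return bytes(combined)
```

Note that this toy version is exactly what a jammer can exploit: any participant can XOR in garbage undetectably, which is the disruption problem Dissent and Verdict are designed to solve.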

Anonymity is known to protect people from Internet surveillance and censorship. But there is a counterargument: what if adversaries (e.g., terrorists) use anonymity to escape legal surveillance conducted to ensure people's safety? Fortunately, I had a chance during the rump session to discuss this topic. Roger Dingledine, a co-founder of the Tor Project, and William Allen Simpson, an independent network security professional, were strong defenders of the availability of full anonymity, without exception. They both reject "traceable anonymity" (anonymity protected, and revocable, by law) and, more generally, anything that gives any person or authority (including the judicial system) the power to control anonymity. Roger gave an example: suppose person A in Syria is in danger of death because of his speech, and person B in the US is in danger of death from a terrorist attack. Now ask, "Can Tor save or harm these persons?" The answer is that it can definitely save person A's life, but it cannot directly harm person B's life, because terrorists have many other ways to attack. The benefits we get from freedom of speech, in other words, are much greater than the costs we pay for it.

