Live-Blogging HotNets 2011 (Day Two)

(OK, not quite live...)

Day One was over here.

Session 5

Christopher Riederer spoke about auctioning your personal information. Unfortunately I missed the talk, but it must have been a good one since there was quite a bit of discussion.

Next up was Vincent Liu speaking about Tor Instead of IP — which is just what it sounds like: Tor as an Internet architecture. Of course you can't just use Tor directly, and they have proposals for controlling incoming traffic, handling DoS, and getting better efficiency (lower stretch) when there is enough jurisdictional diversity and plausible routing-policy compliance. Similar to what Telex does with network-layer steganography, the general approach here is to make Internet connectivity an all-or-nothing proposition: if you can get anywhere outside a censored region, you can get everywhere, so all the censor can do is block the entire Internet.

Last was Gurney et al.'s Having your Cake and Eating it too: Routing Security with Privacy Protections. The notion of security here is that an AS's neighbors can verify that it selected and advertised routes according to its agreed-upon policy. The privacy is that they can verify this without learning any more information than the verification itself reveals. The paper presents protocols to verify several particular policies (e.g., if the AS was offered some route, then it advertised one). Could be useful for debugging interdomain connectivity issues.
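As a toy restatement of the kind of policy predicate being verified (my own example; the actual protocols verify it without the AS revealing its routing data):

```python
# Toy statement of one example policy: "if the AS was offered any route to a
# prefix, it advertised one." The real protocols check this without the AS
# revealing its routing tables; here it is simply checked in the clear.

offered = {"203.0.113.0/24": ["via AS100", "via AS200"], "198.51.100.0/24": []}
advertised = {"203.0.113.0/24": "via AS100"}

def policy_holds(offered_routes, advertised_routes):
    return all(prefix in advertised_routes
               for prefix, routes in offered_routes.items() if routes)

print(policy_holds(offered, advertised))   # True
```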

Session 6

Steven Hong presented Picasso, which provides a nice abstraction hiding the complexity of full-duplex signal shaping to utilize discontiguous spectrum fragments.

Mohammad Khojastepour spoke about using antenna cancellation to improve full duplex wireless.

Souvik Sen asked, Can physical layer frequency response information improve WiFi localization? Yes. And it involves driving Roombas around cafeterias.

Session 7

Poor old TCP felt snubbed at this workshop until Keith Winstein started roasting it. TCP is designed to work well in steady state under very particular assumptions. It fails under messy real-world conditions — hugely varying RTTs, stochastic rather than congestion-induced loss, very dynamic environments, and so on. Keith's goal is to make transmission control more robust and efficient by modeling network uncertainty. The end-host has a model of the network (potentially including topology, senders, queues, etc.), but any of these elements can have unknown parameters describing their behavior. The sender maintains a probability distribution over possible parameter values and updates its beliefs as evidence comes in from the network (e.g., ACKs). At any given time, it takes whatever action maximizes expected utility (some throughput/fairness/latency metric) given the current distribution over possible situations; the action is the delay before sending the next packet. It's a beautifully intuitive idea, or as a certain reviewer put it,

"This is the craziest idea I've heard in a very long time."

Keith showed a simulation of one small example where this approach decides to use slow-start-like behavior — without that being hard-coded in — but then, after getting a little information, immediately sends at the "right" rate. But there are big challenges. Some discussion touched on state explosion in the model, the danger of overfitting with an overly complicated model, and how much the sender needs to know about the network.
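To make the loop concrete, here's a minimal sketch of my own (nothing from Keith's work; the parameter grid, likelihood, and utility function are all invented): the sender keeps a belief over one unknown parameter, the bottleneck rate, updates it from observed ACK spacing, and picks the inter-packet delay that maximizes expected utility under that belief.

```python
import math

# Hypothetical discretized belief over one unknown parameter: the bottleneck
# rate in packets per second. Values and priors are invented.
rates = [10, 50, 100, 500, 1000]
belief = {r: 1.0 / len(rates) for r in rates}  # start with a uniform prior

def likelihood(ack_interval, rate):
    """Toy likelihood: ACK spacing is roughly exponential with mean 1/rate."""
    mean = 1.0 / rate
    return (1.0 / mean) * math.exp(-ack_interval / mean)

def update_belief(ack_interval):
    """Bayesian update of the belief after observing one ACK inter-arrival time."""
    for r in rates:
        belief[r] *= likelihood(ack_interval, r)
    total = sum(belief.values())
    for r in rates:
        belief[r] /= total

def utility(send_rate, true_rate):
    """Toy utility: reward delivered throughput, penalize sending above the bottleneck."""
    delivered = min(send_rate, true_rate)
    excess = max(0.0, send_rate - true_rate)
    return delivered - 2.0 * excess

def choose_delay():
    """Pick the inter-packet delay whose implied send rate maximizes expected
    utility under the current belief distribution."""
    candidates = [1.0 / r for r in rates]
    return max(candidates,
               key=lambda d: sum(p * utility(1.0 / d, r) for r, p in belief.items()))

# One step of the control loop: observe an ACK spacing, update beliefs, act.
update_belief(ack_interval=0.012)
print(f"next-packet delay: {choose_delay():.4f} s")
```

A real design would track many more parameters (queues, competing senders, loss processes), which is exactly where the state-explosion and overfitting worries from the discussion come in.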

Q: What would be the first variable you'd add to the network model beyond what TCP captures? A: Stochastic loss.

Q (Barath): Should we add a control plane to disseminate information about the network to assist the model? A: Anything that gets more information is good.

Q (Ion): When won't this work? A: Model mismatch — if the truth is something the model didn't consider.

Q: Do the smart senders coexist? If you have 2 versions of the algorithm, do they coexist? A: Good question.

Next, a movie. The BitMate paper — "BitTorrent for the Less Privileged" — by Umair Waheed and Umar Saif was one of the few entirely-non-US papers. Unfortunately visa issues prevented their attendance, but Umar presented via a movie. The problem is that high-bandwidth BitTorrent nodes form mutually helpful clusters, but no one wants to upload to low-bandwidth nodes. The solution included several interesting mechanisms, but Umar said the one that got 80% of the benefit was something they call "Realistic Optimistic Unchoke," which improves unchoking of low-bandwidth peers. BitMate gets dramatic improvements in efficiency and fairness.
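I don't know the details of their mechanism, but here is a toy sketch of the flavor I took away: bias the optimistic-unchoke choice toward peers that could realistically reciprocate, rather than picking uniformly at random. Peer names and bandwidths are invented, and this is emphatically not the authors' algorithm.

```python
import random

# Toy peer table: (peer_id, advertised upload bandwidth in kbps). All invented.
choked_peers = [("A", 5000), ("B", 56), ("C", 128), ("D", 2000), ("E", 64)]
my_upload_kbps = 100  # we are a low-bandwidth node

def reciprocation_weight(peer_kbps, my_kbps):
    """Guess at 'realism': prefer peers whose bandwidth is close to ours,
    since they are the ones likely to actually trade data with us."""
    return min(peer_kbps, my_kbps) / max(peer_kbps, my_kbps)

def optimistic_unchoke(peers, my_kbps):
    """Choose one choked peer to optimistically unchoke, biased by the weights
    above instead of picking uniformly at random as stock BitTorrent does."""
    weights = [reciprocation_weight(kbps, my_kbps) for _, kbps in peers]
    return random.choices(peers, weights=weights, k=1)[0]

print(optimistic_unchoke(choked_peers, my_upload_kbps))
```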

Umar took questions via Skype. It was so 21st century.

BitMate was written up in the NYTimes.

Q: what's your experience with real deployment? A: 40,000 users from 170 countries — a lot from US "for reasons that escape me" (~40% from North America). Many users from Iran, probably to circumvent censorship.

Vyas Sekar told us that a particular real 80,000-user network with tens of sites had 900 routers and 636 middleboxes (firewalls, NIDS, media gateways, load balancers, proxies, VPN gateways, WAN optimizers, voice gateways). Problem with middleboxes: device sprawl. Another problem: dealing with many vendors (Bruce Davie, Cisco: that problem we can fix!). Result: high CapEx and high OpEx. (Network security alone cost $6B in 2010, projected to reach $10B by 2016.) Also, middleboxes today are inflexible and difficult to extend.

So most net innovation happens via middleboxes, but it doesn't come easily. And middleboxes have been missing from the research community's discussion of innovation in networks. The Vision: Enable innovation in middlebox deployments. Approach: (1) software-centric implementations, (2) consolidated physical platform, (3) logically centralized open management APIs. There are also opportunities for reduced costs (via multiplexing) and improved efficiency (via reuse of functions like session reconstruction of TCP flows).
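Here's a crude sketch, purely my own illustration, of what the reuse point means: do the expensive shared work (like TCP session reconstruction) once on the consolidated platform, and let several logical middlebox functions consume the result.

```python
# Illustrative only: a consolidated platform where TCP stream reassembly is
# done once and the result is reused by several middlebox functions.

def reassemble_tcp(packets):
    """Stand-in for shared session reconstruction (naive in-order concatenation)."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

def nids(stream):
    return b"attack-signature" in stream        # toy intrusion detection

def proxy_cacheable(stream):
    return stream.startswith(b"GET ")           # toy proxy decision

def wan_optimized_size(stream):
    return len(stream) // 2                     # toy "compression"

packets = [{"seq": 1, "payload": b"GET /index"}, {"seq": 2, "payload": b".html"}]
stream = reassemble_tcp(packets)                # shared work, computed once
print({"nids_alert": nids(stream),
       "cacheable": proxy_cacheable(stream),
       "optimized_bytes": wan_optimized_size(stream)})
```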

Session 8

This was the data center session with three more cool papers, but unfortunately I had to leave early.

See the full program.

Overall I was impressed with the ideas and engaging presentations. Thanks to the chairs Aditya and Ion for a very well-run workshop.

Live-blogging HotNets 2011

Lots of exciting talks and discussion at HotNets. Here are a few highlights.

Session 1

The first session was on Internet architecture. Ali Ghodsi spoke about three unanswered questions for the burgeoning area of {data/information/content}-{centric/oriented} networking. These were privacy, data-plane efficiency, and whether ubiquitous caching (a key feature of nearly all the proposals) actually provides a quantitative improvement. For the latter point, the argument is that work on web caching from the late 1990s indicated that if you already have caching near the edge (as present-day web caches provide), then adding ubiquitous caching to the architecture does not provide much more benefit, because access distributions are heavy-tailed. So, does the caching advantage of information-centric networking warrant such a large-scale architectural change?
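Here's a back-of-the-envelope version of the argument (my own toy simulation, not from the talk) using a Zipf-like popularity distribution: an edge cache holding the most popular objects already absorbs most requests, and even doubling total cache capacity buys only a few more percentage points of hit rate.

```python
import random

# Toy Zipf-like workload: object i is requested with probability proportional to 1/i.
N = 100_000                                   # catalog size (invented)
weights = [1.0 / i for i in range(1, N + 1)]

def hit_rate(cache_size, samples=200_000):
    """Fraction of requests served if a cache holds the `cache_size` most popular objects."""
    requests = random.choices(range(1, N + 1), weights=weights, k=samples)
    return sum(1 for obj in requests if obj <= cache_size) / samples

edge_only = hit_rate(1_000)       # an edge cache holding the top 1% of the catalog
doubled = hit_rate(2_000)         # double the total cache capacity network-wide
print(f"edge cache: {edge_only:.2f} hit rate; doubled capacity: {doubled:.2f}")
```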

In later one-on-one discussions, Dan Massey (who later gave an interesting talk on the IPv4 grey market) argued that at least for the NDN project, ubiquitous caching is not the focus. Something more important is being able to do policy-aware multipath load balancing — in a very dynamic way in the network, by shipping content requests optimistically to multiple locations and seeing what ends up working well. A kind of speculative execution for forwarding. This may not be specific to content-awareness, but Dan argued that if you want to make this work, you end up needing something like NDN's Pending Interest Table mechanism. (The discussion was brief, but hopefully I restated the argument accurately.)

Dave Andersen and Scott Shenker argued that the primary goal of a future Internet architecture should be to accommodate evolution within the architecture itself, rather than just adding new functionality. The XIA approach introduces the notion of data-plane fallbacks so the sender can ask for new functionality and if it isn't supported everywhere, things still work. Scott focused on bringing evolvability to the architecture by applying the principles of extensibility and modularity.

There were several questions about what would be the incentives to deploy either approach. Scott responded that while incentives are important, first we need to understand what technical mechanisms we need to make evolvability feasible — which previously we have not understood. Ion Stoica asked, What would be the first thing that would drive deployment of one of the future Internet architectures? Some of the answers included SCADA networks which need extreme security, content caching (despite the first talk!) where content providers have monetary incentives, and (from Hari) the ability to deploy differential pricing by having more information about applications' intent (though users may not like this!).

Another question was whether these architectures would actually fix the processes that led to ossification in practice (e.g., via middleboxes), and whether they would aid deployment of protocols like secure BGP, which have had problems in practice.

Session 2

Haitao Zheng from UCSB spoke about building wireless data centers, where directional wireless interfaces on racks of servers can be dynamically steered to connect pairs of racks that need to communicate. If you want to connect racks of servers with high-bandwidth wireless links, interference is a big problem. Their approach is 3D beamforming: rather than aiming the radio directly (in the 2D plane at the top of the racks), bounce it off a reflective ceiling and put an absorber around the target. Receiving from above rather than across the rack tops reduces interference. In addition to having many pretty pictures of interference patterns, this is part of a line of work (in wireless and optical) with a very cool approach — we always think of changing the traffic flow to match the topology; now we can change the physical topology to match the traffic.
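For a feel of the geometry (numbers invented, nothing from the paper): the ceiling bounce costs only a modest increase in path length while letting the beam pass over intervening racks.

```python
import math

# Invented geometry: rack-top antennas 3 m below the ceiling, racks 10 m apart.
ceiling_height = 3.0     # meters above the antennas
rack_distance = 10.0     # horizontal separation of the two racks

direct_path = rack_distance                                           # 2D path along rack tops
reflected_path = 2 * math.hypot(rack_distance / 2, ceiling_height)    # bounce at the midpoint
elevation_deg = math.degrees(math.atan2(ceiling_height, rack_distance / 2))

print(f"direct: {direct_path:.1f} m, ceiling bounce: {reflected_path:.1f} m, "
      f"beam elevation: {elevation_deg:.1f} degrees")
```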

Abhinav Pathak talked about finding energy bugs in mobile devices — as he said, that hits three hot keywords.

Jonathan Perry spoke about Rateless Spinal Codes. My main question: why do coding schemes get to have such cool names?

Session 3

Mark Reitblatt spoke on "Consistent Updates for Software-Defined Networks: Change You Can Believe In!". Here is the problem they are solving: as you are reconfiguring your network, how can you be sure your policy (like availability or security) is preserved even during the transition? Traditionally, this is hard because of the inconsistency of having one set of forwarding rules deployed in some places and another set deployed elsewhere. Actually, it might seem impossible: even if you magically deploy a change everywhere instantly, you can still get policy violations, because in-flight packets take non-negligible time to cross the network. Can you solve it? Yes you can!
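The summary above doesn't spell out the mechanism, but one way to get the flavor (my own sketch, not necessarily the authors' scheme) is to stamp each packet at ingress with the configuration version it should be processed under, so every packet is handled entirely by the old rules or entirely by the new ones, never a mix.

```python
# Toy model: switches keep rule sets for both configuration versions; packets
# carry the version stamped at ingress, so no packet ever sees a mixture.

rules = {
    1: {"sw1": "fwd->sw2", "sw2": "fwd->sw3", "sw3": "deliver"},   # old configuration
    2: {"sw1": "fwd->sw3", "sw3": "deliver"},                      # new configuration
}

def ingress(packet, current_version):
    packet["version"] = current_version      # stamp exactly once, at the network edge
    return packet

def forward(packet, switch):
    return rules[packet["version"]][switch]  # each hop uses only the stamped version

pkt = ingress({"dst": "10.0.0.7"}, current_version=1)  # in flight during the update
print(forward(pkt, "sw1"), forward(pkt, "sw2"))        # consistently the old rules
```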

Junda Liu should get some sort of award for giving the most entertaining talk that also featured a state machine diagram.

Barath Raghavan calculated the energy and emergy of the Internet, which has been getting some press recently and which generated a lot of discussion on the complexities and implications of measuring society's energy use.

Awesome feature of this session: all the talks finished early!

Session 4

Jon Howell spoke on a proposed refactoring (and narrowing) of the API for web applications executing on user machines.

Ethan Katz-Bassett spoke about Machiavellian Routing. The coolness here is a trick by which ISPs can control inbound routing, so if they notice there is a connectivity problem at some AS they can induce other senders to avoid the problem.

Dan Massey noted that a grey market is emerging for IPv4 addresses and argued that we need a way not to prevent the market from existing outside traditional Internet governance, but instead to verify which transactions happen. This would make the market more honest and efficient. Most interesting point from the discussion (I think from Jon Howell): why do we want to improve the IPv4 market? It would allow more efficient use of available IPv4 addresses ... but if we let the market stay as baroque and inconvenient as possible, it will encourage deployment of IPv6 sooner!

Onward to Day Two...