This is Day Two. Day One is here.
Mobile and Wireless
Calum Harrison presented work on making rateless codes more power-efficient. Although rateless codes do a great job of approaching the Shannon capacity of the wireless channel, they are computationally expensive, which can be a problem on mobile devices. This paper also optimizes for the cost of decoding, measured in CPU operations, and achieves 10-70% fewer operations at a competitive rate. [Calum Harrison, Kyle Jamieson: Power-Aware Rateless Codes in Mobile Wireless Communication]
Shailendra Singh showed that no single wireless transmission strategy is always best. For each of DAS, spatial reuse, and network MIMO, the paper experimentally verifies that there exists a user profile (is the user moving, how much interference is there, etc.) for which that scheme beats the others. TRINITY is a system they're building to automatically get the best of each scheme in a heterogeneous world. [Shailendra Singh, Karthikeyan Sundaresan, Amir Khojastepour, Sampath Rangarajan, Srikanth Krishnamurthy: One Strategy Does Not Serve All: Tailoring Wireless Transmission Strategies to User Profiles]
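To make the idea concrete, here is a rough Python sketch of the kind of profile-driven selection logic a system like TRINITY would need; the profile fields, thresholds, and decision rules are my own illustration, not the paper's.

    # Hypothetical sketch: pick a transmission strategy from a coarse user profile.
    # The profile fields and thresholds are illustrative, not TRINITY's actual logic.

    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        mobile: bool            # is the user moving?
        interference: float     # estimated interference level, 0.0 (none) to 1.0 (high)
        nearby_users: int       # contending users in the same cell/sector

    def choose_strategy(p: UserProfile) -> str:
        if p.mobile:
            # Distributed antennas give robustness when the channel changes quickly.
            return "DAS"
        if p.interference > 0.5 and p.nearby_users > 1:
            # Coordinated transmission can cancel interference for static, dense users.
            return "Network MIMO"
        # Low interference: just reuse the spectrum aggressively.
        return "Spatial Reuse"

    print(choose_strategy(UserProfile(mobile=False, interference=0.7, nearby_users=4)))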
Narseo Vallina-Rodriguez argued for something that may be slightly radical: "onloading" traffic from a wired DSL network onto wireless networks. We sometimes think of wireless bandwidth as a scarce resource, but in some situations your wireless throughput could easily be twice your DSL throughput. If there is spare wireless capacity, why not use it? 40% of users use less than 10% of their allocated wireless data volume. They tested this idea in a variety of locations at different times and can get order-of-magnitude improvements in video streaming buffering. Apparently the reviewers noted that wireless providers wouldn't be big fans of this, but Narseo noted that his coauthors are all from Telefonica. Interesting question from Brad Karp: How did we get here? Telefonica owns both the DSL and the wireless network; if you need additional capacity, is it cheaper to build out wireless capacity or wired? The answer seems to be that wired is way cheaper, but we need to have wireless anyway. Another commenter: this is promising because measurements show that congestion on wireless and DSL peaks at different times. Open question: Is this benefit going to hold long term? [Narseo Vallina-Rodriguez, Vijay Erramilli, Yan Grunenberger, Laszlo Gyarmati, Nikolaos Laoutaris, Rade Stanojevic, Konstantina Papagiannaki: When David helps Goliath: The Case for 3G OnLoading.]
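As a back-of-the-envelope illustration (not the authors' system), an onloading policy could be as simple as spilling onto 3G only the traffic the DSL line can't carry, and only while the monthly wireless quota has plenty of headroom:

    # Hypothetical sketch of an onloading policy: spill onto 3G only the DSL
    # shortfall, and only while the monthly data quota still has plenty of
    # headroom. The threshold is illustrative, not from the paper.

    def onload_rate_bps(demand_bps: float, dsl_capacity_bps: float,
                        quota_bytes: int, used_bytes: int,
                        headroom: float = 0.5) -> float:
        """How many bits/s to push over 3G during this scheduling interval."""
        if used_bytes >= headroom * quota_bytes:
            return 0.0                                  # back off once half the quota is gone
        return max(0.0, demand_bps - dsl_capacity_bps)  # onload only what DSL cannot carry

    # A 16 Mbit/s video demand over a 6 Mbit/s DSL line, with most of the quota unused:
    print(onload_rate_bps(16e6, 6e6, quota_bytes=2_000_000_000, used_bytes=150_000_000))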
Data Center Networks
Mosharaf Chowdhury's work addressed the fact that the many recent projects improving data center flow scheduling deal with just that: flows, each in isolation. Applications, on the other hand, introduce dependencies: a partition-aggregate workload, for example, may need all of its flows to finish, and a flow that finishes earlier than the rest is useless on its own. The goal of Coflow is to expose that information to the network to improve scheduling. One question asked was what the tradeoff is with the complexity of the API. [Mosharaf Chowdhury, Ion Stoica: Coflow: An Application Layer Abstraction for Cluster Networking]
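For a flavor of what exposing coflows to the network could look like, here is a hypothetical sketch; the class and method names are illustrative, not the paper's actual API.

    # Hypothetical sketch of an application-facing coflow interface. The method
    # names and semantics here are illustrative, not the paper's exact API.

    class Coflow:
        """Groups the flows of one application stage (e.g. a shuffle) so the
        network can schedule them collectively rather than one by one."""

        def __init__(self, pattern: str):
            self.pattern = pattern          # e.g. "shuffle", "aggregation", "broadcast"
            self.flows = []

        def add_flow(self, src: str, dst: str, size_bytes: int) -> None:
            self.flows.append((src, dst, size_bytes))

        def register(self) -> None:
            # Hand the whole group to the fabric scheduler; completion time of the
            # coflow (its slowest flow) is what the application actually cares about.
            print(f"registering {len(self.flows)} flows as one {self.pattern} coflow")

    shuffle = Coflow("shuffle")
    shuffle.add_flow("mapper-1", "reducer-1", 64 * 1024 * 1024)
    shuffle.add_flow("mapper-2", "reducer-1", 8 * 1024 * 1024)
    shuffle.register()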
Nathan Farrington presented a new approach to building hybrid data center networks, with both a traditional packet-switched network and a circuit-switched (e.g., optical) network. An optical switch provides much higher point-to-point bandwidth, but switching is slow, far too slow for packet-level switching. Prior work used hotspot scheduling, where the circuit switch is configured to help the elephant flows; but the performance of hotspot scheduling depends on the traffic matrix. Here, Nathan introduced Traffic Matrix Scheduling: the idea is to repeatedly iterate through a series of switch configurations (input-output assignments), such that the collection of all assignments fulfills the entire traffic matrix. Q: Once you reach 100% of traffic over optical, is there anything stopping you from eliminating the packet-switched network entirely? A: There is still latency on the order of 1 ms to complete one round of assignments, which is much higher than electrical data center network RTTs. Q: Where does the traffic matrix come from? Do you have to predict it, or wait until you've buffered some traffic? Either way, there's a tradeoff. [Nathan Farrington, George Porter, Yeshaiahu Fainman, George Papen, Amin Vahdat: Hunting Mice with Microsecond Circuit Switches]
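A toy version of the core idea, decomposing a demand matrix into a sequence of (circuit assignment, duration) pairs, might look like the greedy sketch below; it is only meant to illustrate the decomposition, not the paper's actual scheduling algorithm.

    # Illustrative sketch (not the paper's algorithm) of turning a traffic matrix
    # into a schedule of circuit-switch configurations: each step picks a set of
    # non-conflicting input->output circuits and holds it long enough to drain
    # the smallest demand it serves.

    def schedule(tm):
        """tm[i][j] = demand from input i to output j. Returns a list of
        (assignment, duration) pairs whose sum covers the whole matrix."""
        n = len(tm)
        tm = [row[:] for row in tm]                 # work on a copy
        steps = []
        while any(tm[i][j] > 0 for i in range(n) for j in range(n)):
            used_in, used_out, assignment = set(), set(), {}
            # Greedily pick the largest remaining demands that don't conflict.
            for i, j in sorted(((i, j) for i in range(n) for j in range(n)),
                               key=lambda ij: -tm[ij[0]][ij[1]]):
                if tm[i][j] > 0 and i not in used_in and j not in used_out:
                    assignment[i] = j
                    used_in.add(i); used_out.add(j)
            duration = min(tm[i][j] for i, j in assignment.items())
            for i, j in assignment.items():
                tm[i][j] -= duration
            steps.append((assignment, duration))
        return steps

    demo = [[8, 2, 0], [0, 6, 3], [4, 0, 5]]
    for assignment, duration in schedule(demo):
        print(assignment, duration)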
Mohammad Alizadeh took another look at finishing flows quickly in data centers. There are a number of recent protocols for this, but they are relatively complex. This design is beautifully simple: each packet carries a priority, and routers simply forward high-priority packets first. Queues can be extremely small, since the dropped packets are likely low priority anyway. End hosts set each packet's priority based on flow size and perform a very simple form of congestion-collapse avoidance. Performance is very good, though some work remains for elephant flows in high-utilization regimes. [Mohammad Alizadeh, Shuang Yang, Sachin Katti, Nick McKeown, Balaji Prabhakar, Scott Shenker: Deconstructing Datacenter Packet Transport]
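Here is a minimal sketch of that forwarding behavior, assuming priority is simply the remaining flow size stamped by the end host; the buffer size and drop rule are my illustration, not the paper's implementation.

    # Minimal sketch of priority-based forwarding: the end host stamps each packet
    # with a priority (lower value = more urgent, e.g. remaining flow size), and
    # the switch port keeps a tiny buffer, always sending the most urgent packet
    # and dropping the least urgent one on overflow. Not the paper's code.

    import heapq

    class PrioritySwitchPort:
        def __init__(self, buffer_pkts: int = 16):
            self.capacity = buffer_pkts
            self.queue = []                       # min-heap keyed on priority value

        def enqueue(self, priority: int, pkt: str) -> None:
            heapq.heappush(self.queue, (priority, pkt))
            if len(self.queue) > self.capacity:
                # Drop the packet with the largest priority value (least urgent);
                # it most likely belongs to a long flow anyway.
                self.queue.remove(max(self.queue))
                heapq.heapify(self.queue)

        def dequeue(self) -> str:
            return heapq.heappop(self.queue)[1]   # most urgent packet first

    port = PrioritySwitchPort(buffer_pkts=2)
    port.enqueue(1000, "elephant pkt")
    port.enqueue(10, "mouse pkt")
    port.enqueue(5, "another mouse pkt")          # the elephant packet gets dropped
    print(port.dequeue())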
Lunch!
Routing and Forwarding
Gábor Rétvári tackled a compelling question: How much information is actually contained in a forwarding table? Can we compress the FIB down to a smaller size, making router hardware simpler and longer-lasting? Turns out, there's not so much information in a FIB: with some new techniques, a realistic DFZ FIB compresses down to 60-400 Kbytes, or 2-6 bits per prefix! A 4 million prefix FIB can fit in just 2.1 Mbyte of memory. Now the interesting thing is that this compression can support reasonably fast lookup directly on the compressed FIB, at least asymptotically speaking, based on an interesting new line of theory research on string self-indexing. One problem: They really need more realistic FIBs. The problem is that widely-available looking glass servers obscure the next-hops, which affect compression. "We are inclined to commit crimes to get your FIBs." Before they turn to a life of crime, why not send them FIBs? They have a demo! Question for the future: Can we use compressed forwarding tables at line speed? [Gábor Rétvári, Zoltán Csernátony, Attila Körösi, János Tapolcai, András Császár, Gábor Enyedi, Gergely Pongrácz: Compressing IP Forwarding Tables for Fun and Profit]
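A quick sanity check of those numbers:

    # Sanity check of the figures above: 4 million prefixes in 2.1 MB of memory.
    prefixes = 4_000_000
    fib_bytes = 2.1e6
    print(fib_bytes * 8 / prefixes)   # ~4.2 bits per prefix, inside the quoted 2-6 bit range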
Nikola Gvozdiev wins the award for best visualizations, with some nice animations of update propagation among iBGP routers. Their work develops the algorithms and systems necessary to propagate state changes in iBGP without causing any transient black holes or forwarding loops. [Nikola Gvozdiev, Brad Karp, Mark Handley: LOUP: Who's Afraid of the Big Bad Loop?]
Vasileios Kotronis's work takes SDN-based routing a step further: Don't just centralize within a domain, outsource your routing control to a contractor! One cool thing here, besides reduced management costs, is that you can go beyond what an individual domain can otherwise do — for example, the contractor has interdomain visibility and can perform cross-domain optimization, debug policy conflicts, etc. [Vasileios Kotronis, Bernhard Ager, Xenofontas Dimitropoulos: Outsourcing The Routing Control Logic: Better Internet Routing Based on SDN Principles]
User Behavior and Experience
Rade Stanojevic presented results from a large data set of mobile service plans (roughly a billion each of calls, SMS/MMS messages, and data sessions). The question: Are economic models of how users select bandwidth and service plans realistic? What choices do real people make? In fact, only 20% of customers choose the optimal tariff; the mean overpayment is 37% and the median 26%. Another interesting result: use of service peaks immediately after purchase and then decays steadily over at least a month, even with unlimited service (so it's not just that people get conservative as they near their service limits). Several questions: Do these results really demonstrate irrationality? Users may buy more service than they need so that they don't have to worry about (and pay) comparatively pricey overage fees. Comment from an audience member: one has to imagine the marketing department of Telefonica has that exact same CDF of "irrationality" as their metric of success. [Rade Stanojevic, Vijay Erramilli, Konstantina Papagiannaki: Understanding Rationality: Cognitive Bias in Network Services]
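To make the overpayment metric concrete, here is a toy calculation with made-up tariffs: compare what a user paid on their chosen plan against the cheapest plan that would have covered the same usage.

    # Toy illustration of the "overpayment" metric. The plans, prices, and usage
    # numbers are made up; only the calculation mirrors the idea in the talk.

    def monthly_cost(plan, minutes_used):
        base, included_min, overage_per_min = plan
        return base + max(0, minutes_used - included_min) * overage_per_min

    plans = {
        "small":  (10.0, 100, 0.20),
        "medium": (20.0, 300, 0.15),
        "large":  (35.0, 1000, 0.10),
    }

    chosen, minutes = "large", 250
    paid = monthly_cost(plans[chosen], minutes)
    best = min(monthly_cost(p, minutes) for p in plans.values())
    print(f"paid {paid:.2f}, optimal {best:.2f}, overpayment {(paid - best) / best:.0%}")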
Athula Balachandran presented a study working towards a quantitative metric to score user experience of video delivery (in particular, how long users end up watching the video). The problem here is that predicting user experience based on quantitative observables is hard: it's a complex function of initial startup delay, how often the player buffers, buffering time, bit rate, the type of video, and more. The paper analyzes how well user experience can be predicted using several techniques, based on data from Conviva. [Athula Balachandran, Vyas Sekar, Aditya Akella, Srinivasan Seshan, Ion Stoica, Hui Zhang: A Quest for an Internet Video Quality-of-Experience Metric]
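As a sketch of the prediction setup, one would learn a mapping from quality observables to engagement; the feature names, data points, and the choice of a decision-tree regressor below are illustrative stand-ins, not the paper's models or data.

    # Sketch of QoE prediction: fit a model from quality observables to the
    # fraction of the video actually watched. All values here are made up.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    # columns: join_time_s, buffering_ratio, avg_bitrate_kbps
    X = np.array([[1.0, 0.00, 3000],
                  [4.0, 0.05, 1500],
                  [9.0, 0.20,  700],
                  [2.0, 0.01, 2500]])
    y = np.array([0.9, 0.6, 0.1, 0.8])     # fraction of the video watched

    model = DecisionTreeRegressor(max_depth=2).fit(X, y)
    print(model.predict([[3.0, 0.02, 2000]]))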
Vijay Erramilli presented a measurement study of how web sites act on the information they know about you. In particular, do sites use price discrimination based on information they collect about your browsing behavior? Starting with clean machines and having them visit sites according to certain high- or low-value browsing profiles, they could subsequently measure how a set of search engines and shopping sites present results and prices to those different user profiles. They uncovered evidence of differences in search results, and some price differences on aggregators, such as a mean 15% difference in hotel prices on Cheaptickets. Interestingly, there were also significant price differences based on the client's physical location. Q from Saikat Guha: How can you differentiate the vendor's intentional discrimination from unintentional discrimination? For example, in ad listings, having browsed a certain site can cause a Rolex ad to display, which bumps off an ad for a lower-priced product. [Jakub Mikians, László Gyarmati, Vijay Erramilli, Nikolaos Laoutaris: Detecting Price and Search Discrimination in the Internet]
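The measurement idea can be sketched as: train otherwise-identical browser personas on different browsing profiles, fetch the same queries, and compare the quoted prices. The personas and numbers below are made up for illustration.

    # Illustrative comparison of prices observed by different browsing personas
    # for the same set of queries. Personas and prices are invented.

    from statistics import mean

    quotes = {   # persona -> prices observed for the same hotel queries
        "affluent_profile": [210.0, 180.0, 95.0],
        "budget_profile":   [185.0, 160.0, 95.0],
    }

    baseline = mean(quotes["budget_profile"])
    for persona, prices in quotes.items():
        diff = (mean(prices) - baseline) / baseline
        print(f"{persona}: mean price differs from baseline by {diff:+.1%}")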
That's it! See you all next year...