(OK, not quite live...)
Session 5
Christopher Riederer spoke about auctioning your personal information. Unfortunately I missed the talk, but it must have been a good one since there was quite a bit of discussion.
Next up was Vincent Liu speaking about Tor Instead of IP, which is just what it sounds like: Tor as an Internet architecture. Of course you can't just use Tor as-is, and they have proposals for controlling incoming traffic, handling DoS, and getting better efficiency (lower stretch) given enough diversity of jurisdiction and plausible routing-policy compliance. Similar to what Telex does with network-layer steganography, the general approach here is to make Internet connectivity an all-or-nothing proposition: if you can get anywhere outside a censored region, you can get everywhere, so all the censor can do is block the entire Internet.
Last was Gurney et al.'s Having your Cake and Eating it too: Routing Security with Privacy Protections. The notion of security here is that the neighbors of an AS can verify that the AS in question selected and advertised routes according to its agreed-upon policy. The privacy is that they can verify this without revealing any more information than the verification result itself. The paper presents protocols to verify several particular policies (e.g., if the AS was offered some route, then it advertised one). This could be useful for debugging interdomain connectivity issues.
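To make the flavor of such a policy concrete, here is a minimal sketch of the kind of predicate being verified. The data shapes and function name are invented, and the check runs in the clear; the point of the paper's protocols is to verify predicates like this without revealing the offered routes.

```python
# Illustration only: the *kind* of export-policy predicate a neighbor might
# want verified. The paper's protocols check such predicates without
# revealing the inputs; here everything is in the clear and invented.

def satisfies_export_policy(offered, advertised):
    """Example policy from the writeup: if the AS was offered some route
    to a prefix, then it advertised a route for that prefix."""
    return all(advertised.get(prefix)
               for prefix, routes in offered.items() if routes)

# e.g. satisfies_export_policy({"10.0.0.0/8": ["via AS2"]},
#                              {"10.0.0.0/8": "via AS2"})   # -> True
```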
Session 6
Steven Hong presented Picasso, which provides a nice abstraction hiding the complexity of full-duplex signal shaping to utilize discontiguous spectrum fragments.
Mohammad Khojastepour spoke about using antenna cancellation to improve full-duplex wireless.
Souvik Sen asked: can physical-layer frequency response information improve WiFi localization? Yes. And it involves driving Roombas around cafeterias.
Session 7
Poor old TCP felt snubbed at this workshop until Keith Winstein started roasting it. TCP is designed to work well in steady state under very particular assumptions. It fails under messy real-world conditions: hugely varying RTTs, stochastic rather than congestion-induced loss, very dynamic environments, and so on. Keith's goal is to make transmission control more robust and efficient by modeling network uncertainty. The end host has a model of the network (potentially including topology, senders, queues, etc.), but any of these elements can have unknown parameters describing their behavior. It maintains a probability distribution over possible parameter values and updates its beliefs as evidence comes in from the network (e.g., ACKs). At any given time, it takes whatever action maximizes expected utility (some throughput/fairness/latency metric) given the current distribution over possible situations; the action is the delay until sending the next packet. It's a beautifully intuitive idea, or as a certain reviewer put it,
"This is the craziest idea I've heard in a very long time."
Keith showed a simulation of one small example where this approach decides to use slow-start-like behavior, without that being hard-coded in, but then, after getting a little information, immediately sends at the "right" rate. There are big challenges, though. Discussion touched on state explosion in the model, the danger of overfitting with an overly complicated model, and how much the sender needs to know about the network.
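For concreteness, here is a minimal sketch of the belief-update-and-decide loop described above. This is not Keith's system: the hypothesis space, the likelihood, and the utility function are all invented for illustration.

```python
# Sketch only: maintain beliefs over guessed network parameters, update them
# on evidence (an ACK or a loss), and pick the inter-packet delay that
# maximizes expected utility under the current beliefs.

def normalize(hyps):
    """Rescale beliefs so they sum to 1."""
    total = sum(h["belief"] for h in hyps)
    for h in hyps:
        h["belief"] /= total

def update_beliefs(hyps, acked):
    """Bayesian update: weight each hypothesis by how well it explains the
    latest evidence (here, just whether the last packet was ACKed)."""
    for h in hyps:
        likelihood = (1 - h["loss"]) if acked else h["loss"]
        h["belief"] *= max(likelihood, 1e-9)
    normalize(hyps)

def expected_utility(hyps, delay):
    """Toy utility of waiting `delay` seconds before the next packet:
    reward throughput, penalize expected loss."""
    u = 0.0
    for h in hyps:
        congesting = delay < 1.0 / h["rate"]      # sending faster than guessed capacity
        loss = min(1.0, h["loss"] + (0.5 if congesting else 0.0))
        u += h["belief"] * (1.0 / delay) * (1 - loss)
    return u

def choose_delay(hyps, candidates=(0.001, 0.01, 0.1, 1.0)):
    """Act to maximize expected utility under the current beliefs."""
    return max(candidates, key=lambda d: expected_utility(hyps, d))

# Hypotheses: guessed (bottleneck rate in pkts/s, stochastic loss prob) pairs.
hypotheses = [{"rate": r, "loss": p, "belief": 1.0}
              for r in (10, 100, 1000) for p in (0.0, 0.01, 0.1)]
normalize(hypotheses)

# One step of the sender's loop: observe, update beliefs, pick the next delay.
update_beliefs(hypotheses, acked=True)
delay = choose_delay(hypotheses)
```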
Q: What would be the first variable you'd add to the network model beyond what TCP captures? A: Stochastic loss.
Q (Barath): Should we add a control plane to disseminate information about the network to assist the model? A: Anything that gets more information is good.
Q (Ion): When won't this work? A: Model mismatch — if the truth is something the model didn't consider.
Q: Do the smart senders coexist? If you have two versions of the algorithm, do they coexist? A: Good question.
Next, a movie. The BitMate paper, "Bittorrent for the Less Privileged," by Umair Waheed and Umar Saif, was one of the few entirely non-US papers. Unfortunately visa issues prevented their attendance, but Umar presented via a movie. The problem is that high-bandwidth BitTorrent nodes form mutually helpful clusters, but no one wants to upload to low-bandwidth nodes. The solution included several interesting mechanisms, but Umar said the one that got 80% of the benefit was something they call "Realistic Optimistic Unchoke," which improves the unchoking of low-bandwidth peers. BitMate gets dramatic improvements in efficiency and fairness.
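As a rough illustration of the flavor of such a mechanism (the paper's actual algorithm may well differ; the peer fields, threshold, and bias below are invented), an optimistic-unchoke step biased toward low-bandwidth peers might look something like this:

```python
import random

# Invented sketch of a biased optimistic-unchoke step, not BitMate's algorithm.
LOW_BW_THRESHOLD_KBPS = 256   # hypothetical cutoff for "low-bandwidth" peers

def optimistic_unchoke(choked_peers, bias_toward_low_bw=0.8):
    """Pick one choked peer to optimistically unchoke, giving low-bandwidth
    peers a larger share of the optimistic slots than plain BitTorrent's
    uniform random choice would."""
    low = [p for p in choked_peers if p["upload_kbps"] < LOW_BW_THRESHOLD_KBPS]
    high = [p for p in choked_peers if p["upload_kbps"] >= LOW_BW_THRESHOLD_KBPS]
    if low and (not high or random.random() < bias_toward_low_bw):
        return random.choice(low)
    return random.choice(high) if high else None

# e.g. optimistic_unchoke([{"id": "a", "upload_kbps": 64},
#                          {"id": "b", "upload_kbps": 2000}])
```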
Umar took questions via Skype. It was so 21st century.
BitMate was written up in the NYTimes.
Q: What's your experience with real deployment? A: 40,000 users from 170 countries, with a lot from the US "for reasons that escape me" (~40% from North America). Many users are from Iran, probably to circumvent censorship.
Vyas Sekar told us that a particular real 80,000-user network with tens of sites had 900 routers and 636 middleboxes (firewalls, NIDS, media gateways, load balancers, proxies, VPN gateways, WAN optimizers, voice gateways). One problem with middleboxes: device sprawl. Another problem: dealing with many vendors (Bruce Davie, Cisco: that problem we can fix!). The result: high CapEx and high OpEx. (Network security alone cost $6B in 2010, projected to reach $10B by 2016.) Also, middleboxes today are inflexible and difficult to extend.
So most network innovation happens via middleboxes, but it doesn't come easily, and middleboxes have been missing from the research community's discussion of innovation in networks. The vision: enable innovation in middlebox deployments. The approach: (1) software-centric implementations, (2) a consolidated physical platform, (3) logically centralized, open management APIs. There are also opportunities for reduced costs (via multiplexing) and improved efficiency (via reuse of functions like TCP session reconstruction).
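As a toy illustration of that consolidation idea (not the authors' design; all function names and checks below are invented), several middlebox functions running on one software platform could reuse a shared step such as TCP session reconstruction:

```python
# Toy sketch: middlebox functions share one reconstruction of the flow
# instead of each re-implementing it; a central API could register functions.

def reconstruct_session(packets):
    """Shared preprocessing: reassemble a byte stream from packets
    (grossly simplified to concatenating payloads in sequence order)."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

def firewall_check(stream):
    return b"blocked-host" not in stream          # pretend firewall policy

def nids_check(stream):
    return b"attack-signature" not in stream      # pretend signature match

def consolidated_platform(packets, functions=(firewall_check, nids_check)):
    """Run every registered function over one shared reconstruction of the
    flow; a centralized management API could add or remove functions here."""
    stream = reconstruct_session(packets)         # done once, reused by all
    return all(f(stream) for f in functions)
```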
Session 8
This was the data center session with three more cool papers, but unfortunately I had to leave early.
Overall I was impressed with the ideas and engaging presentations. Thanks to the chairs Aditya and Ion for a very well-run workshop.