I was going to do the lightning talks separately, but
they all went so quickly (like lightning!) the notes
rather ran together--apologies, but at least I put some
===== separators in to make it somewhat clearer. :)
Matt
(again apologies for typos, and now off to lunch!)
2008.02.18 Lightning talk #1
Laird Popkin, Pando Networks
Doug Pasko, Verizon
P4P: ISPs and P2P
DCIA, Distributed Computing Industry
Association,
P2P and ISPs
P2P market is maturing
digital content delivery is where things are heading;
content people are excited about P2P as a disruptive
way to distribute content.
BBC doing production-quality P2P traffic;
we're rapidly seeing huge changes, and production
people are seeing good HD rollout.
Nascent P2P market pre 2007
Now, P2P has become a key part of the portfolio
for content delivery
P2P bandwidth usage
CacheLogic slide, a bit dated; with the explosion of
YouTube, the ratio is sliding back the other way,
but it's still high.
Bandwidth battle
ISPs address P2P
upgrade network
deploy p2p caching
terminate user
rate limit p2p traffic
P2P countermeasures
use random ports
Fundamental problem: our usual models for managing
traffic don't apply anymore. It's very dynamic, and
moves all over the place.
DCIA has P4P working group, goal is to get ISPs working
with the p2p community, to allow shared control of
the infrastructure.
Make tracking infrastructure smarter.
Partnership
Pando and Verizon, plus a bunch of other members.
There are companies in the core working group,
and many more observing.
Goal is to design a framework to allow ISPs and P2P
networks to guide connectivity to optimize traffic
flows, provide better performance, and reduce network
impact.
P2P alone doesn't understand topology, and has no
idea of cost models and peering relationships.
So, goal is to blend business requirements together
with network topology.
Reduce hop count, for example.
Want an industry solution to arrive before
regulatory pressure comes into play.
Drive the solution to be carrier grade, rather
than ad-hoc.
P2P applications with P4P benefits
better performance, faster downloads
less impact on ISPs results in fewer restrictions
P4P enables more efficient delivery.
CDN model (central pushes, managed locations)
P2P, more chaotic, no central locations,
P2P+P4P, knowledge of ISP infrastructure, can
form adjacencies among local clients as much
as possible.
Traditional-looking network management, but pushed
down to the network layer.
P4P goals
share topology in a flexible, controlled way:
a sanitized, generalized, summarized set of information,
with privacy protections in place; no customer or user
information goes out, and no security concerns.
Needs to be flexible to be usable across many P2P
applications and architectures (trackers, trackerless)
Needs to be easy to implement, want it to be an open
standard; any ISP/P2P can implement it.
P4P architecture slide
P2P clients talk to the Ptracker to figure out who to
talk to; the Ptracker talks to the Itracker to get
guidance on which peers should connect to each other,
so peers get told to connect to nearby peers.
It's a joint optimization problem; minimize utilization
by P2P, while maximizing download performance.
At the end of this, the goal is for the customer to
have a better experience; the customer gets to be happier.
Data exchanged in P4P: network maps go into the
Itracker, which provides a weight matrix between
locations without giving the topology away.
Each PID has an IP prefix associated with it in the
matrix, with percentage weightings of how heavily
peers in one POP should connect to others.
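A rough sketch of how the PID/weight-matrix lookup described above could drive peer selection. All PID names, prefixes, and weights here are invented for illustration; the actual Itracker data format is defined by the P4P working group, not by these notes.

```python
# Hypothetical P4P-style peer selection: map peers to PIDs by prefix,
# then bias candidate selection by the requester's weight-matrix row.
import ipaddress
import random

# Each PID (roughly, a POP) is identified by the IP prefixes it covers.
PID_PREFIXES = {
    "pop-east": [ipaddress.ip_network("10.1.0.0/16")],
    "pop-west": [ipaddress.ip_network("10.2.0.0/16")],
    "pop-intl": [ipaddress.ip_network("10.3.0.0/16")],
}

# Weight matrix: how heavily peers in one PID should connect to others
# (rows sum to 100; intra-PID connections are strongly preferred).
WEIGHTS = {
    "pop-east": {"pop-east": 80, "pop-west": 15, "pop-intl": 5},
    "pop-west": {"pop-west": 80, "pop-east": 15, "pop-intl": 5},
    "pop-intl": {"pop-intl": 70, "pop-east": 15, "pop-west": 15},
}

def pid_of(ip):
    """Map a peer's IP to its PID by prefix membership."""
    addr = ipaddress.ip_address(ip)
    for pid, prefixes in PID_PREFIXES.items():
        if any(addr in p for p in prefixes):
            return pid
    return None

def select_peers(requester_ip, candidates, n):
    """Pick n candidates, weighted by the requester's matrix row."""
    row = WEIGHTS[pid_of(requester_ip)]
    weights = [row.get(pid_of(c), 0) for c in candidates]
    return random.choices(candidates, weights=weights, k=n)
```

The point of doing the mapping this way is that it is a pure in-memory operation on the tracker side, which matches the talk's note about handling thousands of mappings per second.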
Ran simulations on Verizon and Telefonica networks.
At zero cost to the ISPs, the Yale modelling
shows a huge reduction in hop counts, cutting down
long haul drastically. Maps to direct dollar
savings.
Results also good for P2P, shorter download times,
with 50% to 80% increases in download speeds
and reductions in download time.
This isn't even using caching yet.
P4PWG is free to join
monthly calls
mailing list
field test underway
mission is to improve
Marty Lafferty (marty(a)dcia.org)
Laird (laird(a)pando.com)
Doug (doug.pasko(a)verizon.com)
Q: on the interface / mathematical model: why not have
a model where you ask the ISP about a given prefix,
and get back a weighting?
A: the communication volume between Ptracker and
Itracker was too large for that to work well; it
needed chatter for every client that connected. The
map was moved down into the Ptracker so it can do the
mapping as a fast in-memory operation, even in the
face of thousands of mappings per second.
The architecture here is one proof of concept test;
if there's better designs, please step forward and
talk to the group; goal is to validate the basic idea
that localizing traffic reduces traffic and improves
performance.
They're proving out the concept first.
Danny McPherson: when you do optimization, you will
end up with higher peak rates within the LAN or within
the POP; P2P isn't a big part of intradomain traffic,
as opposed to localized traffic, where it's 80-90%
of the traffic.
What verizon has seen is that huge amounts of P2P
traffic is crossing peering links.
Q: what about the Net Neutrality side, and what might
they be contributing in terms of clue factor to that
issue?
It's definitely getting attention; but if they can
stem the vertical growth line and make it more
reasonable, it should help carriers manage their
growth pressures better.
Are they providing technical contributions to the
FCC, etc.? DCIA is sending papers to the FCC, and
is trying to make sure that voices are being heard
on related issues as well.
Q: Bill Norton: do the P2P protocols try to infer any
topological data via ping tests, hop counts, etc.?
A: some do try; others use random peer connections;
others try to reverse-engineer the network via
traceroutes. One attempts to use cross-ISP links as
much as possible, and avoids internal ISP connections
as much as possible.
P4P is an addition to existing P2P networks; this
information can be used by the P2P network toward
whatever goal it determines.
Q: is there any motivation issue for the last-mile
ISP, in that this makes them look much less
attractive? It seems to actually just shift the
balance without altering the actual traffic volume;
it makes traffic more localized, without reducing or
increasing the overall level.
Q: how are they figuring on distributing this
information from the Itracker to the Ptracker? Will
it be via a BGP feed?
If there's a central tracker, the central tracker
will get the map information; for distributed P2P
networks, there's no good answer yet; each peer could
ask the Itracker for guidance, but that would put
heavy load on the Itracker.
If everyone participates, it'll be like a global,
offline IGP with anonymized data; it's definitely
a challenge, but it's information sharing with a
benefit.
Jeff--what stops someone from getting onto a tracker
box, and maybe changing the mapping to shift all traffic
against one client, to DoS them?
This is aimed as guidance; it isn't meant to be an
absolute override. The application will still have
some intelligence built in.
Goal will be to try to secure the data exchange and
updates to some degree.
==================================================================
2008.02.18 Cable break impacts in Mideast
Alin Popescu, Renesys
Cable breaks
1/30/2008, Mediterranean/Gulf area, many cables
damaged
FLAG, SEA-ME-WE 4, FLAG FALCON;
some severed, others had power issues
6856 networks from 23 countries affected
BGP updates from 250 peering sessions with
170 unique ASes
30 Jan to 6 Feb
analysis limited to outages only
ignored countries with fewer than 5 prefixes
(Yemen, Oman, etc.)
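A toy sketch of the per-country outage tally the analysis above describes: given which prefixes saw BGP withdrawals in the window, compute the fraction of each country's prefixes with an outage, skipping countries below a prefix threshold (the talk used fewer than 5). All prefixes and counts here are invented for illustration, not Renesys data.

```python
# Hypothetical outage tally in the style of the Renesys analysis.
# prefixes_by_country and withdrawn are invented demo data.

prefixes_by_country = {
    "EG": ["41.0.0.0/16", "41.1.0.0/16", "41.2.0.0/16"],
    "IN": ["59.0.0.0/16", "59.1.0.0/16"],
}
withdrawn = {"41.0.0.0/16", "41.1.0.0/16", "59.0.0.0/16"}

def outage_stats(prefixes_by_country, withdrawn, min_prefixes=2):
    """Percent of each country's prefixes with an outage; countries
    with fewer than min_prefixes prefixes are ignored, as in the talk
    (which used a threshold of 5)."""
    stats = {}
    for cc, prefixes in prefixes_by_country.items():
        if len(prefixes) < min_prefixes:
            continue
        out = sum(1 for p in prefixes if p in withdrawn)
        stats[cc] = 100.0 * out / len(prefixes)
    return stats
```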
UAE cut, Egypt cuts on map, countries with
outages highlighted.
Cable break at 0438 UTC, but several events;
then later had outages at 0730
Feb 2nd, Middle East: FLAG FALCON cable in the
Persian Gulf.
outages per country vs percentage; India had the most
prefixes, but a small percentage affected.
1456 out of 1502 prefixes (~97%) had some outage.
Telecom Egypt got several new providers
FLAG was significantly impacted
A lot of mid-level networks dropped some set of
upstream connectivity, and started routing through
Telecom Egypt.
Prefixes over time shown, with edges involving
TEgypt; shows the time series with the other big
Egyptian ASes as well; they drop big global networks
in favour of TEgypt.
Internet is still vulnerable to disasters
cost vs reliability
geography plays an important role
Atlantic breaks happen all the time, good redundancy
Taiwan Strait, Suez Canal: too much of a natural choke
point
Internet intelligence important to allow customers to
intelligently choose new pathways or better provider
diversity.
Same series of outage stats through feb 17th shows that
there are still some significant networks not yet
recovered, even three weeks later.
========================================================
2008.02.18 Lightning talk #3
configuration-only approach to shrinking FIBs
Prof. Paul Francis, Cornell
virtual aggregation
single ISP without coordination can reduce size of
FIB with config tricks on routers.
Reduces FIB size on router, doesn't reduce RIB size
on route reflectors.
Status
tested a couple of versions of VA by configuring on
Linux and Cisco routers
simple, static, small-scale experiments (~10 routers)
cisco 7301 and 12000
modeled using data from a large ISP
router topology and traffic matrix
have not tested on a live network
have not tested dynamics
have not tested at large scale
Cornell owns some IPR
francis@
Goal of this talk
looking for some gurus to work with him on this.
Would like to come up with a 'best practices'
model.
Goal is to partition the DFZ table among
existing FIBs
take address space; partition it into
virtual prefixes; could be /7's, each ISP
can size them as it sees fit.
Assign each VA to a given router, run it as
a virtual VPN
each router then knows
routes to all sub-prefixes within its virtual
network
routes to all other virtual networks.
So routers follow less specific virtual aggregates
to a router with more specific, which then tunnels
out to the destination router.
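The partitioned-FIB lookup described above can be sketched roughly as follows. The /8 virtual-prefix granularity, router names, and routes are all invented for illustration (the talk mentions e.g. /7s, sized as each ISP sees fit); this is a model of the idea, not the actual router configuration.

```python
# Hypothetical virtual-aggregation FIB: full sub-prefix routes only
# for virtual prefixes this router aggregates, one tunnel entry for
# every other virtual prefix.
import ipaddress

# Virtual prefixes this router is the aggregation point for.
MY_VPS = [ipaddress.ip_network("20.0.0.0/8")]

# Real DFZ sub-prefixes under our own virtual prefixes.
SUB_PREFIXES = {
    ipaddress.ip_network("20.1.0.0/16"): "nexthop-A",
    ipaddress.ip_network("20.2.4.0/24"): "nexthop-B",
}

# For every foreign virtual prefix: a single tunnel entry to the
# router that aggregates it.
VP_TUNNELS = {
    ipaddress.ip_network("30.0.0.0/8"): "tunnel-to-router-X",
    ipaddress.ip_network("40.0.0.0/8"): "tunnel-to-router-Y",
}

def lookup(ip):
    addr = ipaddress.ip_address(ip)
    if any(addr in vp for vp in MY_VPS):
        # Our own VP: longest-prefix match over the real sub-prefixes.
        matches = [p for p in SUB_PREFIXES if addr in p]
        if matches:
            best = max(matches, key=lambda p: p.prefixlen)
            return SUB_PREFIXES[best]
        return "default"
    for vp, tunnel in VP_TUNNELS.items():
        if addr in vp:
            return tunnel  # one FIB entry covers the whole foreign VP
    return "default"
```

The FIB saving is visible in the structure: this router carries every sub-prefix only for its own virtual prefixes, and a constant one entry per foreign virtual prefix.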
Path length can increase
not so bad if there's a router covering each virtual
prefix locally.
border router issue
problem is that border routers need full routing tables
to peer with non-participating neighbor ISPs.
Routers are both L2 and L3 devices;
set up route reflectors; RR peers with neighbor, does
reflection to appropriate routers internally.
use BGP next-hop to get routes to right place.
Uses layer 2 to tunnel routes through border routers.
Increasing latency, increasing router load; exploits
the fact that there's a power law: 90% of traffic goes
to 10% of destinations, so put those 10% in your native
tables.
Requires tracking what your top 10% destinations are
over time.
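The popularity-tracking idea above amounts to a simple greedy selection: sort prefixes by traffic and keep the smallest set covering most of the bytes in the native FIB. The traffic counts below are invented for illustration.

```python
# Sketch: pick the "hot" prefixes to install natively, tunneling the
# long tail through virtual aggregates. Demo counts are invented.

def hot_prefixes(traffic_bytes, coverage=0.90):
    """Smallest set of prefixes covering `coverage` of total traffic,
    greedily chosen from the most popular down."""
    total = sum(traffic_bytes.values())
    chosen, covered = [], 0
    for prefix, count in sorted(traffic_bytes.items(),
                                key=lambda kv: kv[1], reverse=True):
        if covered >= coverage * total:
            break
        chosen.append(prefix)
        covered += count
    return chosen

stats = {"198.51.100.0/24": 900, "203.0.113.0/24": 60,
         "192.0.2.0/24": 30, "198.18.0.0/15": 10}
# 900/1000 = 90%: the single hottest prefix already reaches coverage
print(hot_prefixes(stats))  # → ['198.51.100.0/24']
```

With a power-law traffic distribution, the chosen set stays small relative to the full table, which is the whole premise of keeping only the top destinations native.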
About 15 years out is when you finally start to run
into the challenges with this system, based on table
growth.
=========================================================
2008.02.18 Lightning talk #4
Alain Durand, Comcast
2nd DHCPv6 bake-off
vancouver, BC, Dec 2007
Was in very bad shape last year; no off
the shelf products.
8 vendors
14 implementations
6 DHCPv6 servers
5 DHCPv6 clients
3 DHCPv6 relays
15 participants
2.5 days of testing right after Vancouver IETF
findings
most implementations are now mature and ready for
production
both commercial and open source solutions available
DHCPv6 clients are slowly getting integrated into
operating systems
Next steps
3rd DHCPv6 bake-off, March 5-7th 2008 in
Philly before IETF 71
DHCPv6 address assignment available on the v6-only
WLAN at IETF71
Working with vendors to get DHCPv6 clients integrated
in computer operating systems and home gateways.
They've gone a long way in the past few years; code
is getting much more solid, and finally ready for
rollout.
Mike, UCB: client and server are working well; relay
agents are the weak spots; some of this is still hard
to scale.
A: yes, relay agents caused many issues during the
bake-off, but there is still much progress happening.
Plan is to meet back here at 2pm, you're on your
own for an hour. Start back at 2pm with next
talk.
There's a sheet floating around with places to have
lunch near to the hotel. Thanks, see you in an hour.