from the academic side of the house
For the first set of IPv6 records, a team from the University of Tokyo, WIDE Project, NTT Communications, JGN2, SURFnet, CANARIE, Pacific Northwest Gigapop and other institutions collaborated to create a network path more than 30,000 kilometers long, crossing six international networks and covering over three quarters of the circumference of the Earth. Over this path, the team successfully transferred data in the single- and multi-stream categories at a rate of 7.67 Gbps, which is equal to 230,100 terabit-meters per second (Tb-m/s). This record-setting attempt used standard TCP to achieve the new mark.

The next day, the team used a modified version of TCP to set an even greater record. Over the same 30,000 km path, the network achieved a throughput of 9.08 Gbps, equal to 272,400 Tb-m/s, in both the IPv6 single- and multi-stream categories. In doing so, the team surpassed the current IPv4 records, showing that IPv6 networks can deliver the same performance as IPv4, if not better.

--bill
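As a quick sanity check of the distance-bandwidth arithmetic, here is a minimal sketch (the only inputs are the claimed 30,000 km path length and the quoted throughputs; the metric is simply bits per second multiplied by meters):

```python
# Verify the quoted throughput-times-distance figures.
# 1 Tb-m/s = 10^12 bit-meters per second.

PATH_M = 30_000 * 1_000  # 30,000 km in meters

for label, gbps in [("standard TCP", 7.67), ("modified TCP", 9.08)]:
    tb_m_per_s = gbps * 1e9 * PATH_M / 1e12
    print(f"{label}: {gbps} Gbps over 30,000 km = {tb_m_per_s:,.0f} Tb-m/s")

# standard TCP: 7.67 Gbps over 30,000 km = 230,100 Tb-m/s
# modified TCP: 9.08 Gbps over 30,000 km = 272,400 Tb-m/s
```

Both quoted figures check out.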
On Tue, Apr 24, 2007, bmanning@karoshi.com wrote:
> The next day, the team used a modified version of TCP to set an even greater record. [...] In doing so, the team surpassed the current IPv4 records, showing that IPv6 networks can deliver the same performance as IPv4, if not better.
As one of the poor bastards still involved in rolling out VoIP over satellite-delivered IP at the moment, I can safely say I'm (currently) happy no one's trying to push H.323 over IPv6 over these small-sized satellite links. Lord knows we have enough trouble getting concurrent calls through 20 + 20 + byte overheads when the voice payload's -20- bytes.

(That said, I'd be so much happier if the current trend 'ere wasn't to -avoid- delivering serial ports for the satellite service so we can run VoFR or PPP w/header compression - instead being presented IP connectivity only at either end, but you can't have everything..)

Adrian
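To make the overhead problem concrete, here is a rough sketch of the per-call bandwidth cost of the RTP/UDP/IP stack over IPv4 versus IPv6. The numbers are my own illustrative assumptions (a 20-byte voice payload every 20 ms, as with G.729, and standard header sizes), not figures from the thread:

```python
# Per-call VoIP bandwidth, assuming a 20-byte payload at 20 ms
# packetization (50 packets/s) and standard header sizes:
# RTP 12 bytes, UDP 8 bytes, IPv4 20 bytes, IPv6 40 bytes.
# Link-layer framing is ignored.

PAYLOAD, PPS = 20, 50
RTP, UDP = 12, 8

for name, ip_hdr in [("IPv4", 20), ("IPv6", 40)]:
    packet = PAYLOAD + RTP + UDP + ip_hdr
    kbps = packet * 8 * PPS / 1000
    print(f"{name}: {packet}-byte packets, {kbps:.0f} kbps per call, "
          f"{(packet - PAYLOAD) / packet:.0%} of each packet is header")

# IPv4: 60-byte packets, 24 kbps per call, 67% of each packet is header
# IPv6: 80-byte packets, 32 kbps per call, 75% of each packet is header
```

On a narrow satellite channel, moving from IPv4 to IPv6 without header compression costs roughly a third more bandwidth per call.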
Adrian Chadd wrote:
> (That said, I'd be so much happier if the current trend 'ere wasn't to -avoid- delivering serial ports for the satellite service so we can run VoFR or PPP w/header compression - instead being presented IP connectivity only at either end, but you can't have everything..)
Does anybody have v6 header suppression/compression working yet? When I was doing VoIP over VSAT, people kept trying to give me modems with Ethernet on them, which is no good for doing any header compression.

-- Leigh
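For what it's worth, ROHC (RObust Header Compression, RFC 3095) defines RTP/UDP/IP profiles that cover IPv6 as well as IPv4; whether any given satellite modem implements it is another matter. A back-of-the-envelope sketch of what compression would buy on a narrow link (the 64 kbps channel and the roughly 3-byte steady-state compressed header are my illustrative assumptions):

```python
# Concurrent voice calls on a narrow link, with and without header
# compression. Assumes 20-byte payloads at 50 packets/s per call,
# a 64 kbps channel, and ~3-byte ROHC steady-state headers
# (illustrative figures, not measurements).

LINK_BPS = 64_000
PAYLOAD, PPS = 20, 50

def max_calls(header_bytes):
    per_call_bps = (PAYLOAD + header_bytes) * 8 * PPS
    return LINK_BPS // per_call_bps

print("IPv6/UDP/RTP, uncompressed (60-byte headers):", max_calls(60))  # 2
print("ROHC steady state (~3-byte headers):", max_calls(3))            # 6
```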
bmanning@karoshi.com writes:
> [...] Using the same 30,000 km path, the network achieved a throughput of 9.08 Gbps, equal to 272,400 Tb-m/s, in both the IPv6 single- and multi-stream categories. In doing so, the team surpassed the current IPv4 records, showing that IPv6 networks can deliver the same performance as IPv4, if not better.
Good job. Two questions, though:

(1) Do the throughput figures count only the data payload (i.e., anything above the TCP layer), or all the bits from the protocol stack? If the latter, it seems a little unreasonable to credit IPv6 with its own extra overhead -- though I'll concede that with jumbo datagrams, that's not all that much.

(2) Getting this kind of throughput seems to depend on a fast physical layer, plus some link-layer help (jumbo packets), plus careful TCP tuning to deal with the large bandwidth-delay product. The IP layer sits between the second and third of those three items. Is there something about IPv6 vs. IPv4 that specifically improves performance on this kind of test? If so, what is it?

Jim Shankland
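To put a number on the bandwidth-delay product Jim mentions: on a 30,000 km path the in-flight data for a single stream is enormous. A rough calculation (assuming light propagates in fiber at about two thirds of c, and ignoring equipment and queueing delay):

```python
# Rough bandwidth-delay product for the record path.
# Assumes ~2/3 c propagation in fiber; equipment/queueing delay ignored.

C_KM_S = 299_792.458       # speed of light in vacuum, km/s
PATH_KM = 30_000
RATE_BPS = 9.08e9          # the modified-TCP record

rtt_s = 2 * PATH_KM / (C_KM_S * 2 / 3)
bdp_bytes = RATE_BPS * rtt_s / 8

print(f"RTT ~ {rtt_s * 1000:.0f} ms")                  # ~300 ms
print(f"BDP ~ {bdp_bytes / 2**20:.0f} MiB in flight")  # ~325 MiB
```

A single TCP connection needs a window of that order to fill the pipe, far beyond default socket buffer sizes, which is exactly why this kind of record requires careful TCP tuning.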
Jim Shankland wrote:
> (2) Getting this kind of throughput seems to depend on a fast physical layer, plus some link-layer help (jumbo packets), plus careful TCP tuning to deal with the large bandwidth-delay product. [...]
Also, it's a "modified" TCP, not just tuned. I wonder how modified it is? Will it talk to an unmodified TCP stack (whatever that really is)?

-- Leigh Porter
On Tue, 24 Apr 2007 09:24:13 -0700 Jim Shankland <nanog@shankland.org> wrote:
> [...] Is there something about IPv6 vs. IPv4 that specifically improves performance on this kind of test? If so, what is it?
I wonder if the routers forward v6 as fast.

--Steve Bellovin, http://www.cs.columbia.edu/~smb
Steven M Bellovin writes:
> I wonder if the routers forward v6 as fast.
In the 10 Gb/s space (sufficient for these records, and I'm not familiar with 40 Gb/s routers), many if not most of the current gear handles IPv6 routing lookups "in hardware", just like IPv4 (and MPLS). For example, the mid-range platform that we use in our backbone forwards 30 Mpps per forwarding engine, whether based on IPv4 addresses, IPv6 addresses, or MPLS labels. 30 Mpps at 1500-byte packets corresponds to 360 Gb/s. So, no sweat.

Routing table lookups(*) are what's most relevant here, because the other work in forwarding is identical between IPv4 and IPv6. Again, many platforms are able to do line-rate forwarding between 10 Gb/s ports.

-- Simon, AS559.

(*) ACLs (access control lists) are also important, but again, newer hardware can do fairly complex IPv6 ACLs at line rate.
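Simon's packets-per-second arithmetic is easy to verify, and it also shows why jumbo frames help: the lookup rate needed to fill a port drops with packet size (a quick sketch; the 9000-byte jumbo size is my illustrative assumption):

```python
# Forwarding-rate arithmetic: packets per second versus line rate.

# 30 Mpps of 1500-byte packets, as quoted above:
print(f"30 Mpps x 1500 B x 8 = {30e6 * 1500 * 8 / 1e9:.0f} Gb/s")  # 360

# Conversely, the lookup rate needed to fill a 10 Gb/s port:
for size in (1500, 9000):  # standard MTU vs. an assumed 9000-byte jumbo
    pps = 10e9 / (size * 8)
    print(f"10 Gb/s at {size}-byte packets ~ {pps / 1e6:.2f} Mpps")
# 1500 B -> 0.83 Mpps; 9000 B -> 0.14 Mpps
```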
On Tue, 24 Apr 2007, Jim Shankland wrote:
> (1) Do the throughput figures count only the data payload (i.e., anything above the TCP layer), or all the bits from the protocol stack? If the latter, it seems a little unreasonable to credit IPv6 with its own extra overhead -- though I'll concede that with jumbo datagrams, that's not all that much.
Data payload is counted as bytes transmitted and received by iperf. So application layer all the way.
> (2) Getting this kind of throughput seems to depend on a fast physical layer, plus some link-layer help (jumbo packets), plus careful TCP tuning to deal with the large bandwidth-delay product.
That last part has been researched for quite some time already, though mainly with "long" transatlantic layer 2 (Ethernet) paths.
> The IP layer sits between the second and third of those three items. Is there something about IPv6 vs. IPv4 that specifically improves performance on this kind of test? If so, what is it?
Not that was specifically mentioned for this test, I believe...

Kind regards,
JP Velders
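Since the answer to (1) establishes that the quoted figures are application-layer goodput, it is worth noting how small the gap to wire rate becomes with jumbo frames, as Jim conceded. A sketch (assuming a 9000-byte jumbo MTU and plain 40-byte IPv6 plus 20-byte TCP headers, with no options or extension headers):

```python
# Goodput as a fraction of IP-layer throughput for IPv6 + TCP.
# Assumes 40-byte IPv6 and 20-byte TCP headers, no options or
# extension headers; link-layer framing ignored.

HDRS = 40 + 20  # bytes of headers per packet

for mtu in (1500, 9000):  # standard vs. an assumed jumbo MTU
    print(f"MTU {mtu}: payload fraction = {(mtu - HDRS) / mtu:.1%}")

# MTU 1500: payload fraction = 96.0%
# MTU 9000: payload fraction = 99.3%
```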
On Tue, 24 Apr 2007 bmanning@karoshi.com wrote:
> For the first set of IPv6 records, a team from the University of Tokyo, WIDE Project, NTT Communications, JGN2, SURFnet, CANARIE, Pacific Northwest Gigapop and other institutions collaborated to create a network path more than 30,000 kilometers long, crossing six international networks and covering over three quarters of the circumference of the Earth. [...]
Mind you, those crazy Japanese do this every year between Christmas and New Year... ;) Most of the pipes they used also carry other research traffic throughout most of the year... This year was even more cumbersome because of some issues with the OC192s between Amsterdam and the USA...

Kind regards,
JP Velders
On Sun, Apr 29, 2007 at 01:57:26PM +0200, JP Velders wrote:
> Mind you, those crazy Japanese do this every year between Christmas and New Year... ;) Most of the pipes they used also carry other research traffic throughout most of the year...
we -love- the crazy Japanese doing this kind of stuff. the US folks seem to have lost momentum in the past decade. while the pipes do get re-purposed on a regular basis, they do tend to shake out interoperability problems, as you note above. me, i await the spiral loop that includes the southern hemisphere ...

--bill
participants (7)

- Adrian Chadd
- bmanning@karoshi.com
- Jim Shankland
- JP Velders
- Leigh Porter
- Simon Leinen
- Steven M. Bellovin