Best utilizing fat long pipes and large file transfer
Date: Thu, 12 Jun 2008 15:37:47 -0700 From: Sean Knox <sean@craigslist.org>
Hi,
I'm looking for input on the best practices for sending large files over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT). I'd like to avoid modifying TCP windows and options on end hosts where possible (I have a lot of them). I've seen products that work as "transfer stations" using "reliable UDP" to get around the windowing problem.
I'm thinking of setting up servers with optimized TCP settings to push big files around data centers but I'm curious to know how others deal with LFN+large transfers.
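(For context on the windowing problem: the bandwidth-delay product of the link above sets how much data must be in flight to keep the pipe full. A quick back-of-the-envelope check, using the figures from the post and plain shell arithmetic:)

    # 1 Gbit/s x 20 ms RTT, expressed in bytes of in-flight data
    echo $(( 1000000000 / 8 * 20 / 1000 ))   # 2500000 bytes, i.e. ~2.5 MB of window needed
    # by contrast, a default 64 KB window caps one stream at about
    # 65536 bytes / 0.020 s = ~3.3 MB/s (~26 Mbit/s) on this path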
Not very fat or very long. I need to deal with 10GE over 200 ms (or more). These should be pretty easy, but, as you realize, you will need large enough windows to keep the traffic in transit from filling the window and stalling the flow. The laws of physics (speed of light) are not forgiving.

There is a project from Martin Swaney at U-Delaware (with Guy Almes and Aaron Brown) to do exactly what you are looking for:
http://portal.acm.org/citation.cfm?id=1188455.1188714&coll=GUIDE&dl=GUIDE
http://www.internet2.edu/pubs/phoebus.pdf
ESnet, Internet2 and Geant demonstrated it at last November's SuperComputing Conference in Reno.

The idea is to use tuned proxies that are close to the source and destination and are optimized for the delay. Local systems can move data through them without dealing with the need to tune for the delay-bandwidth product. Note that this "man in the middle" may not play well with many security controls which deliberately try to prevent it, so you still may need some adjustments.

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
The idea is to use tuned proxies that are close to the source and destination and are optimized for the delay. Local systems can move data through them without dealing with the need to tune for the delay-bandwidth product. Note that this "man in the middle" may not play well with many security controls which deliberately try to prevent it, so you still may need some adjustments.
and for those of us who are addicted to simple rsync, or whatever over ssh, you should be aware of the really bad openssh windowing issue. randy
And while I certainly like open source solutions, there are plenty of commercial products that do things to optimize this. Depending on the type of traffic, the products do different things. Many of the serial-byte caching variety (e.g. Riverbed/F5) now also do connection/flow optimization and proxying, while many of the network optimizers now are doing serial-byte caching.

I also for a while was looking for multicast-based file transfer tools, but couldn't find any that were stable. I'd be interested in seeing the names of some of the projects Robert is talking about - perhaps I missed a few when I looked.

One thing that is a simple solution? Split the file and then send all the parts at the same time. This helps a fair bit, and is easy to implement (a sketch of this follows after this message). Few things drive home the issues with TCP window scaling better than moving a file via ftp and then via ttcp. Sure, you don't always get all the file, but it does get there fast!

--D

On Thu, Jun 12, 2008 at 5:02 PM, Randy Bush <randy@psg.com> wrote:
The idea is to use tuned proxies that are close to the source and destination and are optimized for the delay. Local systems can move data through them without dealing with the need to tune for the delay-bandwidth product. Note that this "man in the middle" may not play well with many security controls which deliberately try to prevent it, so you still may need some adjustments.
and for those of us who are addicted to simple rsync, or whatever over ssh, you should be aware of the really bad openssh windowing issue.
randy
--
Darren Bolding
darren@bolding.org
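A minimal sketch of Darren's split-and-send-in-parallel idea, assuming plain ssh/scp and hypothetical host and path names; each part travels over its own TCP connection, so the aggregate rate is no longer capped by a single connection's window:

    # split a large file into 1 GB pieces and push them concurrently
    split -b 1G bigfile.tar bigfile.part.
    for p in bigfile.part.*; do
        scp "$p" user@remote.example.net:/staging/ &
    done
    wait
    # reassemble on the far side (split's aa, ab, ... suffixes sort correctly)
    ssh user@remote.example.net 'cat /staging/bigfile.part.* > /staging/bigfile.tar'
    # verify end to end before cleaning up
    md5sum bigfile.tar
    ssh user@remote.example.net 'md5sum /staging/bigfile.tar'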
Randy Bush <randy@psg.com> writes:
and for those of us who are addicted to simple rsync, or whatever over ssh, you should be aware of the really bad openssh windowing issue.
As a user of hpn-ssh for years, I have to wonder if there is any reason (aside from the sheer cussedness for which Theo is infamous) that the window improvements at least from hpn-ssh haven't been backported into mainline openssh? I suppose there might be portability concerns with the multithreaded ciphers, and there's certainly a good argument for not supporting NONE as a cipher type out of the box without a recompile, but there's not much excuse for the fixed size tiny buffers - I mean, it's 2008 already... -r
From: "Robert E. Seastrom" <rs@seastrom.com> Date: Thu, 12 Jun 2008 21:15:49 -0400
Randy Bush <randy@psg.com> writes:
and for those of us who are addicted to simple rsync, or whatever over ssh, you should be aware of the really bad openssh windowing issue.
As a user of hpn-ssh for years, I have to wonder if there is any reason (aside from the sheer cussedness for which Theo is infamous) that the window improvements at least from hpn-ssh haven't been backported into mainline openssh? I suppose there might be portability concerns with the multithreaded ciphers, and there's certainly a good argument for not supporting NONE as a cipher type out of the box without a recompile, but there's not much excuse for the fixed size tiny buffers - I mean, it's 2008 already...
Theo is known for his amazing stubbornness, but for areas involving security and cryptography, I find it hard to say that his conservatism is excessive. Crypto is hard and often it is very non-intuitive. I remember the long discussions on entropy harvesting and seeding in FreeBSD, which fortunately has cryptography professionals who could pick every nit and make sure FreeBSD did not end up with Debian-type egg all over its virtual face.

Then again, the tiny buffers are silly and I can't imagine any possible security issue there.

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
"Kevin Oberman" <oberman@es.net> writes:
From: "Robert E. Seastrom" <rs@seastrom.com> Date: Thu, 12 Jun 2008 21:15:49 -0400
Randy Bush <randy@psg.com> writes:
and for those of us who are addicted to simple rsync, or whatever over ssh, you should be aware of the really bad openssh windowing issue.
As a user of hpn-ssh for years, I have to wonder if there is any reason (aside from the sheer cussedness for which Theo is infamous) that the window improvements at least from hpn-ssh haven't been backported into mainline openssh? I suppose there might be portability concerns with the multithreaded ciphers, and there's certainly a good argument for not supporting NONE as a cipher type out of the box without a recompile, but there's not much excuse for the fixed size tiny buffers - I mean, it's 2008 already...
Theo is known for his amazing stubbornness, but for areas involving security and cryptography, I find it hard to say that his conservatism is excessive. Crypto is hard and often it is very non-intuitive. I remember the long discussions on entropy harvesting and seeding in FreeBSD, which fortunately has cryptography professionals who could pick every nit and make sure FreeBSD did not end up with Debian-type egg all over its virtual face.
Then again, the tiny buffers are silly and I can't imagine any possible security issue there.
Many good reasons to not goof with the crypto. The window size was the main thing I was poking at. ---rob
Robert E. Seastrom wrote:
As a user of hpn-ssh for years, I have to wonder if there is any reason (aside from the sheer cussedness for which Theo is infamous) that the window improvements at least from hpn-ssh haven't been backported into mainline openssh? I suppose there might be portability concerns with the multithreaded ciphers, and there's certainly a good argument for not supporting NONE as a cipher type out of the box without a recompile, but there's not much excuse for the fixed size tiny buffers - I mean, it's 2008 already...
Fedora 8 and 9 and Ubuntu 8.04 include the upstream OpenSSH, which includes the large-window patches. OpenSSH 4.7 ChangeLog contains:
Other changes, new functionality and fixes in this release: ... * The SSH channel window size has been increased, and both ssh(1) and sshd(8) now send window updates more aggressively. These improve performance on high-BDP (Bandwidth Delay Product) networks.
Cheers, Glen -- Glen Turner
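(A quick way to tell whether a given machine already has the behaviour described in that ChangeLog entry is to check the installed version; 4.7 and later carry the larger channel windows:)

    ssh -V    # e.g. "OpenSSH_4.7p1 ..." or newer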
Glen Turner <gdt@gdt.id.au> writes:
Fedora 8 and 9 and Ubuntu 8.04 include the upstream OpenSSH, which includes the large-window patches. OpenSSH 4.7 ChangeLog contains:
Other changes, new functionality and fixes in this release: ... * The SSH channel window size has been increased, and both ssh(1) and sshd(8) now send window updates more aggressively. These improve performance on high-BDP (Bandwidth Delay Product) networks.
Turns out that the Mac does too. Haven't checked FreeBSD 7 yet, but 6.x is definitely lagging. Thanks for the clue, -r
Date: Fri, 13 Jun 2008 09:02:31 +0900 From: Randy Bush <randy@psg.com>
The idea is to use tuned proxies that are close to the source and destination and are optimized for the delay. Local systems can move data through them without dealing with the need to tune for the delay-bandwidth product. Note that this "man in the middle" may not play well with many security controls which deliberately try to prevent it, so you still may need some adjustments.
and for those of us who are addicted to simple rsync, or whatever over ssh, you should be aware of the really bad openssh windowing issue.
Actually, OpenSSH has a number of issues that restrict performance. Read about them at <http://www.psc.edu/networking/projects/hpn-ssh/>

The Pittsburgh Supercomputer Center fixed these problems, and on FreeBSD you can get the fixes by replacing the base OpenSSH with the openssh-portable port and selecting the HPN (High Performance Networking) and OVERWRITE_BASE options. I assume other OSes can deal with this, or you can get the patches directly from: <http://www.psc.edu/networking/projects/hpn-ssh/openssh-5.0p1-hpn13v3.diff.gz>

The port may also be built to support SmartCards, which we require for authentication.

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
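A sketch of the FreeBSD steps Kevin describes; the port option names below (HPN, OVERWRITE_BASE) are the ones he mentions, but port menus vary between ports-tree vintages, so verify them in the config dialog:

    cd /usr/ports/security/openssh-portable
    make config           # select the HPN and OVERWRITE_BASE options
    make install clean
    # or fetch the PSC patch directly for a vanilla OpenSSH 5.0p1 source tree:
    # fetch http://www.psc.edu/networking/projects/hpn-ssh/openssh-5.0p1-hpn13v3.diff.gz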
On 2008-06-12, Kevin Oberman <oberman@es.net> wrote:
The idea is to use tuned proxies that are close to the source and destination and are optimized for the delay.
OpenBSD has relayd(8), a versatile tool which can be used here. There is support for proxying TCP connections. These can be modified in a few ways: socket options (nodelay, sack, socket buffer) can be adjusted on the relayed connection, and SSL can be offloaded. It works with the firewall state table and can retain the original addresses. Parts of this are only in development snapshots at present but will be in the 4.4 release.
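A minimal relayd.conf sketch of such a TCP relay, with hypothetical addresses; the exact grammar varies between relayd releases (and, as Stuart notes, some of this was only in snapshots at the time), so treat this as illustrative and check relayd.conf(5) on your system:

    # enlarge the socket buffers on the relayed (long-RTT) connection
    protocol "bigxfer" {
            tcp { nodelay, sack, socket buffer 4194304 }
    }

    relay "lfn" {
            # local hosts connect here with their default TCP settings
            listen on 10.0.0.1 port 8022
            protocol "bigxfer"
            # the relay itself carries the traffic across the long path
            forward to 192.0.2.10 port 22
    }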
At 06:37 PM 6/12/2008, you wrote:
I'm looking for input on the best practices for sending large files over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT). I'd like to avoid modifying TCP windows and options on end hosts where possible (I have a lot of them). I've seen products that work as "transfer stations" using "reliable UDP" to get around the windowing problem.
I'm thinking of setting up servers with optimized TCP settings to push big files around data centers but I'm curious to know how others deal with LFN+large transfers.
In our experience, you can't get to line speed with over 20-30ms of latency using TCP regardless of how much you tweak it. We transfer files across the US with 60-70ms at line speeds with UDP based file transfer programs. There are a number of open source projects out there designed for this purpose. -Robert Tellurian Networks - Global Hosting Solutions Since 1995 http://www.tellurian.com | 888-TELLURIAN | 973-300-9211 "Well done is better than well said." - Benjamin Franklin
Date: Thu, 12 Jun 2008 19:26:56 -0400 From: Robert Boyle <robert@tellurian.com>
At 06:37 PM 6/12/2008, you wrote:
I'm looking for input on the best practices for sending large files over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT). I'd like to avoid modifying TCP windows and options on end hosts where possible (I have a lot of them). I've seen products that work as "transfer stations" using "reliable UDP" to get around the windowing problem.
I'm thinking of setting up servers with optimized TCP settings to push big files around data centers but I'm curious to know how others deal with LFN+large transfers.
In our experience, you can't get to line speed with over 20-30ms of latency using TCP regardless of how much you tweak it. We transfer files across the US with 60-70ms at line speeds with UDP based file transfer programs. There are a number of open source projects out there designed for this purpose.
Clearly you have failed to try very hard or to check into what others have done. We routinely move data at MUCH higher rates over TCP at latencies over 50 ms. one way (>100 ms. RTT). We find it fairly easy to move data at over 4 Gbps continuously.

If you can't fill a GE to 80% (800 Mbps) at 30 ms, you really are not trying very hard. Note: I am talking about a single TCP stream running for over 5 minutes at a time on tuned systems. Tuning for most modern network stacks is pretty trivial. Some older stacks (e.g. FreeBSD V6) are hopeless. I can't speak to how Windows does as I make no use of it for high-speed bulk transfers.

It's also fairly easy to run multiple parallel TCP streams to completely fill a 10 Gbps pipe at any distance. This is the preferred method for moving large volumes of data around the world in the R&E community as it requires little or no system tuning and is available in several open-source tools.

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
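For a sense of what "tuned systems" involves on a modern stack, the host side is mostly a matter of letting the socket buffers grow to the bandwidth-delay product. A hedged Linux sketch (values sized for roughly 1 Gbit/s at ~100 ms RTT, not ESnet's actual configuration), followed by single-stream and parallel-stream tests with iperf against a hypothetical far host:

    # allow up to 16 MB socket buffers and let TCP autotune within that limit
    sysctl -w net.core.rmem_max=16777216
    sysctl -w net.core.wmem_max=16777216
    sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
    sysctl -w net.ipv4.tcp_window_scaling=1

    # one long-running stream, then eight in parallel, 5 minutes each
    iperf -c far-host.example.net -w 16M -t 300
    iperf -c far-host.example.net -w 16M -t 300 -P 8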
At 12:01 PM 6/13/2008, Kevin Oberman wrote:
Clearly you have failed to try very hard or to check into what others have done. We routinely move data at MUCH higher rates over TCP at latencies over 50 ms. one way (>100 ms. RTT). We find it fairly easy to move data at over 4 Gbps continuously.
That's impressive.
If you can't fill a GE to 80% (800 Mbps) at 30 ms, you really are not trying very hard. Note: I am talking about a single TCP stream running for over 5 minutes at a time on tuned systems. Tuning for most modern network stacks is pretty trivial. Some older stacks (e.g. FreeBSD V6) are hopeless. I can't speak to how Windows does as I make no use of it for high-speed bulk transfers.
Let me refine my post then... In our experience, you can't get to line speed with over 20-30ms of latency using TCP on _Windows_ regardless of how much you tweak it. >99% of the servers in our facilities are Windows based. I should have been more specific. -Robert Tellurian Networks - Global Hosting Solutions Since 1995 http://www.tellurian.com | 888-TELLURIAN | 973-300-9211 "Well done is better than well said." - Benjamin Franklin
Robert Boyle wrote:
At 12:01 PM 6/13/2008, Kevin Oberman wrote:
Clearly you have failed to try very hard or to check into what others have done. We routinely move data at MUCH higher rates over TCP at latencies over 50 ms. one way (>100 ms. RTT). We find it fairly easy to move data at over 4 Gbps continuously.
That's impressive.
If you can't fill a GE to 80% (800 Mbps) at 30 ms, you really are not trying very hard. Note: I am talking about a single TCP stream running for over 5 minutes at a time on tuned systems. Tuning for most modern network stacks is pretty trivial. Some older stacks (e.g. FreeBSD V6) are hopeless. I can't speak to how Windows does as I make no use of it for high-speed bulk transfers.
Let me refine my post then... In our experience, you can't get to line speed with over 20-30ms of latency using TCP on _Windows_ regardless of how much you tweak it. >99% of the servers in our facilities are Windows based. I should have been more specific.
I'll stipulate that I haven't looked too deeply into this problem for Windows systems. But I can't imagine it would be too hard to put a firewall/proxy (think SOCKS) on either side of the link and have the two FW/proxies adjust the TCP settings between themselves (or use an always-on, tuned tunnel). It involves reasonably little invasion or reconfiguration and is probably reasonably solid. DJ
On Fri, 13 Jun 2008, Robert Boyle wrote:
Let me refine my post then... In our experience, you can't get to line speed with over 20-30ms of latency using TCP on _Windows_ regardless of how much you tweak it. >99% of the servers in our facilities are Windows based. I should have been more specific.
It's actually not that hard on Windows. -Dan
goemon@anime.net wrote:
It's actually not that hard on Windows.
Don't make me laugh. Instructions that start "Enable TCP window scaling and time stamps by using the Registry Editor to browse to location [HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters] and add the key Tcp1323Opts with value 3" are "hard". If you think otherwise, pick up the phone, pretend to work for an ISP Help Desk, and walk someone who doesn't work in IT through the changes.

Microsoft scatter the tuning information for Windows XP all across their website. Some of it is undocumented by Microsoft (and thus may change without notice). The only saving grace is DrTCP, a third-party application which hides all of the registry detail (and potential for disaster) under a nice GUI.

Then there's the deliberate nobbling of the TCP implementation, such as the restriction to ten connections in Windows XP SP3. Apparently you're meant to buy Windows Server if you are running P2P applications :-)

Windows Vista is a vast improvement over Windows XP (and I bet that isn't said of many components of Vista). It has an autotuning TCP with a 16MB buffer, which makes the defaults fine for ADSL and cable, but still requires machines at universities to be altered. Vista offers an alternative TCP congestion control algorithm -- Compound TCP. Not much is known of the performance attributes of this algorithm, mainly because I.P. claims prevented its incorporation into Linux, the corral where most TCP algorithm shoot-outs take place.

--
Glen Turner
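For completeness, the registry change Glen quotes can also be made from an administrator command prompt instead of the Registry Editor; it sets the same key and value, typically needs a reboot to take effect, and you should back up the registry first:

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v Tcp1323Opts /t REG_DWORD /d 3 /f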
On Mon, 16 Jun 2008, Glen Turner wrote:
Then there's the deliberate nobbling of the TCP implementation, such as the restriction to ten connections in Windows XP SP3. Apparently you're meant to buy Windows Server if you are running P2P applications :-)
are you quite sure it is *10 tcp connections*? have you tested this? -Dan
I have tested it with Icecast using audio streams, and it is 100, not 10. We moved to W2K Server and the glass wall at 100 streams went away. Bob

On Mon, 16 Jun 2008 10:25:18 -0700 (PDT), goemon@anime.net wrote:
On Mon, 16 Jun 2008, Glen Turner wrote:
Then there's the deliberate nobbling of the TCP implementation, such as the restriction to ten connections in Windows XP SP3. Apparently you're meant to buy Windows Server if you are running P2P applications :-)
are you quite sure it is *10 tcp connections*? have you tested this?
-Dan
On Mon, Jun 16, 2008 at 12:40:47PM +0930, Glen Turner wrote:
"Enable TCP window scaling and time stamps by using the Registry Editor to browse to location [HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Tcpip\Parameters] and add the key Tcp1323Opts with value 3"
are "hard". If you think otherwise, pick up the phone, pretend to work for an ISP Help Desk, and walk someone who doesn't work in IT through the changes.
For what it's worth, I did first-tier for about 4 years, and yes, I had to walk some people through that... and as long as they weren't the type to *get frustrated* while doing things they didn't understand, it generally went swimmingly. While I was driving down the interstate. In my 21 year old *stickshift* BMW. With a Big Mac in the other hand. ;-) So as long as *you* know how to drive the tools, and you're not in a hurry, that's not "hard". Just "complex". Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://baylink.pitas.com '87 e24 St Petersburg FL USA http://photo.imageinc.us +1 727 647 1274 Those who cast the vote decide nothing. Those who count the vote decide everything. -- (Joseph Stalin)
Date: Fri, 13 Jun 2008 12:40:48 -0400 From: Robert Boyle <robert@tellurian.com>
At 12:01 PM 6/13/2008, Kevin Oberman wrote:
Clearly you have failed to try very hard or to check into what others have done. We routinely move data at MUCH higher rates over TCP at latencies over 50 ms. one way (>100 ms. RTT). We find it fairly easy to move data at over 4 Gbps continuously.
That's impressive.
If you can't fill a GE to 80% (800 Mbps) at 30 ms, you really are not trying very hard. Note: I am talking about a single TCP stream running for over 5 minutes at a time on tuned systems. Tuning for most modern network stacks is pretty trivial. Some older stacks (e.g. FreeBSD V6) are hopeless. I can't speak to how Windows does as I make no use of it for high-speed bulk transfers.
Let me refine my post then... In our experience, you can't get to line speed with over 20-30ms of latency using TCP on _Windows_ regardless of how much you tweak it. >99% of the servers in our facilities are Windows based. I should have been more specific.
Sorry, I don't do Windows, but my friends who do claim that this is not true.

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
Take a look at some of the stuff from Aspera. Mark On Thu, Jun 12, 2008 at 03:37:47PM -0700, Sean Knox wrote:
Hi,
I'm looking for input on the best practices for sending large files over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT). I'd like to avoid modifying TCP windows and options on end hosts where possible (I have a lot of them). I've seen products that work as "transfer stations" using "reliable UDP" to get around the windowing problem.
I'm thinking of setting up servers with optimized TCP settings to push big files around data centers but I'm curious to know how others deal with LFN+large transfers.
thanks, Sean
Hi,
I'm looking for input on the best practices for sending large files
There are both commercial products (fastcopy) and various "free"(*) products (bbcp, bbftp, gridftp) that will send large files. While they can take advantage of larger windows, they also have the capability of using multiple streams (dealing with the inability to tune the TCP stack). There are, of course, competitors to these products which you should look into. As always, YMWV.

Some references:

http://www.softlink.com/fastcopy_techie.html
(Some parts of NASA seem to like fastcopy)

http://nccs.gov/user-support/general-support/data-transfer/bbcp/
(Full disclosure, bbcp was written by someone who sits about 3 meters from where I am sitting, but I cannot find a nice web reference from him about the product, so I am showing a different site's documentation)

http://doc.in2p3.fr/bbftp/
(One of the first to use multistream, for BaBar)

http://www.globus.org/grid_software/data/gridftp.php
(Part of the Globus toolkit. Somewhat heavier weight if all you want is file transfer.)

http://fasterdata.es.net/tools.html
(A reference I am surprised Kevin did not point to :-)

http://www.slac.stanford.edu/grp/scs/net/talk/ggf5_jul2002/NMWG_GGF5.pdf
(A few-year-old performance evaluation)

www.triumf.ca/canarie/amsterdam-test/References/010402-ftp.ppt
(Another older performance evaluation)

Gary

(*) Some are GPL, and some (modified) BSD licenses. Which one is "free enough" depends on some strongly held beliefs of the validator.
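As a concrete example of the multi-stream tools Gary lists, a bbcp copy might look like the following; the option letters come from bbcp's own documentation, but builds differ, so verify against your build's help output (host and path are hypothetical):

    # 8 parallel TCP streams, 4 MB window per stream, progress report every 10 seconds
    bbcp -s 8 -w 4M -P 10 bigfile.tar user@remote.example.net:/data/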
I'm looking for input on the best practices for sending large files over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT).
Providing you have RFC1323-type extensions enabled on a semi-decent OS, a 4MB TCP window should be more than sufficient to fill a GbE pipe over 30 msec.

With a modified TCP stack that uses TCP window sizes up to 32MB, I've worked with numerous customers to achieve wire-rate GbE async replication for storage arrays with FCIP. The modifications to TCP were mostly to adjust how it reacts to packet loss, e.g. don't "halve the window". The intent of those modifications is that it doesn't use the "greater internet" but is more suited for private connections within an enterprise customer environment. That is used in production today on many Cisco MDS 9xxx FC switch environments.
I'd like to avoid modifying TCP windows and options on end hosts where possible (I have a lot of them). I've seen products that work as "transfer stations" using "reliable UDP" to get around the windowing problem.
Given you don't want to modify all your hosts, you could 'proxy' said TCP connections via 'socat' or 'netcat++'. cheers, lincoln.
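A minimal socat sketch of that proxying arrangement, assuming one relay box near each site (host names and ports are hypothetical); only the two relays need tuned TCP settings, and the untuned end hosts each talk to a nearby relay over a short-RTT hop:

    # destination host (untuned): receive the stream and write it to disk
    socat -u TCP-LISTEN:9001,reuseaddr - > /data/bigfile.tar

    # relay near the destination: accept from the far relay, hand off locally
    socat TCP-LISTEN:9000,fork,reuseaddr TCP:dst-host.example.net:9001

    # relay near the source: accept local connections, carry them across the long path
    # (sndbuf/rcvbuf here illustrate that the relay hop is where big buffers matter)
    socat TCP-LISTEN:9000,fork,reuseaddr TCP:dst-relay.example.net:9000,sndbuf=8388608,rcvbuf=8388608

    # source host (untuned): push the file to its nearby relay
    socat -u - TCP:src-relay.example.net:9000 < /data/bigfile.tar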
Has anybody heard if Comcast is having problems today? Thank you, Taeko
On Thu, Jun 12, 2008 at 06:02:52PM -0700, Thompson, Taeko wrote:
Has anybody heard if Comcast is having problems today?
Since I got on shift two hours ago, I've done nothing but stare at traceroutes into and out of Comcast space trying to reassure dozens of customers that we're not down; Comcast is having problems. Our upstream claims they've been dealing with Comcast customers all (US) day. I'm pretty sure there's some serious weirdness going on in there. (Oh, and don't reply to an existing message to start a new thread) - Matt
On Fri, 13 Jun 2008, Randy Bush wrote:
Has anybody heard if Comcast is having problems today?
lucy was having problems in eugene orygun. she diagnosed and then gave up and went to dinner.
randy
I have a comcast business line in Western WA and have seen no hiccups so far today. Main IP is 75.146.60.201. If someone that is seeing issues can send me an IP, I can traceroute to see if I can reach it from within Comcast. -- Steve Equal bytes for women.
On Fri, 13 Jun 2008, Randy Bush wrote:
Has anybody heard if Comcast is having problems today?
lucy was having problems in eugene orygun. she diagnosed and then gave up and went to dinner.
randy
I have a comcast business line in Western WA and have seen no hiccups so far today. Main IP is 75.146.60.201.
If someone that is seeing issues can send me an IP, I can traceroute to see if I can reach it from within Comcast. -- Steve Equal bytes for women.
I have Smokeping running from behind my Comcast (Eastern MA / New England) and have alarms of latency from 6:28P-7:18pm EST. Not sure if attachments make it through - but doc of last 3 hr graph showing .13% loss avg and 4.57% max loss. Seems clean otherwise. On Tuesday I had alarms going all day long.....I run monitors to my Corp Network and Legacy Genuity DNS and the results are the same for both. Eric
On Thu, 12 Jun 2008 22:01:03 -0400 <ekagan@axsne.com> wrote:
On Fri, 13 Jun 2008, Randy Bush wrote:
Has anybody heard if Comcast is having problems today?
lucy was having problems in eugene orygun. she diagnosed and then gave up and went to dinner.
randy
I have a comcast business line in Western WA and have seen no hiccups so far today. Main IP is 75.146.60.201.
If someone that is seeing issues can send me an IP, I can traceroute to see if I can reach it from within Comcast. -- Steve Equal bytes for women.
I have Smokeping running from behind my Comcast (Eastern MA / New England) and have alarms of latency from 6:28P-7:18pm EST. Not sure if attachments make it through - but doc of last 3 hr graph showing .13% loss avg and 4.57% max loss. Seems clean otherwise. On Tuesday I had alarms going all day long.....I run monitors to my Corp Network and Legacy Genuity DNS and the results are the same for both.
I didn't get online tonight till ~8pm, but I haven't noticed any problems at all. (I'm on 74/8.) --Steve Bellovin, http://www.cs.columbia.edu/~smb
Just to confirm from here (Toronto):

core2-rtr-to#sh ip bgp 73.72.92.0
% Network not in table

Paul

-----Original Message-----
From: Tom [mailto:bifrost@minions.com]
Sent: Thursday, June 12, 2008 9:52 PM
To: nanog@merit.edu
Subject: Re: comcast

On Thu, 12 Jun 2008, Thompson, Taeko wrote:
Has anybody heard if Comcast is having problems today?
I've got a customer in 73.72.92.0/24, and I don't see the prefix on the net.
Tom, where would that be located? From my house, my UUNet/MCI/Verizon Business link doesn't have it. My Speakeasy link doesn't have it either. All of Comcast was out in my neighborhood (Alexandria, VA) yesterday at 7pm when I got home, was still out at 11pm when I went to bed, and was up and running fine this AM. David -- http://dcp.dcptech.com
-----Original Message-----
From: Tom [mailto:bifrost@minions.com]
Sent: Thursday, June 12, 2008 9:52 PM
To: nanog@merit.edu
Subject: Re: comcast
On Thu, 12 Jun 2008, Thompson, Taeko wrote:
Has anybody heard if Comcast is having problems today?
I've got a customer in 73.72.92.0/24, and I don't see the prefix on the net.
When was the last time you saw this prefix reachable? I don't see anything announced from Comcast's 73.0.0.0/8 allocation within the past 2 weeks...

On Thu, Jun 12, 2008 at 9:51 PM, Tom <bifrost@minions.com> wrote:
On Thu, 12 Jun 2008, Thompson, Taeko wrote:
Has anybody heard if Comcast is having problems today?
I've got a customer in 73.72.92.0/24, and I don't see the prefix on the net.
On Thu, Jun 12, 2008 at 10:24 PM, Christian <christian@visr.org> wrote:
When was the last time you saw this prefix reachable?
I don't see anything announced from Comcast's 73.0.0.0/8 allocation within the past 2 weeks...
FYI: Internally within Comcast it does route:

$ mtr --report -c 1 73.0.0.1
HOST: blue                       Loss%  Snt  Last   Avg  Best  Wrst StDev
  1. linksys                      0.0%    1   0.9   0.9   0.9   0.9   0.0
  2. c-24-98-192-1.hsd1.ga.comcas 0.0%    1   7.3   7.3   7.3   7.3   0.0
  3. ge-2-5-ur01.a2atlanta.ga.atl 0.0%    1   8.0   8.0   8.0   8.0   0.0
  4. te-9-1-ur02.a2atlanta.ga.atl 0.0%    1   6.0   6.0   6.0   6.0   0.0
  5. te-9-3-ur01.b0atlanta.ga.atl 0.0%    1   6.5   6.5   6.5   6.5   0.0
  6. 68.85.232.62                 0.0%    1   6.4   6.4   6.4   6.4   0.0
  7. po-15-ar01.b0atlanta.ga.atla 0.0%    1   7.6   7.6   7.6   7.6   0.0
  8. te-4-1-cr01.atlanta.ga.cbone 0.0%    1   9.0   9.0   9.0   9.0   0.0
  9. te-1-1-cr01.charlotte.nc.cbo 0.0%    1  11.8  11.8  11.8  11.8   0.0
 10. te-1-1-cr01.richmond.va.cbon 0.0%    1  19.5  19.5  19.5  19.5   0.0
 11. te-1-1-cr01.mclean.va.cbone. 0.0%    1  24.0  24.0  24.0  24.0   0.0
 12. te-1-1-cr01.philadelphia.pa. 0.0%    1  25.6  25.6  25.6  25.6   0.0
 13. be-40-crs01.401nbroadst.pa.p 0.0%    1  26.5  26.5  26.5  26.5   0.0
 14. be-50-crs01.ivyland.pa.panjd 0.0%    1  28.8  28.8  28.8  28.8   0.0
 15. po-10-ar01.verona.nj.panjde. 0.0%    1  41.7  41.7  41.7  41.7   0.0
 16. po-10-ar01.eatontown.nj.panj 0.0%    1  33.5  33.5  33.5  33.5   0.0
 17. po-10-ur01.middletown.nj.pan 0.0%    1  34.4  34.4  34.4  34.4   0.0
 18. po-10-ur01.burlington.nj.pan 0.0%    1  48.0  48.0  48.0  48.0   0.0

-Jim P.
Interestingly enough, mine goes the other way:

brokenrobot:~ christian$ traceroute 73.72.92.1
traceroute to 73.72.92.1 (73.72.92.1), 64 hops max, 40 byte packets
 1  192.168.2.1 (192.168.2.1)  12.130 ms  1.135 ms  1.262 ms
 2  * * *
 3  ge-2-3-ur01.jerseycity.nj.panjde.comcast.net (68.86.220.185)  10.710 ms  9.638 ms  12.299 ms
 4  po-10-ur02.jerseycity.nj.panjde.comcast.net (68.86.209.246)  15.747 ms  13.280 ms  10.082 ms
 5  po-10-ur01.narlington.nj.panjde.comcast.net (68.86.209.250)  11.373 ms  10.847 ms  11.873 ms
 6  po-10-ur02.narlington.nj.panjde.comcast.net (68.86.158.178)  36.034 ms  11.622 ms  16.032 ms
 7  po-70-ar01.verona.nj.panjde.comcast.net (68.86.209.254)  17.642 ms  16.334 ms  11.358 ms
 8  te-0-7-0-7-crs01.plainfield.nj.panjde.comcast.net (68.86.72.18)  25.986 ms  13.774 ms  12.524 ms
 9  te-4-1-cr01.newyork.ny.cbone.comcast.net (68.86.72.17)  22.172 ms  24.545 ms  17.774 ms
10  te-9-1-cr01.philadelphia.pa.cbone.comcast.net (68.86.68.5)  16.040 ms  14.670 ms  14.232 ms
11  te-9-1-cr01.mclean.va.cbone.comcast.net (68.86.68.1)  32.585 ms  20.303 ms  28.397 ms
12  te-9-1-cr02.pittsburgh.pa.cbone.comcast.net (68.86.68.125)  24.909 ms  24.261 ms  34.669 ms
13  te-8-1-cr01.cleveland.oh.cbone.comcast.net (68.86.68.117)  28.253 ms  26.949 ms  27.105 ms
14  te-9-1-cr01.chicago.il.cbone.comcast.net (68.86.68.22)  37.728 ms  59.564 ms  36.835 ms
15  te-9-1-cr01.omaha.ne.cbone.comcast.net (68.86.68.30)  53.805 ms  54.945 ms  53.015 ms
16  te-9-1-cr01.denver.co.cbone.comcast.net (68.86.68.42)  62.957 ms  74.028 ms  63.913 ms
17  te-9-1-cr01.denverqwest.co.cbone.comcast.net (68.86.68.146)  62.570 ms  68.635 ms  62.655 ms
18  te-2-1-cr01.santateresa.tx.cbone.comcast.net (68.86.68.150)  74.011 ms  74.431 ms  76.615 ms
19  te-8-1-cr01.losangeles.ca.cbone.comcast.net (68.86.68.81)  92.442 ms  93.805 ms  93.965 ms
20  te-9-1-cr01.sacramento.ca.cbone.comcast.net (68.86.68.69)  105.415 ms  102.575 ms  103.331 ms
21  te-0-2-0-1-ar01.oakland.ca.sfba.comcast.net (68.87.192.134)  111.564 ms  111.281 ms  106.768 ms
22  te-9-3-ur02.pittsburg.ca.sfba.comcast.net (68.87.192.133)  105.908 ms  105.997 ms  108.539 ms
23  GE-6-1-ur01.pittsburg.ca.sfba.comcast.net (68.87.199.173)  107.303 ms  108.250 ms  110.819 ms
24  ge-0-1-ubr02.pittsburg.ca.sfba.comcast.net (68.87.197.22)  119.945 ms  *  107.782 ms

On Thu, Jun 12, 2008 at 10:35 PM, Jim Popovitch <yahoo@jimpop.com> wrote:
On Thu, Jun 12, 2008 at 10:24 PM, Christian <christian@visr.org> wrote:

When was the last time you saw this prefix reachable? I don't see anything announced from Comcast's 73.0.0.0/8 allocation within the past 2 weeks...
FYI: Internally within Comcast it does route:
$ mtr --report -c 1 73.0.0.1
HOST: blue                       Loss%  Snt  Last   Avg  Best  Wrst StDev
  1. linksys                      0.0%    1   0.9   0.9   0.9   0.9   0.0
  2. c-24-98-192-1.hsd1.ga.comcas 0.0%    1   7.3   7.3   7.3   7.3   0.0
  3. ge-2-5-ur01.a2atlanta.ga.atl 0.0%    1   8.0   8.0   8.0   8.0   0.0
  4. te-9-1-ur02.a2atlanta.ga.atl 0.0%    1   6.0   6.0   6.0   6.0   0.0
  5. te-9-3-ur01.b0atlanta.ga.atl 0.0%    1   6.5   6.5   6.5   6.5   0.0
  6. 68.85.232.62                 0.0%    1   6.4   6.4   6.4   6.4   0.0
  7. po-15-ar01.b0atlanta.ga.atla 0.0%    1   7.6   7.6   7.6   7.6   0.0
  8. te-4-1-cr01.atlanta.ga.cbone 0.0%    1   9.0   9.0   9.0   9.0   0.0
  9. te-1-1-cr01.charlotte.nc.cbo 0.0%    1  11.8  11.8  11.8  11.8   0.0
 10. te-1-1-cr01.richmond.va.cbon 0.0%    1  19.5  19.5  19.5  19.5   0.0
 11. te-1-1-cr01.mclean.va.cbone. 0.0%    1  24.0  24.0  24.0  24.0   0.0
 12. te-1-1-cr01.philadelphia.pa. 0.0%    1  25.6  25.6  25.6  25.6   0.0
 13. be-40-crs01.401nbroadst.pa.p 0.0%    1  26.5  26.5  26.5  26.5   0.0
 14. be-50-crs01.ivyland.pa.panjd 0.0%    1  28.8  28.8  28.8  28.8   0.0
 15. po-10-ar01.verona.nj.panjde. 0.0%    1  41.7  41.7  41.7  41.7   0.0
 16. po-10-ar01.eatontown.nj.panj 0.0%    1  33.5  33.5  33.5  33.5   0.0
 17. po-10-ur01.middletown.nj.pan 0.0%    1  34.4  34.4  34.4  34.4   0.0
 18. po-10-ur01.burlington.nj.pan 0.0%    1  48.0  48.0  48.0  48.0   0.0
-Jim P.
On Thu, Jun 12, 2008 at 10:35 PM, Jim Popovitch <yahoo@jimpop.com> wrote:
18. po-10-ur01.burlington.nj.pan 0.0% 1 48.0 48.0 48.0 48.0 0.0
23  114 ms  122 ms  113 ms  ge-0-1-ubr02.pittsburg.ca.sfba.comcast.net [68.87.197.22]

For a while we had access through a Hong Kong-based pirate network.

-M<
On Thu, Jun 12, 2008 at 11:34 PM, Martin Hannigan <hannigan@gmail.com> wrote:
On Thu, Jun 12, 2008 at 10:35 PM, Jim Popovitch <yahoo@jimpop.com> wrote:
18. po-10-ur01.burlington.nj.pan 0.0% 1 48.0 48.0 48.0 48.0 0.0
23  114 ms  122 ms  113 ms  ge-0-1-ubr02.pittsburg.ca.sfba.comcast.net [68.87.197.22]
For a while we had access through a Hong Kong-based pirate network.
Yeah, I've seen similar in traces a few days back; I wondered wtf. -Jim P.
They've been fine in my area (Atlanta), though there was a fair bit of downtime last week. I did, however, notice today that my port 25 blocks are gone... which wasn't the case last week.

On Thu, 12 Jun 2008 18:02:52 -0700, "Thompson, Taeko" <Taeko.Thompson@wizards.com> wrote:
Has anybody heard if Comcast is having problems today?
Thank you, Taeko
Hi Sean,

Since Thursday we have copied some ~300 GB packages from Prague to San Diego (~200 ms delay, 10 GE flat Ethernet, end machines connected via 1 GE) using RBUDP, which worked great.

Each scenario needs some planning. You have to answer several questions:

1) What is the performance of the storage subsystem (sometimes you need to connect external hard drives or a tape robot)?
2) How many files do you need to transfer?
3) How big are these files?
4) What is the consistency scenario (is it file consistency or package consistency)?

For example, I've sent some film data: a lot (~30,000) of 10 MB DPXes. Consistency was package-based. The hard drives were initially connected via iLink (the data arrived on that media), then moved to eSATA (we went to a shop, bought another drive and connected it to the export machine). The main tuning for RBUDP was to buy another hard drive and tar these files.

Regards
Michal
Hi,
I'm looking for input on the best practices for sending large files over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT). I'd like to avoid modifying TCP windows and options on end hosts where possible (I have a lot of them). I've seen products that work as "transfer stations" using "reliable UDP" to get around the windowing problem.
I'm thinking of setting up servers with optimized TCP settings to push big files around data centers but I'm curious to know how others deal with LFN+large transfers.
thanks, Sean
Many thanks for great replies on and off-list. The suggestions basically ranged from these options:

1. tune TCP on all hosts you wish to transfer between
2. create tuned TCP proxies and transfer through those hosts
3. set up a socat (netcat++) proxy and send through this host
4. use an alternative to plain netcat/scp for large file transfers

My needs are pretty simple: occasionally I need to push large database files (300Gb+) around Linux hosts. #4 seems like the best option for me. People suggested a slew of methods to do this: RBUDP, GridFTP, bbcp, and many others, with programs either sending with reliable UDP or breaking large transfers into multiple streams.

Because it was easy to use right away, I tried RBUDP and was able to copy a tarball at about 700Mb/s over a 20ms delay link, which, factoring in destination disk write speed, isn't too bad a starting point. GridFTP and bbcp look very useful too; I'll be exploring them as well.

The presentation links Kevin O. sent were very interesting. I've looked at HPN-SSH before but haven't played with it much. I'll definitely try it out based on the feedback from this thread.

Thanks again.

sk

Sean Knox wrote:
Hi,
I'm looking for input on the best practices for sending large files over a long fat pipe between facilities (gigabit private circuit, ~20ms RTT). I'd like to avoid modifying TCP windows and options on end hosts where possible (I have a lot of them). I've seen products that work as "transfer stations" using "reliable UDP" to get around the windowing problem.
I'm thinking of setting up servers with optimized TCP settings to push big files around data centers but I'm curious to know how others deal with LFN+large transfers.
thanks, Sean
participants (28)
- Bob Bradlee
- Buhrmaster, Gary
- Christian
- Darren Bolding
- David Prall
- Deepak Jain
- ekagan@axsne.com
- Forsaken
- Glen Turner
- goemon@anime.net
- Jay R. Ashworth
- Jim Popovitch
- Kevin Oberman
- Lincoln Dale
- Martin Hannigan
- Matt Palmer
- mdavis@drink-duff.com
- Michal Krsek
- Paul Stewart
- Randy Bush
- Robert Boyle
- Robert E. Seastrom
- Sean Knox
- Steve Pirk
- Steven M. Bellovin
- Stuart Henderson
- Thompson, Taeko
- Tom