My company is looking for ways to improve throughput for data transfers between individual servers. We’re exploring the creation of Etherchannels using multiple server NICs, but Etherchannel doesn’t support per-packet load balancing, which limits traffic between two individual hosts to 1 Gig.

In most of my research I’ve seen 10-GigE used for traffic aggregation and for the “unified fabric” solution that Cisco and others are pushing. I’m interested in knowing whether any of you have implemented 10-GigE at the host level to improve network throughput between individual servers, and what your experience was in doing so.

Thanks in advance,
Jason
As long as it's not a single connection that you're looking to get over 1Gb, etherchannel should actually work. It uses a hash based on (I believe) source and destination IP and port, so it should roughly balance connections between the servers.

The other option, if you're using Linux, is to use balance-rr mode on the bonding driver. That delivers per-packet balancing, and the switch doesn't have to know anything about the bonding. Documentation for the bonding driver is here:
http://www.cyberciti.biz/howto/question/static/linux-ethernet-bonding-driver...

--
Alex Thurlow
Blastro Networks
http://www.blastro.com
http://www.roxwel.com
http://www.yallwire.com
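For reference, a minimal balance-rr setup on a Linux host of that era might look something like the following (the interface names, addressing, and use of ifenslave are assumptions; see the bonding documentation linked above for the authoritative options):

    # /etc/modprobe.conf -- load the bonding driver in round-robin mode,
    # with link monitoring every 100 ms
    alias bond0 bonding
    options bond0 mode=balance-rr miimon=100

    # bring up the aggregate and enslave two physical NICs
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1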
Once upon a time, Alex Thurlow <alex@blastro.com> said:
As long as it's not a single connection that you're looking to get over 1Gb, etherchannel should actually work. It uses a hash based on (I believe) source and destination IP and port, so it should roughly balance connections between the servers.
That depends on the devices on each end. For example, some switches can only hash on MAC addresses, some can look at IPs, and some can look at ports. -- Chris Adams <cmadams@hiwaay.net> Systems and Network Administrator - HiWAAY Internet Services I don't speak for anybody but myself - that's enough trouble.
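To make the hashing point concrete: both ends reduce each frame to a hash over a few header fields and always pick the same member link for the same flow, which is exactly why one large transfer can't exceed a single member's speed. A rough sketch of the idea, assuming a simple modulo scheme (real switches hash in hardware, and the fields vary by platform, as noted above):

    # Illustrative only: the field selection and hash function are
    # platform-specific (MAC addresses, IPs, or L4 ports).
    def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
        flow = hash((src_ip, dst_ip, src_port, dst_port))
        return flow % n_links  # same flow -> same member link, every time

    # A single large transfer between two hosts is one flow, so it always
    # lands on the same 1 Gb member no matter how many links are bundled:
    print(pick_link("10.0.0.1", "10.0.0.2", 33000, 5001, 4))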
It's also host dependent. The switch will hash based on the algorithms it supports, but the balancing can be asymmetrical if the host doesn't support the same ones. Most host OSes don't hash outbound traffic to the switch on layer 4 or above; most only hash at layer 2.

There are 10GE cards for most hardware and OS platforms. Getting them to run at even a fraction of that speed depends on application and IP-stack tuning, and even then there are significant bottlenecks. That's one reason Infiniband has taken off for HPC.

----
Matthew Huff | One Manhattanville Rd
OTA Management LLC | Purchase, NY 10577
http://www.ox.com | Phone: 914-460-4039
aim: matthewbhuff | Fax: 914-460-4139
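As one concrete example of the IP-stack tuning Matthew mentions, the usual first step on Linux is raising the socket buffer ceilings so TCP can keep enough data in flight to fill a 10GE path (the values below are illustrative starting points, not recommendations):

    # /etc/sysctl.conf -- allow larger TCP windows for high-speed paths
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216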
Have you thought about Infiniband? Dual 10-gig cards cost about $50 and 24-port switches about $1200 on eBay. Infiniband has just a fraction of the latency of Ethernet (even 10-GigE). You get the lowest latency if your application supports Infiniband natively, but if not you can run IP over IB.
<> Nathan Stratton CTO, BlinkMind, Inc. nathan at robotics.net nathan at blinkmind.com http://www.robotics.net http://www.blinkmind.com
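For anyone curious what running IP over IB looks like in practice on Linux, it is roughly the following (the interface name and address are assumptions; ib_ipoib is the standard IPoIB kernel module):

    # load the IPoIB module; an ib0 interface appears for the first port
    modprobe ib_ipoib
    ifconfig ib0 192.168.2.10 netmask 255.255.255.0 up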
Athens, GA: I tried to call in a ticket (Metro Ethernet) and was told a master ticket was already in place for my circuits. Other than the ticket number, they wouldn't give me any details. Anybody know anything?

--p
2009/5/1 Nathan Stratton <nathan@robotics.net>:
Have you thought about Infiniband? Dual 10-gig cards cost about $50 and 24-port switches about $1200 on eBay. Infiniband has just a fraction of the latency of Ethernet (even 10-GigE). You get the lowest latency if your application supports Infiniband natively, but if not you can run IP over IB.
Infiniband is a dying technology.
-- [ Rodrick R. Brown ] http://www.rodrickbrown.com http://www.linkedin.com/in/rodrickbrown
Cisco is also trying to push another kind of adapter, the CNA (Converged Network Adapter), which integrates Ethernet and FCoE for use in a "lossless" network. It relies on several proprietary specifications. More information can be obtained here:

http://www.qlogic.com/Products/Datanetworking_products_CNA_QLE8042.aspx
http://www.qlogic.com/SiteCollectionDocuments/Products/Products_RightNAV_pdf...
http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/data_sheet_c78-52...
http://www.emulex.com/products/strategic-direction/oneconnect-universal-cna....
-- []'s Lívio Zanol Puppim
The 10G cards from Neterion (http://www.neterion.com/products/xframeE.html) perform extremely well. Your limiting factor is likely to be CPU and host networking resources rather than the card itself. I agree with many of the other folks that etherchannel should work exceedingly well, but if you want to consolidate, there are many choices of single-system 10G Ethernet cards.

-alan
participants (9)
- Alan Hannan
- Alex Thurlow
- Chris Adams
- Darden, Patrick S.
- Jason Shoemaker
- Livio Zanol Puppim
- Matthew Huff
- Nathan Stratton
- Rodrick Brown