http://slashdot.org/topic/datacenter/terabit-ethernet-is-dead-for-now/

Terabit Ethernet is Dead, for Now
by Mark Hachman | September 26, 2012

A straw poll of the IEEE's high-speed Ethernet group finds that 400 Gbits/s is almost unanimously preferred.

Sorry, everybody: terabit Ethernet looks like it will have to wait a while longer.

The IEEE 802.3 Industry Connections Higher Speed Ethernet Consensus group met this week in Geneva, Switzerland, with attendees concluding, almost to a man, that 400 Gbits/s should be the next step in the evolution of Ethernet. A straw poll at the meeting's conclusion found that 61 of the 62 attendees who voted supported 400 Gbits/s as the basis for the near-term “call for interest,” or CFI.

The bandwidth call to arms was sounded by a July report from the IEEE, which concluded that, if current trends continue, networks will need to support capacity requirements of 1 terabit per second by 2015 and 10 terabits per second by 2020. By 2015 there will be nearly 15 billion fixed and mobile networked devices and machine-to-machine connections.

The report goes on to predict that global IP traffic will experience a fourfold increase, from 20 exabytes per month in 2010 to 81 exabytes per month in 2015, a 32 percent CAGR. Storage is expected to grow to 7,910 exabytes in 2015, with over half of it accessed via Ethernet. Of course, one of the first places the new, faster Ethernet links will appear is the data center.

With that in mind, the IEEE 802.3 group began formulating a response. However, virtually all attendees seemed to be in agreement before the meeting opened, as only one presentation focused on the feasibility of one-terabit Ethernet, and even it concluded that 400 Gbits/s made more sense in the near term.

Kai Cui and Peter Stassar from Huawei Technologies suggested that the most cost-effective method for developing a 1-terabit Physical Medium Dependent (PMD) would be to leverage today's 100-Gbit technology, which isn't yet shipping in high volume and is therefore not cost-optimized. “[The] cost target for 1Tb/s needs to be at or below 100G cost/bit*sec and required R&D investments should be modest,” they wrote as part of their presentation.

“100GbE technology based architecture would imply 40 lanes at 25G, which clearly would imply impractically big packages and large amount of interface signals,” Cui and Stassar added; a practical design would need to reduce the number of electrical and optical interface lanes to enable a reasonable package size. While alternative modulation formats could be used (5λx200G DP-16QAM, 4 bits/symbol, 25G), “neither the multi-level nor the phase modulation format based technologies have been demonstrated to be sufficiently mature to justify usage in client PMDs towards 100Gb/s to 1Tb/s applications.”

They concluded: “1Tb/s does seem a ‘bridge too far’ at least for the coming 3 to 4 years.”

Chris Cole of optical components maker Finisar presented the case for a 400-Gbit CFI, with backing from Brocade, Cisco, HP, IBM, Intel, Juniper, and Verizon, among others.

Like Huawei's Cui and Stassar, Cole indicated that 400-Gbit Ethernet can reuse 100 GbE building blocks and fits within the existing dense 100 GbE roadmap. Faster data rates would require “exotic” implementations, with higher R&D investment and a longer time to market. “Data rates beyond 400Gb/s require an increasingly impractical number of lanes if 100GbE technology is reused,” he said.
400 Gbits/s also makes more sense than 4×100 Gb/s link aggregation, Cole added, since fewer links to manage means greater management efficiency. Individual link congestion is also a concern: “Without faster links, [the] link count grows exponentially, therefore management pain grows exponentially.”

Cole suggested that a potential 400 Gb/s MAC/PCS ASIC could be fabricated in either 20- or 28-nm CMOS, using a 400-bit-wide bus and a 1-GHz clock rate. “There is a strong desire to reuse 802.3ba, 802.3bj, and 802.3bm technology building blocks,” he said.

That's not to say that terabit Ethernet won't be needed, Cole concluded, or 1.6-terabit Ethernet, at that. The follow-on CFIs could come in three to six years, he said.

The CFI hasn't formally occurred; until it does, nothing has been decided. The CFI will most likely be formalized next month or in November. But at this point, it looks like terabit Ethernet is a dead duck, at least for the near future.
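To make the lane arithmetic in the two presentations concrete, here is a quick back-of-envelope sketch in Python. The 25G-per-lane figure is the one discussed above; the 10G and 50G rates are included for comparison, and the code itself is purely illustrative:

import math

def lanes_needed(aggregate_gbps, per_lane_gbps):
    # Number of electrical/optical lanes needed to reach an aggregate rate.
    return math.ceil(aggregate_gbps / per_lane_gbps)

for rate in (100, 400, 1000):
    for lane in (10, 25, 50):
        print(f"{rate} GbE at {lane}G/lane -> {lanes_needed(rate, lane)} lanes")

# 400 GbE at 25G/lane -> 16 lanes: large, but feasible in one package.
# 1000 GbE at 25G/lane -> 40 lanes: the "impractically big packages and
# large amount of interface signals" that Cui and Stassar describe.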
On Thu, Sep 27, 2012 at 8:51 AM, Eugen Leitl <eugen@leitl.org> wrote:
http://slashdot.org/topic/datacenter/terabit-ethernet-is-dead-for-now/
Terabit Ethernet is Dead, for Now
I recall 40Gbit/s Ethernet being promoted heavily for similar reasons as the ones in this article, but then 100Gbit/s being the technology that actually ended up in most places. Could this be the same thing happening? -- Darius Jahandarie
In a message written on Thu, Sep 27, 2012 at 08:58:09AM -0400, Darius Jahandarie wrote:
I recall 40Gbit/s Ethernet being promoted heavily for similar reasons as the ones in this article, but then 100Gbit/s being the technology that actually ended up in most places. Could this be the same thing happening?
Everything I've read sounds like a repeat of the same broken decision making that happened last time. That is unsurprising, though: the same people are involved.
--
Leo Bicknell - bicknell@ufp.org - CCIE 3440
PGP keys at http://www.ufp.org/~bicknell/
On Thu, Sep 27, 2012 at 6:04 AM, Leo Bicknell <bicknell@ufp.org> wrote:
In a message written on Thu, Sep 27, 2012 at 08:58:09AM -0400, Darius Jahandarie wrote:
I recall 40Gbit/s Ethernet being promoted heavily for similar reasons as the ones in this article, but then 100Gbit/s being the technology that actually ended up in most places. Could this be the same thing happening?
Everything I've read sounds like a repeat of the same broken decision making that happened last time.
That is unsurprising, though: the same people are involved.
If the vendors are saying costs will be prohibitive, are you willing to pay significantly more for the interfaces to make them do it anyway?
--
-george william herbert
george.herbert@gmail.com
On Sep 27, 2012, at 8:58 AM, Darius Jahandarie <djahandarie@gmail.com> wrote:
I recall 40Gbit/s Ethernet being promoted heavily for similar reasons as the ones in this article, but then 100Gbit/s being the technology that actually ended up in most places. Could this be the same thing happening?
I would say yes, except for the physics involved here. Getting the signal done optically is the "easy" part. I'm not concerned if the next step after 100 is 400. It's in the right direction and a fair multiple. There is also a problem in the 100GbE space where the market pricing hasn't yet reached an amount whereby the economics are "close enough" to push people beyond N*10G. - Jared
That problem IMO will only be worse with a 4x speed multiplier over 100G. What premium will anyone be willing to pay to have a single 400G pipe over 4 bonded 100G pipes?
-jim
On Thu, Sep 27, 2012 at 10:07 AM, Jared Mauch <jared@puck.nether.net> wrote:
On Sep 27, 2012, at 8:58 AM, Darius Jahandarie <djahandarie@gmail.com> wrote:
I recall 40Gbit/s Ethernet being promoted heavily for similar reasons as the ones in this article, but then 100Gbit/s being the technology that actually ended up in most places. Could this be the same thing happening?
I would say yes, except for the physics involved here. Getting the signal done optically is the "easy" part.
I'm not concerned if the next step after 100 is 400. It's in the right direction and a fair multiple. There is also a problem in the 100GbE space where the market pricing hasn't yet reached an amount whereby the economics are "close enough" to push people beyond N*10G.
- Jared
On Sep 27, 2012, at 9:26 AM, jim deleskie <deleskie@gmail.com> wrote:
That problem IMO will only be worse with a 4x speed multiplier over 100G. What premium will anyone be willing to pay to have a single 400G pipe over 4 bonded 100G pipes?
When you consider that 10GE is less than 10X the price of Gig-E, which is less than 10X the price of Fast-E (Slow-E by today's standards?), the economics don't really make sense when 40GE costs more than 4 * 10GE and 100GE costs more than 10 * 10GE.

The manufacturers are probably shooting themselves in the foot here: for anyone who does not need anything faster than 10GE (which describes many ISPs, enterprises, etc.), 40/100GE would be a nice luxury if it were cheap enough, but not at a huge premium. This translates into fewer overall sales, which translates into the product becoming "niche", and the parts ending up more expensive for those who do need them (Tier 1 ISPs, CDNs, large Tier 2s, exchanges, etc.). As we all know, margins in many of those types of businesses are not huge in the first place, translating to an even smaller demand for that technology. That smaller demand ultimately translates to fewer dollars available for developing the next generation.

The market doesn't just need faster interlinks, it needs them to be cheaper (per-bit), too!
-Phil
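Phil's per-bit argument is easy to sanity-check with a few lines of Python. The prices below are entirely hypothetical placeholders, not real quotes; only the relationships between them matter:

# Hypothetical optic prices in USD -- illustrative only, not real quotes.
prices = {10: 500, 40: 4000, 100: 15000}  # Gbps -> price

base = prices[10] / 10  # per-Gbps cost of the 10GE baseline
for gbps, price in prices.items():
    per_gbps = price / gbps
    print(f"{gbps}G: ${per_gbps:.0f}/Gbps ({per_gbps / base:.1f}x the 10G per-bit cost)")

# With these placeholder numbers, 40G costs 2x and 100G 3x per bit versus
# 10G -- i.e. 40GE > 4 * 10GE and 100GE > 10 * 10GE in absolute price,
# which is exactly the premium Phil argues nobody wants to pay.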
On Thu, 27 Sep 2012, jim deleskie wrote:
That problem IMO will only be worse with a 4x speed multiplier over 100G. What premium will anyone be willing to pay to have a single 400G pipe over 4 bonded 100G pipes?
I'd say most are not willing to pay any premium at all, but are willing to adopt 4x interface speed when there is parity in cost per bit/s with the bonded technology. I opposed 40GE, but since physics is a lot of the problem here, I think 400GE is preferable to 1TbE. Already we're sitting with platforms whose forwarding performance per slot doesn't really match 100GE nicely; imagine the equivalent problem for 1TbE. By the time this is ready, platforms will be at slightly over 1T per slot, and perhaps it then makes more sense to have 3x400GE instead of 1x1TbE per slot.
--
Mikael Abrahamsson email: swmike@swm.pp.se
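Mikael's slot-capacity mismatch can be put in numbers. A small sketch, assuming a hypothetical line card with 1.2 Tb/s of forwarding capacity per slot:

SLOT_GBPS = 1200  # hypothetical per-slot forwarding capacity

for port_gbps in (100, 400, 1000):
    ports = SLOT_GBPS // port_gbps  # whole ports that fit in the slot
    used = ports * port_gbps        # capacity actually usable
    print(f"{port_gbps}G ports: {ports}/slot, {used} of {SLOT_GBPS} Gb/s used "
          f"({100 * used // SLOT_GBPS}%)")

# 3x400GE fills the 1200 Gb/s slot exactly; 1x1TbE strands 200 Gb/s of
# forwarding capacity -- the "slightly over 1T per slot" mismatch above.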
On Thu, Sep 27, 2012 at 9:41 AM, Mikael Abrahamsson <swmike@swm.pp.se>wrote:
I opposed 40GE, but since physics is a lot of the problem here, I think 400GE is preferable to 1TbE. Already we're sitting with platforms whose forwarding performance per slot doesn't really match 100GE nicely; imagine the equivalent problem for 1TbE. By the time this is ready, platforms will be at slightly over 1T per slot, and perhaps it then makes more sense to have 3x400GE instead of 1x1TbE per slot.
1Tb/s per slot is a reality that will be here sooner than many might realize. I think the bonded vs. native argument can come down to optical bandwidth: if you are limited to N wavelengths on a segment, having faster native interfaces becomes desirable (assuming similar optical bandwidth per unit). -Steve
Jared Mauch wrote:
There is also a problem in the 100GbE space where the market pricing hasn't yet reached an amount whereby the economics are "close enough" to push people beyond N*10G.
The problem is that the physical layers of 100GE (with 10*10G) and 10*10GE are identical (if the same plug and cable are used both for 100GE and 10*10GE). Both 100GE and 10*10GE use trunking; the difference is whether the trunking is done below (100GE) or above (10*10GE) the L2 framing. While 100GE has lower head-of-line (HOL) delay (though that is already negligible with 10GE), 10*10GE is more flexible. Still, under some circumstances, 100GE with 4*25G may become less expensive than 10*10GE. But, as it is unlikely that 1TbE will be 4*250G or that 400GE will be 2*200G, faster Ethernet has little, if any, economic merit. Masataka Ohta
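Ohta's below-vs-above-the-L2-framing distinction is the crux of the thread, and a simplified sketch may help. Real 802.3ba PCS lane distribution and real LAG hashing are both more involved than this; the code only illustrates where the split happens:

# Trunking above L2 (10*10GE LAG): whole frames are hashed onto member
# links, so any single flow is limited to one member's rate.
def lag_pick_member(src_mac, dst_mac, n_members):
    return hash((src_mac, dst_mac)) % n_members

# Trunking below L2 (100GE PCS): the 66-bit blocks of a single frame are
# round-robined across all lanes, so one flow can use the full aggregate
# rate -- at the cost of per-lane deskew logic in the receiver.
def pcs_distribute(blocks, n_lanes):
    lanes = [[] for _ in range(n_lanes)]
    for i, block in enumerate(blocks):
        lanes[i % n_lanes].append(block)
    return lanes

print(lag_pick_member("00:00:5e:00:53:01", "00:00:5e:00:53:02", 10))
print(pcs_distribute(["b0", "b1", "b2", "b3", "b4"], 4))
# [['b0', 'b4'], ['b1'], ['b2'], ['b3']]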
On 9/29/12, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Jared Mauch wrote: ... The problem is that the physical layers of 100GE (with 10*10G) and 10*10GE are identical (if the same plug and cable are used both for 100GE and 10*10GE).
Interesting. Well, I would say: if there are no technical improvements that will significantly improve performance over the best possible carrier Ethernet bonding implementation, and no cost savings at the physical layer over picking the higher-data-rate physical-layer standard, _after_ considering the increased hardware costs due to newly manufactured components for a standard that is just newer. E.g. if no fewer transceivers and no fewer strands of fiber are required, and no shorter wavelength is required, so it doesn't enable you to achieve greater throughput over the same amount of light spectrum on your cabling, and therefore lower cost at sufficient density, then there will probably be fairly little point in having the higher-rate standard exist in the first place, as long as the bonding mechanisms available for the previous standard are good. Just keep bonding together more and more data links in basic units of 10GE until the required throughput capacity has been achieved.

It's not as if a newer 1 Tbit standard will make the bits you send get read at the other end faster than the speed of light. A newer standard does not necessarily mean more reliable, technically better, or more efficient, so it is prudent to consider what is actually achieved that would benefit the networks considered to be potential candidates for implementing the new standard, before actually making it a standard...
--
-JH
On 9/30/12 12:05 PM, Jimmy Hess wrote:
On 9/29/12, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Jared Mauch wrote: ... The problem is that the physical layers of 100GE (with 10*10G) and 10*10GE are identical (if the same plug and cable are used both for 100GE and 10*10GE).
Interesting. Well, I would say: if there are no technical improvements that will significantly improve performance over the best possible carrier Ethernet bonding implementation, and no cost savings at the physical layer over picking the higher-data-rate physical-layer standard, _after_ considering the increased hardware costs due to newly manufactured components for a standard that is just newer.
There is a real-estate problem. Ten SFP+ connectors take a lot more space than one QSFP+. MTP/MPO connectors and the associated trunk ribbon cables are a lot more compact than the equivalent 10GbE footprint terminated as LC. When you add CWDM, as 40Gb/s LR4 does, the fiber count drops by a lot.
E.g. if no fewer transceivers and no fewer strands of fiber are required, and no shorter wavelength is required, so it doesn't enable you to achieve greater throughput over the same amount of light spectrum on your cabling, and therefore lower cost at sufficient density, then there will probably be fairly little point in having the higher-rate standard exist in the first place, as long as the bonding mechanisms available for the previous standard are good.
Just keep bonding together more and more data links in basic units of 10GE until the required throughput capacity has been achieved.
It's not as if a newer 1 Tbit standard will make the bits you send get read at the other end faster than the speed of light. A newer standard does not necessarily mean more reliable, technically better, or more efficient, so it is prudent to consider what is actually achieved that would benefit the networks considered to be potential candidates for implementing the new standard, before actually making it a standard...
-- -JH
joel jaeggli wrote:
The problem is that the physical layers of 100GE (with 10*10G) and 10*10GE are identical (if the same plug and cable are used both for 100GE and 10*10GE).
Interesting. Well, I would say: if there are no technical improvements that will significantly improve performance over the best possible carrier Ethernet bonding implementation, and no cost savings at the physical layer over picking the higher-data-rate physical-layer standard, _after_ considering the increased hardware costs due to newly manufactured components for a standard that is just newer.
There is a real-estate problem. Ten SFP+ connectors take a lot more space than one QSFP+. MTP/MPO connectors and the associated trunk ribbon cables are a lot more compact than the equivalent 10GbE footprint terminated as LC.
That's why I wrote:
(if same plug and cable are used both for 100GE and 10*10GE).
As mentioned in the 40G thread, the 24-port 40GE interface module of the Extreme BD X8 can be used as 96 ports of 10GE.
When you add CWDM, as 40Gb/s LR4 does, the fiber count drops by a lot.
That's also possible with 4*10GE, and 4*10GE is a lot more flexible: it trivially enables a 3*10GE failure mode and allows for very large skew. Masataka Ohta
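The "3*10GE failure mode" Ohta mentions is just the LAG rehashing flows onto the surviving members, which a few hypothetical lines make concrete. (Python's string hash is randomized per run, so the exact split varies; the point is that the bundle keeps running at 30G.)

# A 4x10GE LAG degrades gracefully: lose one member and flows are
# rehashed over the remaining three. A 100GE PMD with a dead lane is
# simply down.
def distribute(flows, members):
    table = {m: [] for m in members}
    for f in flows:
        table[members[hash(f) % len(members)]].append(f)
    return table

flows = [f"flow{i}" for i in range(8)]
print(distribute(flows, ["te0", "te1", "te2", "te3"]))  # all 4 members up
print(distribute(flows, ["te0", "te1", "te3"]))         # te2 failed: 3x10GE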
On 30/09/12 20:05, Jimmy Hess wrote:
On 9/29/12, Masataka Ohta<mohta@necom830.hpcl.titech.ac.jp> wrote:
Jared Mauch wrote: ... The problem is that the physical layers of 100GE (with 10*10G) and 10*10GE are identical (if the same plug and cable are used both for 100GE and 10*10GE).
Interesting. Well, I would say: if there are no technical improvements that will significantly improve performance over the best possible carrier Ethernet bonding implementation, and no cost savings at the physical layer over picking the higher-data-rate physical-layer standard, _after_ considering the increased hardware costs due to newly manufactured components for a standard that is just newer.
E.g. if no fewer transceivers and no fewer strands of fiber are required, and no shorter wavelength is required, so it doesn't enable you to achieve greater throughput over the same amount of light spectrum on your cabling, and therefore lower cost at sufficient density, then there will probably be fairly little point in having the higher-rate standard exist in the first place, as long as the bonding mechanisms available for the previous standard are good.
When you consider 100GBASE-LR4 (with its 4x25G lane structure) there is some efficiency to be gained. ADVA and others now support running each channel on their DWDM muxes at ~28G, to suit carrying 100GBASE-LR4 over four of your existing waves. CFPs with 4xSFP+ tunable optics in the front are out there for this reason.
Once you get your head (and wallet) around that, there becomes a case for running each of your waves at 2.5x the rate they're employed at now. The remaining question is then to decide if that's cheaper than running more fibre. Still a hard one to justify though, I agree.
I've recently seen a presentation from EPF** (by Juniper) that was *very* interesting on the >100G race, from a technical perspective. Well worth hunting that one down if you can, as it details a lot about optic composition in future standards, optic densities/backplanes, etc.
Tom
** I couldn't justify going, but the nerd porn is hard to turn down. :)
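The ~28G per-wave figure falls out of the standard rate arithmetic. The LR4 lane rate below is from 802.3ba; treating the ~28G as an OTU4-wrapped lane is an assumption here, though it is the usual way such waves are carried:

mac_rate = 100.0               # Gb/s, the 100GE MAC rate
pcs_rate = mac_rate * 66 / 64  # 64b/66b line encoding -> 103.125 Gb/s
lr4_lane = pcs_rate / 4        # 25.78125 Gb/s per LR4 lambda

otu4_line = 111.809973568      # Gb/s, OTU4 line rate (OTN framing + FEC)
per_wave = otu4_line / 4       # ~27.95 Gb/s per DWDM wave

print(f"LR4 lane: {lr4_lane} Gb/s; OTN-wrapped wave: {per_wave:.2f} Gb/s")
# LR4 lane: 25.78125 Gb/s; OTN-wrapped wave: 27.95 Gb/s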
Tom Hill wrote:
Once you get your head (and wallet) around that, there becomes a case for running each of your waves at 2.5x the rate they're employed at now. The remaining question is then to decide if that's cheaper than running more fibre.
It depends on the distance between senders and receivers. However, at a certain distance it becomes impossible to use efficient (w.r.t. bits per symbol) encoding, because of the noise of repeated EDFA amplification.
Still a hard one to justify though, I agree.
For a 50Gbps lane it becomes even harder, and for a 100Gbps lane it will likely be impossible.
I've recently seen a presentation from EPF** (by Juniper) that was *very* interesting in the >100G race, from a technical perspective. Well worth hunting that one down if you can, as it details a lot about optic composition in future standards, optic densities/backplanes, etc.
This one? http://www.peering-forum.eu/assets/presentations2012/JunpierEPF7.pdf But, it does not say much about >100G. Masataka Ohta
On 2012-10-01 08:57, Masataka Ohta wrote:
Tom Hill wrote:
Once you get your head (and wallet) around that, there becomes a case for running each of your waves at 2.5x the rate they're employed at now. The remaining question is then to decide if that's cheaper than running more fibre.
It depends on the distance between senders and receivers.
However, at a certain distance it becomes impossible to use efficient (w.r.t. bits per symbol) encoding, because of the noise of repeated EDFA amplification.
<500km not enough? https://www.de-cix.net/news-events/latest-news/news/article/de-cix-chooses-a...
Still a hard one to justify though, I agree.
For a 50Gbps lane it becomes even harder, and for a 100Gbps lane it will likely be impossible.
Tell this to Ciena... ;) If you can afford WaveLogic 3 interfaces for your Nortel^WCiena 6500's, you'll find some pretty impressive things are actually possible, including 100G per 100GHz grid slot over very large distances (think Atlantic-large). Coherence appears to be the secret sauce in pushing the SNR boundaries, though I'm not going to pretend to understand the physics involved; I was just lucky enough to speak to some people who do. :)
I've recently seen a presentation from EPF** (by Juniper) that was *very* interesting in the >100G race, from a technical perspective. Well worth hunting that one down if you can, as it details a lot about optic composition in future standards, optic densities/backplanes, etc.
This one?
http://www.peering-forum.eu/assets/presentations2012/JunpierEPF7.pdf
But, it does not say much about >100G.
Yes, that is the one. Slide #11 is the one I'm referring to, 'Projection of Form Factor Evolution to 400G', which is relevant to the discussion on optic densities and the push above 100G. Tom
On Mon, 1 Oct 2012, tom@ninjabadger.net wrote:
If you can afford WaveLogic 3 interfaces for your Nortel^WCiena 6500's, you'll find some pretty impressive things are actually possible, including 100G per 100GHz grid slot over very large distances (think Atlantic-large).
The amount of processing power and equipment in the transponder needed to achieve this is most likely out of scope for the short-term practical 400GE/1TbE that the IEEE will put into a standard. Serial on/off keying much faster than 25 Gbaud runs into serious physical limitations, as some physical effects increase 4-fold when you blink on/off 2 times as fast. That's why we do not have 100-Gbaud serial 100GE, but 4x25 Gbaud instead.
Coherence appears to be the secret sauce in pushing the SNR boundaries, though I'm not going to pretend to understand the physics involved; I was just lucky enough to speak to some people who do. :)
Yes, for long-haul DWDM systems coherent detection is absolutely the way to go. For short and metro reach at low cost, that is probably going to be a bit further into the future. -- Mikael Abrahamsson email: swmike@swm.pp.se
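Mikael's "increase 4-fold when you blink 2 times as fast" is the classic chromatic-dispersion scaling for direct-detection links: dispersion-limited reach falls off roughly as 1/(symbol rate)^2. A sketch with an illustrative, not measured, 25-Gbaud baseline:

BASELINE_BAUD = 25.0  # Gbaud
BASELINE_KM = 40.0    # hypothetical dispersion-limited reach at 25 Gbaud

for baud in (25.0, 50.0, 100.0):
    # Reach scales with the inverse square of the symbol rate.
    reach = BASELINE_KM * (BASELINE_BAUD / baud) ** 2
    print(f"{baud:>5.0f} Gbaud -> ~{reach:.1f} km dispersion-limited reach")

# Doubling the symbol rate quarters the reach, which is why nobody built
# a 100-Gbaud serial 100GE and why coherent detection (which compensates
# dispersion digitally) wins for long haul.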
tom@ninjabadger.net wrote:
It depends on distance between senders and receivers.
However, at certain distance it becomes impossible to use efficient (w.r.t. bits per symbol) encoding, because of noise of repeated EDFA amplification.
<500km not enough?
https://www.de-cix.net/news-events/latest-news/news/article/de-cix-chooses-a...
As it says, "ADVA Optical Networking's 100G Metro solution is built on 4x28G direct detection technology", and I wrote:
Still, under some circumstances, 100GE with 4*25G may become less expensive than 10*10GE.
So 100GE over 500km could be fine.
For 50Gbps lane, it becomes even harder and, for 100Gbps lane, it will likely to be impossible.
Tell this to Ciena... ;)
If you can afford WaveLogic 3 interfaces for your Nortel^WCiena 6500's, you'll find some pretty impressive things are actually possible, including 100G per 100GHz grid slot over very large distances (think Atlantic-large).
I'm afraid it uses 8 or 4 lanes.
Coherence appears to be the secret sauce in pushing the SNR boundaries,
Just +3dB, which is already counted; nothing more than that.
http://www.peering-forum.eu/assets/presentations2012/JunpierEPF7.pdf
But, it does not say much about >100G.
Yes, that is the one. Slide #11 is the one I'm referring to, 'Projection of Form Factor Evolution to 400G', which is relevant to the discussion on optic densities and the push above 100G.
As I wrote from the beginning: (if the same plug and cable are used both for 100GE and 10*10GE). Physical form factors can be identical between 100GE (10*10G) and 10*10GE. Thus, the point of slide #11 is not a valid counter-argument against my point that trunked 40*10GE or 16*25GE is no worse than a 400GE that is itself actually trunked as 40*10G or 16*25G. While slide #12 mentions 50Gbps per lane, that is too often impossible to be as practical as today's Ethernet. Masataka Ohta
Several good presentations were given at the IEEE meeting in Geneva last week about why we should do 400 GbE and not TbE. You can find them here: http://www.ieee802.org/3/ad_hoc/hse/public/12_09/index.shtml . Greg -- Greg Hankins <ghankins@mindspring.com>
On 9/27/12 5:58 AM, Darius Jahandarie wrote:
On Thu, Sep 27, 2012 at 8:51 AM, Eugen Leitl <eugen@leitl.org> wrote:
http://slashdot.org/topic/datacenter/terabit-ethernet-is-dead-for-now/
Terabit Ethernet is Dead, for Now
I recall 40Gbit/s Ethernet being promoted heavily for similar reasons as the ones in this article, but then 100Gbit/s being the technology that actually ended up in most places. Could this be the same thing happening?
40Gb/s appears to be doing just fine in top-of-rack switches and the datacenter distribution layer. Given that it's in most server NIC roadmaps in the relatively near term, it doesn't have significant barriers to becoming the volume offering of choice. Getting datacenters off of OM3/4 multimode distribution is a long project, however.
On 27/09/2012 14:58, Darius Jahandarie wrote:
I recall 40Gbit/s Ethernet being promoted heavily for similar reasons as the ones in this article, but then 100Gbit/s being the technology that actually ended up in most places. Could this be the same thing happening?
No. The IEEE working group was split between 40GE and 100GE and ended up supporting both. As a result, the vendors had to split time and resources investigating both, which was to the huge detriment of the industry. It's a good thing that they're deciding on a single spec, even if it isn't as fast as some people might like. Nick
If they had rolled out 1000G networks now, I guess we would have to plug in 17 MTP interfaces ;)
HTH,
Dan
CCIE #13685 (RS/Sec/SP)
The CCIE troubleshooting blog: http://dans-net.com
Bring order to your Private VLAN network: http://marathon-networks.com
On Thu, Sep 27, 2012 at 2:51 PM, Eugen Leitl <eugen@leitl.org> wrote:
http://slashdot.org/topic/datacenter/terabit-ethernet-is-dead-for-now/
Terabit Ethernet is Dead, for Now
participants (17): Dan Shechter, Darius Jahandarie, Eugen Leitl, George Herbert, Greg Hankins, Jared Mauch, jim deleskie, Jimmy Hess, joel jaeggli, Leo Bicknell, Masataka Ohta, Mikael Abrahamsson, Nick Hilliard, Rosenthal Phil, Steve Meuse, Tom Hill, tom@ninjabadger.net