joel jaeggli wrote:
>> The problem is that the physical layer of 100GE (with 10*10G) and that of 10*10GE are identical (if the same plug and cable are used for both 100GE and 10*10GE).

> Interesting. Well, I would say if there are no technical improvements that will significantly improve performance over the best possible carrier Ethernet bonding implementation, and no cost savings at the physical layer over picking the higher data rate physical layer standard, _after_ considering the increased hardware costs due to newly manufactured components for a standard that is just newer.

> There is a real-estate problem. 10 SFP+ connectors take a lot more space than one QSFP+. MTP/MPO connectors and the associated trunk ribbon cables are a lot more compact than the equivalent 10GbE footprint terminated as LC.
That's why I wrote:
(if the same plug and cable are used for both 100GE and 10*10GE).
As mentioned in the 40G thread, the 24-port 40GE interface module of the Extreme BD X8 can be used as 96 ports of 10GE.
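
To put rough numbers on the real-estate point, here is a minimal sketch comparing discrete SFP+ ports with QSFP+ cages run as 4*10GE breakout. The 48 and 32 cages-per-RU figures are only assumptions about typical 1RU fixed switches, not any particular product; the same arithmetic gives the 24 * 4 = 96 figure for the BD X8 module.

```python
# Rough faceplate arithmetic for the real-estate point. The per-RU cage
# counts are assumptions about typical 1RU fixed switches (e.g. 48 SFP+
# cages or 32 QSFP+ cages), not measurements of any particular product.

SFP_PLUS_CAGES_PER_RU = 48    # assumed discrete 10GE cages in one rack unit
QSFP_PLUS_CAGES_PER_RU = 32   # assumed QSFP+ cages in one rack unit
LANES_PER_QSFP_PLUS = 4       # one QSFP+ carries 4 x 10G lanes

# Discrete SFP+ ports vs. the same QSFP+ cages run as 4*10GE breakout.
print("10GE per RU, discrete SFP+  :", SFP_PLUS_CAGES_PER_RU)
print("10GE per RU, QSFP+ breakout :", QSFP_PLUS_CAGES_PER_RU * LANES_PER_QSFP_PLUS)

# A QSFP+ cage occupies the same faceplate space whether it runs native
# 40GE or 4*10GE breakout, which is the "same plug and cable" assumption.
# The BD X8 figure is the same arithmetic: 24 ports * 4 lanes = 96 x 10GE.
print("24 x 40GE module as breakout:", 24 * LANES_PER_QSFP_PLUS, "x 10GE")
```

In other words, once 10GE is delivered through the same QSFP+ plug and MPO trunk as 40GE/100GE, the faceplate footprint is the same.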
> When you add CWDM, as 40Gb/s LR4 does, the fiber count drops by a lot.
That's also possible with 4*10GE, and 4*10GE is a lot more flexible: it trivially enables a 3*10GE failure mode and allows for very large skew.
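
To make the failure-mode point concrete, here is a minimal sketch of an ordinary hash-based LAG over four 10GE members; the hash function and flow key are illustrative assumptions, not any particular implementation. Because each member is an independently clocked link, one failure just shrinks the bundle to 3*10GE, and there is no deskew budget to respect between members, unlike the four lanes inside a single 40GE PCS.

```python
# Minimal sketch of the failure-mode point, not any particular vendor's
# implementation: a hash-based LAG spreads flows over whichever 10GE
# members are up, so losing one member simply leaves a 3*10GE bundle.
# Members are independent links, so there is no lane-deskew budget
# between them (unlike the four lanes inside a single 40GE PCS).

import zlib

def pick_member(flow, members):
    """Hash a flow key (addresses/ports) onto one of the active members."""
    return members[zlib.crc32(repr(flow).encode()) % len(members)]

members = ["10ge-0", "10ge-1", "10ge-2", "10ge-3"]
flow = ("192.0.2.1", "198.51.100.7", 443)   # illustrative flow key

print("all members up :", pick_member(flow, members))

# One member fails: drop it and keep forwarding over the remaining three.
members.remove("10ge-2")
print("one member down:", pick_member(flow, members))
```

Masataka Ohta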