Hello everyone,

For a cloud project, I am considering equipment based on Pica8 or Cumulus Networks.

The goal is to build a spine-and-leaf architecture:
- Spine: 40 Gbps
- Leaf: 10 Gbps

Does anyone have feedback on this equipment?

Regards,
Yoann THOMAS
CTO - Castle-IT
On Nov 1, 2015, at 23:53, Yoann THOMAS <ythomas@castle-it.fr> wrote:
For a cloud project, I am considering equipment based on Pica8 or Cumulus Networks.
We’ve had some great conversations with Cumulus, but more generally, I think you need to look at the cloud project’s goals. Those should help inform your decision-making process. Specifically, what are your SDN and broader networking needs and use cases?
The goal is to build a spine-and-leaf architecture:
- Spine: 40 Gbps
- Leaf: 10 Gbps
In a new cloud deployment of any size, you probably want more than 10G to the compute servers, especially if you’re carrying storage traffic.
Does anyone have feedback on this equipment?
Regards,
Yoann THOMAS CTO - Castle-IT
Cheers, -j
Yoann THOMAS writes:
For a cloud project, I am considering equipment based on Pica8 or Cumulus Networks.
Ah, quite different beasts. Cumulus Networks tries to really make the switch look like a Linux system with hardware-accelerated forwarding, so you can use stock programs that manipulate routing, e.g. Quagga, and all forwarding between the high-speed ports is done "in hardware". Most other systems, including Pica8, treat the high-speed interfaces as different; you need special software to manipulate the configuration of the forwarding ASIC. I think in the case of Pica8 it's OpenFlow/Open vSwitch; for other systems it will be some sort of ASIC-specific SDK.

A colleague has built a proof-of-concept L3 leaf/spine network (using OSPFv2/OSPFv3, according to local tradition) with six 32x40GE Quanta switches running Cumulus Linux. So far it has been quite pleasant. There have been a few glitches, but those usually get fixed pretty quickly. We configure the switches very much like GNU/Linux servers, in our case using Puppet (Ansible or Chef would work just as well).
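As a minimal sketch of what the "stock Quagga" approach can look like on such a box, purely for illustration: the interface names, router ID and prefix below are invented, not taken from Simon's deployment. On Cumulus Linux the front-panel ports appear as ordinary Linux interfaces (swp1, swp2, ...), so the OSPF part is just a normal /etc/quagga/ospfd.conf:

    ! illustrative ospfd.conf fragment (addresses and port numbers are made up)
    interface swp31
     ip ospf network point-to-point
    interface swp32
     ip ospf network point-to-point
    router ospf
     ospf router-id 10.0.0.11
     network 10.0.0.0/16 area 0.0.0.0

zebra must also be running, and the usual vtysh shell works as on any Quagga install.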
The goal is to build a spine-and-leaf architecture:
- Spine: 40 Gbps
- Leaf: 10 Gbps
One interesting option is to get (e.g. 1RU 32x) 40G switches for both spine and leaf, and connect the servers using 1:4 break-out cables. Fewer SKUs, better port density at the cost of funny cabling. Also gives you a bit more flexibility with respect to uplinks (can have more than 6*40GE per leaf if needed) and downlinks (easy to connect some servers at 40GE).

The new 32*100GE switches also look interesting, but they might still be prohibitively expensive (although you can save on spine count and cabling) unless you NEED the bandwidth or want to build something future-proof. They are even more flexible in that you can drive the ports as 4*10GE, 4*25GE (could be an attractive high-speed option once 25GE server adapters become common), 40GE, 2*50GE, 100GE.

We have looked at Edge-Core and Quanta and they both look pretty solid. I think they are also both used by some of the Web "hypergiants". Others may be just as good - basically it's always the same Broadcom switching silicon (Trident II/II+ in the 40GE, Tomahawk in the 100GE switches) with a bit of glue; there may be subtle differences between vendors in quality, box design, airflow etc.

It's a bit unhealthy that Broadcom is so dominant in this market - but probably not undeserved. There are a few alternative switching chipsets, e.g. Mellanox, Cavium XPliant, that look competitive (at least on paper) and that may be more "open" than Broadcom's. I think both the software vendors (e.g. Cumulus Networks) and the ODMs (Edge-Core, Quanta etc.) are interested in these.

-- Simon.
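One practical detail on the break-out option above, given as a hedged illustration rather than a recipe: on Cumulus Linux the split is declared per front-panel port in /etc/cumulus/ports.conf, and the split ports then show up as swp<N>s0..swp<N>s3. The port numbers and the exact modes supported vary by platform, so check the hardware compatibility list for the specific switch:

    # /etc/cumulus/ports.conf (illustrative fragment)
    1=4x10G    # swp1 becomes swp1s0..swp1s3 via a 1:4 break-out cable
    2=4x10G
    29=40G     # left at 40G, e.g. for spine uplinks
    30=40G
    # changes take effect after restarting switchd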
On Sun, Nov 1, 2015 at 11:53 PM, Yoann THOMAS <ythomas@castle-it.fr> wrote:
Hello everyone,
For a cloud project, I am considering equipment based on Pica8 or Cumulus Networks.
The goal is to build a spine-and-leaf architecture:
- Spine: 40 Gbps
- Leaf: 10 Gbps
Does anyone have feedback on this equipment?
We've had a lot of success running Cumulus gear in cloudy production environments. They're easy to manage with infrastructure automation tools and perform as well as any other switch with the same hardware. There are a few features missing from the current general availability code (VRF is the main one for us), but the guys and gals at Cumulus are pretty responsive to requests for new features and I know that VRF is on its way. The main advantage IMO is the huge price break for 10g and 40g ports when compared with the other major players.

--
Ian Clark
Lead Network Engineer
DreamHost
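For anyone wondering what "easy to manage with infrastructure automation tools" can look like in practice, here is a minimal Ansible sketch; since Cumulus Linux is a Debian-based system reachable over SSH, the stock template and command modules are enough. The inventory group, template path and file contents are assumptions made up for the example:

    # playbook sketch: render interface config on leaf switches and reload it
    - hosts: leaf_switches             # assumed inventory group
      become: true
      tasks:
        - name: render /etc/network/interfaces from a template
          template:
            src: templates/interfaces.j2   # hypothetical Jinja2 template
            dest: /etc/network/interfaces
          notify: reload interfaces
      handlers:
        - name: reload interfaces
          command: ifreload -a             # ifupdown2 reload on Cumulus Linux

The same pattern covers Quagga configuration, users, monitoring agents and so on, which is much of the appeal of the "switch as a server" model.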
An easy way to start with Cumulus is their VirtualBox image, which can be found here: https://cumulusnetworks.com/cumulus-vx/ It makes a small evaluation PoC easy to set up.

On 3 November 2015 at 01:35, Ian Clark <ian.clark@dreamhost.com> wrote:
On Sun, Nov 1, 2015 at 11:53 PM, Yoann THOMAS <ythomas@castle-it.fr> wrote:
Hello everyone,
For a cloud project, I am considering equipment based on Pica8 or Cumulus Networks.
The goal is to build a spine-and-leaf architecture:
- Spine: 40 Gbps
- Leaf: 10 Gbps
Does anyone have feedback on this equipment?
We've had a lot of success running Cumulus gear in cloudy production environments. They're easy to manage with infrastructure automation tools and perform as well as any other switch with the same hardware. There are a few features missing from the current general availability code (VRF is the main one for us), but the guys and gals at Cumulus are pretty responsive to requests for new features and I know that VRF is on its way. The main advantage IMO is the huge price break for 10g and 40g ports when compared with the other major players.
-- Ian Clark Lead Network Engineer DreamHost
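To follow up on the Cumulus VX pointer above: the VirtualBox build is typically distributed as an appliance that can be imported and booted from the command line, roughly as below. The file and VM names are placeholders; use whatever the download from the VX page is actually called, along with the default login credentials documented there:

    # illustrative only; substitute the real file/VM names
    VBoxManage import cumulus-vx.ova --vsys 0 --vmname cumulus-vx-leaf1
    VBoxManage startvm cumulus-vx-leaf1 --type headless

Two or three such VMs wired together on internal networks are enough to model a tiny leaf/spine fabric before touching real hardware.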
participants (5)
- Ian Clark
- James Downs
- Simon Leinen
- Yoann THOMAS
- Yuriy Babenko