Foundry BigIron series
We're considering switching to Foundry BigIrons (probably the 4000) as opposed to Cisco 6500 series switches; we're currently using 7206VXRs. Does anyone have opinions (on or off list) on this product? Looking through the archives, I don't see any discussion of this since about 2001 [1].

(http://www.foundrynet.com/products/l3backbone/bigiron/BigIronx00Datasheet.ht...)

I'm mostly interested in its BGP implementation, as the boxes would be handling our core routing via BGP. (I haven't noticed anyone complaining about BGP problems in the past year or so.) Opinions on the CLI would also be helpful; I'm told it's 95% identical to IOS's CLI. The CLI was another complaint in the thread cited below, so I'm curious whether it has improved since 2001.

I'd be interested in hearing about the comparable Riverstone boxes as well.

[1] thread starting at http://www.merit.edu/mail.archives/nanog/2002-09/msg00050.html

--
"Since when is skepticism un-American? Dissent's not treason but they talk like it's the same..." (Sleater-Kinney - "Combat Rock")
I've gotten some really useful responses off list. Sorry for the extra noise, but I'm going to summarize the responses to the list later today (for the archives). I'm removing names, email addresses, company names and other identifying details in case anyone doesn't want to be quoted publicly, but if you don't want me to quote you at all, please send me a heads-up.

--
"Since when is skepticism un-American? Dissent's not treason but they talk like it's the same..." (Sleater-Kinney - "Combat Rock")
Here are the responses I got so far, trimmed and edited. Thanks once again - I got way more than I bargained for in the way of responses. I also received a response directly from someone at Foundry (along with at least one of the expected emails from $SALES_DROID at $COMPETITOR).

Sorry for the super long post (about 22k so far). I've tried to trim as much as possible (if you respond to this post, PLEASE don't full-quote the entire message). Hopefully this won't start any flamewars! I didn't say any of this stuff, so please don't write me telling me that I'm wrong or asking who wrote a particular comment.
From what I can tell, it sounds like the newer JetCore-based stuff is much less problematic than the IronCore stuff. Overall, it sounds like the CLI is fairly easy to adapt to for someone who has experience with Cisco gear. There are definitely some complaints about the BGP implementation; it's hard to tell whether those issues are resolved with newer hardware / software.
*******
Then take Juniper! Foundry is crap, seriously.

[ And a followup, after request for a more specific answer ]

Foundry is crap; it's end-of-life stuff. They won't support IPv6 in hardware and certainly don't do things at wire speed. Besides that, take a look at some internet exchanges. For example AMS-IX: almost ZERO Foundrys left; they just don't fit the BGP stuff. Most people have already replaced them with either a Cisco or a Juniper. We have several Junipers running, and LOVE them. We also have three Foundrys, and they go out at the end of this year; I am happy to replace them. They just die with the number of users and the traffic we push through those boxes, and it's not THAT much (70,000 customers and 1.5 GB of traffic). Cisco is OK also (we use a couple of 6509s and several 7206VXRs), but if you want to do serious BGP and move real traffic, go for Junipers: rock solid and cheaper than Cisco as well.

*******
You're insane; use the Cat 6500.

[ And a followup, after requesting more information ]

We USED Foundry, but the BGP implementation was horribly broken.

*******
We built a BGP network using them where I worked, in 1999 to 2000 or so. Aside from various problems they had on their WAN links, we had a bunch of BGP problems relating to memory allocation and the like. I wound up filtering routes between all my sites (as I had the same upstream from each). At the time they didn't follow the BGP RFC closely, skipping things like closest IGP hop and using router ID as a route selection criterion. Shortly after I left, they demoted them to layer 2 switches and put in Junipers. I might have some concerns regarding their ability to handle DoS attacks, a la Extreme. Who knows, they may have gotten the kinks out, and they sure were cheap, but my experience was far from perfect. Hope that helps.

*******
Just my experience. The interface, i.e. the command line, is of course almost exactly the same, so you'll have no trouble if you know Cisco. Where I had problems is that the particular BigIron I was working with would lock up hard under higher loads, say 600 to 800 megabits on a gig. I'm not sure if it was load-specific, as the customer using the box was always under high load, but it was known to lock up periodically. I had a 6509 and it only locked up once, as the result of what turned out to be a counter growing too large.

*******
We use Foundry and Riverstone boxes to run our ISP and have had wonderful success with them. The Riverstone CLI is a bit unwieldy, but the hardware rocks. We mainly use Foundry BigIron and NetIron boxes at the core, and for the most part they are great. Foundry's tech support is also first class, and you just can't beat the speed.

*******
Stay away from Riverstone. My upstream (GNAPS) just dumped them all: majorly unstable, and they take forever to reboot when they crash. They switched to Junipers (got a bunch of used M20s) and it's been rock solid since!

*******
We use Foundry here and are switching to a combination of Foundry and Juniper to meet our needs. The main recommendation I have is to avoid the IronCore-based products (older stuff) and go with the newer, JetCore-based products. The older stuff does not support NetFlow or sFlow, nor a few other BGP features that are really "must-have" in my opinion.

*******
I would advise against it. I had some pretty extensive experience with this platform running a content network I left about 4 months ago. Some of the software implementations on their newer hardware may be better, but the BI4000 code we were running (their semi-latest) was totally bug-ridden.
Their BGP implementation was not at all stable, and their OSPF wasn't much better. They would drop routes for no reason, and it would require kicking BGP or OSPF to fix the problem, in some cases removing it from the config and re-applying it. The boxes would randomly reload when applying an ACL change, but with no consistency. We also had a lot of bad luck with their hardware; we had a lot of the optics go bad on their gig blades and sup modules. We were swapping hardware on an almost weekly basis. I think it was a bad manufacturing run. That's one of the problems with buying big chunks of hardware all at once: if you get a bad one, you have to suspect all of them. It was a pretty bad scene. I'm not the biggest fan of the C company, but I was trying like hell to replace Foundry with them before I left. The only disclaimer being that they may have gotten better lately, but I wouldn't bet a network on it. The one good thing they have is a great price point, but as an engineer you will pay for it in other ways.

*******
FWIW, I like the FDRY gear. I ran some tests in the lab ~2 years ago between BigIrons, Cat 6509/Sup1A/MSFC2 (aka OSR) and the BlackDiamonds. We vetoed the Extreme gear because everyone hated the CLI (it really sucks). The BigIron smoked the Cisco gear hands-down for L3 forwarding, and we had no issues with OSPF between them and our GSRs, 6509s, 3600s and Nokia IP650s. We had planned to use them as core LAN switches, so we didn't test BGP. I had heard that there were a lot of problems with BGP way back, but from what I understand Exodus worked a lot of those bugs out with Foundry, since they run a lot of them. If Exodus is using them without major problems, I'd say the issues were resolved. The FDRY CLI is almost identical to IOS, except that you can do non-config-mode commands while in config mode, unlike IOS. The learning curve was very small for all our guys. I've not played with the Riverstone gear, but have heard from people that it is quite nice. I know Telseon uses them exclusively for customer aggregation; we had gigE and fastE transport from our campus to the datacenters and to a remote office in SLC and had no problems related to the Riverstone gear. In fact they had one switch in our datacenter and didn't bother with a redundant unit; they trusted them that much. Once again, this was L2 transport, not BGP. Hopefully some of this is helpful even without any BGP info. ;) It is also worth noting that Foundry gave us excellent support in our testing, implementation and debugging. Their ASICs are outstanding; the "life of a packet" through the switch and back out was the simplest and most logical of all the switches we tested, with one table for everything (port adjacency, MAC, IP). Cisco's is by far the worst, requiring ARP, CAM and MLS to do L3.

*******
There is a good reason why you haven't seen anything on it: the Foundry boxes are crap. They are very unstable and have odd problems that you never see on Cisco boxes. What exactly are you looking to do that would cause you to move away from Cisco?

*******
It's been a while since I used the Foundry line. My experience is very positive. I can only speak highly of them.

*******
My issues are along the same lines that Joel Perez mentioned in his reply regarding issues with the Riverstone boxes. Foundry is great, but their boxes are not able to keep up with what we are doing.

*******
Anyway, we had some BigIrons as recently as a few months ago doing BGP on our network. My recommendation: STAY AWAY. When it works, it works fine...
but when there's a problem (particularly with upstream providers) it's a bitch to figure out. Not to mention Foundry's continual layer 3 issues. Some of the recent worms brought our Foundry boxes to their knees, while our GSRs and 6509s just kept on ticking. No experience with Riverstone, other than as static-routing CPE.

*******
We use the Foundry NetIrons and some of the BigIrons for a VoIP bearer network. We have had some problems in the past with their implementation of BGP on these, but I think it was mostly due to us overdriving the box (i.e. we were trying to run BGP on a NetIron that cannot handle the CPU and memory load). When we first started using these boxes a while back, we had some weird issues where copying and pasting too many lines of config into the CLI would cause it to lock up. AFAIK, all of these issues have been resolved in the more recent versions of code. We are not using these on our backbone, and therefore we do not have full BGP routing tables on them, but for the ~30 /24s we have running through them they work quite well. Personally, I recommended to our design team about a year ago that we make these purely switches and let our Juniper M-series routers make all the routing decisions, but my opinion and $.50 won't even buy a soda these days. :) Hope this helps. Let me know if you have other questions I can answer.

*******
We at [ Large German ISP ] use the Foundry BigIron 8000 and 15000 series at our datacenter. With almost two years of experience I really can say: DON'T SWITCH TO FOUNDRY!!! The hardware is OK, but they are not able to implement quick workarounds in adequate time, as they implement all the great new features directly in their ASICs. That's a great plus for network performance, but an even bigger minus for adding features the customer wants to have. Features we would like to have, like NetFlow and IPv6 support, are only supported in their enormously expensive Management 4 modules or Velocity Management Module series, and with both you lose one plane of active ports: the modules are packed with processors, so there is no space left for ports. The routing table is limited to 256,000 entries (!!), with no dynamic allocation possible. By default you can have something like 32 subnets per interface; after some configuration and some reboots you might expand this to 128 per interface. No VLAN subinterfaces are supported, only global VLAN interfaces with virtual router interfaces. The CLI is SORT OF Cisco-like, at least for an easy switch from Cisco hardware to Foundry. It looks the same, but behaves completely differently; all types of Ethernet interfaces (Ethernet, Fast Ethernet, Gigabit Ethernet) are just named "ethernet x/y" within the CLI, which is really confusing. They've got some pros, but I've found more cons so far.

*******
We have Foundry BigIron 4000s where I work. We run BGP/OSPF with several VLANs and use VRRP-E for failover. The interface is very similar to Cisco's. You can run "show" commands while in config mode, which is nice; Cisco now has that feature with the "do" command and Juniper does it with "run". If you know Cisco, you can navigate a Foundry. :) We use prefix-lists for customer filtering and route-maps for anything more fancy. I haven't had any problems with the BGP operation. Problems we did have:

o Newer code than what we run broke VRRP-E. There have been other releases since then, but we haven't upgraded.
o "show ip route | inc" crashed the router. Newer code is supposed to fix that.
o ASIC limitations on the IronCore blades. I would get JetCore.
o Lack of NetFlow sampling on IronCore. We don't use it for billing, but for traffic analysis.
o Lack of uRPF. Something I would like to see.

Overall we are happy with the Foundries. Besides the VRRP-E disaster when we tried an upgrade, they just work. Of course, I do not do anything fancy with them. :) Mind sending a summary to the list? [ Hence, this summary ]
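[ For illustration only - not from any respondent: a minimal sketch of the kind of prefix-list / route-map customer filtering described above, in Cisco IOS-style syntax (Foundry's is close but not identical; for instance, the ASN goes on a separate "local-as" line under "router bgp"). The names, addresses and AS numbers are made up; verify the exact syntax against your own platform and code revision. ]

    ! permit only the customer's assigned block; anything not matched
    ! falls through to the prefix-list's implicit deny
    ip prefix-list CUST1-IN seq 5 permit 192.0.2.0/24
    !
    ! route-map for "anything more fancy", e.g. setting local-preference;
    ! routes matching no permit clause are dropped
    route-map CUST1-POLICY permit 10
     match ip address prefix-list CUST1-IN
     set local-preference 110
    !
    router bgp 64496
     neighbor 198.51.100.1 remote-as 64511
     neighbor 198.51.100.1 route-map CUST1-POLICY in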
*******
Be sure that whatever you buy doesn't do any route-caching, but rather builds a complete forwarding table in advance, a la Cisco's CEF. Very many layer 3 boxes will fall down in the face of a random-source flood or a high-packet-rate random-destination worm. I had BigIrons and ServerIrons in production for a while. I found a bunch of small problems, such as: http://www.securityfocus.com/archive/1/144511 A mention of a simple exploit to take down the whole box was buried in the release notes, rather than announced to customers as an important patch. I stopped using Foundry after many incidents like this. They seemed more concerned about PR than the health of my network. And their claims of "line speed" were bogus. I bought most of a million dollars worth of Foundry gear, then discovered that the original line cards would cache the results of ACLs based on source IP, so if you could send a packet that would pass an ACL, you could then send any packet from the same IP and it would also pass. Later a switch was added to disable this, but then all packets got sent through the management module and the thing crawled. I had the option of replacing management modules and line cards, at my expense, with newer versions, which didn't exactly sit well with me.
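[ For illustration only - not from any respondent: on the Cisco side, the "complete forwarding table built in advance" behaviour referred to above is CEF. A minimal IOS sketch of enabling and checking it follows; defaults and output vary by platform and release. ]

    ! enable Cisco Express Forwarding globally (already on by default
    ! on most modern IOS images)
    configure terminal
     ip cef
    end
    !
    ! confirm a full FIB is built in advance rather than a route cache
    ! populated on demand
    show ip cef summary
    show ip route summary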
*******
We are a small provider in the Netherlands. We use a 4000 as our core router. It outperforms most other routers we tested, including Juniper and the Cisco 12K. We have 3 connections: 2 transit and 1 national peering. It really satisfies our needs. [ Link to stats snipped ] We had a crash last week (warm reset) for the first time in over a year; that's when we upgraded the OS to the latest. Before that, we experienced a spontaneous reboot every couple of months or so.

*******
It is my understanding that the Foundrys have a tendency to lock up from time to time and have to be reset (as far as BGP, anyway). Personally, I say get a used 7206 or buy a 7603 (essentially the same box, but the 7200 is no longer sold by Cisco).

*******
I have seen the MSFC2 with the Sup1A fall over at around 500M. We are currently using one of those and one Sup720 with MSFC3 in the 6500 series. We peaked so far this football season at 1.55 Gig outbound. We were seeing issues with the 720/MSFC3, which is the beefiest router in the 6500 series; we could not run on just that, so we had to balance our traffic (layer 2 and 3) within our core. Our provider uses a combination of Foundry NetIron/BigIron and Cisco GSRs. I personally like Foundry: we use ServerIrons for load balancing, and we evaluated the BigIrons when we were doing our upgrades. We all voted for Foundry; our finance dept took the two bids and chose Cisco. Cisco is OK, I suppose. The 6509s have a lot of incompatibility issues, especially with servers and NIC drivers. Not sure if that will come into play in your design.

*******
Here are some notes: [ a small config sketch illustrating items 1 and 4 follows this list ]

1: If you implement, make sure to add "router bgp ignore-invalid-confed-as-path", so that if you do end up getting an invalid route, it will be rejected as opposed to resetting the peer.

2: Foundry will always point you to the latest code. You will be doing frequent code upgrades to fix the bugs you have.

2a: The latest code is not up on their site, and you will need to talk to TAC to get it. The FTP site also does not work.

2b: [ omitted by request ]

3: There is no user group where you can talk to other Foundry users and get their input.

4: Trunking and tagging have different meanings in Cisco vs. Foundry. In Foundry, a "trunk" means a Cisco channel, and a Foundry "tagged" link is a Cisco trunked link. The CLI is very close, but there are things that will get you: in Foundry, "port-name" equals "description", and "disable"/"enable" equals "shutdown"/"no shutdown".

5: There are bugs that continually show up across revisions (that are not fixed correctly, or are not ported back to the version you are running).

6: Buy extra GBICs.
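[ For illustration only - not from any respondent: a rough Foundry-style sketch of items 1 and 4 above. The AS, interface and VLAN numbers are made up, and the spelling of the confederation knob is taken straight from the note above; check all of it against your own code revision before relying on it. ]

    router bgp
     local-as 64496
     ! item 1: reject routes carrying an invalid confederation AS path
     ! instead of resetting the whole peer
     ignore-invalid-confed-as-path
    !
    interface ethernet 3/1
     ! Foundry "port-name" = Cisco "description"
     port-name uplink-to-core
     ! Foundry "enable" / "disable" = Cisco "no shutdown" / "shutdown"
     enable
    !
    ! item 4: a Foundry "trunk" is a Cisco EtherChannel, while a tagged
    ! Foundry port is what Cisco calls an 802.1Q trunk port
    vlan 100
     tagged ethernet 3/1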
*******
I've got 4 BigIron 4000s (with the newer JetCore chipset) speaking BGP, and a fair number of FES4802s/FI4Ks and older IronCore BigIrons doing non-BGP things: speaking OSPF, spanning tree, 802.1q VLANs. We've been very happy with them, the ACL support is great, and Foundry has been very responsive to the few issues we've had. My BGP configuration is simple, just four neighbors and limited route-mapping, but we do a fair bit with ACLs, rate limiting and sFlow. Their CLI is Cisco-ish, but not a perfect emulation. It suits me fine, but I could see how some people could find the differences annoying. I'd certainly recommend that you only get products using the JetCore-level chipset (i.e. all their *current* BI/FI/NI/FES products), as their jumbogram support, improved ACL support, sFlow support, and likelihood of IPv6 support (right now only in the special IPv6 NetIron product) are the biggest reasons to use FDRY at all.

*******
I just got done migrating two datacenters away from using BigIron 4000s at the core (speaking BGP with our own backbone), primarily because the BIs couldn't handle massive SYN floods (100 kpps or more) without BGP going haywire, or even, a few times, just a flood of UDP packets, even though this is WELL below the stated performance expected according to the Foundry sales literature. It happened with the flood traffic sourced either inside or outside. This was with the IronCore; we opted out of the JetCore upgrade (Foundry support's "fix" for the ticket we opened) and replaced the BI4000s with 6500 series switches (equipped with the now out-of-fashion Sup2/MSFC2/PFC2), which barely notice if there is some kind of (D)DoS underway. I had to rewrite all my tools for detecting them! :-)

*******
The BigIrons are nice boxes. Just remember that they act like a switch first, then a router, if you run the router code. They are real solid boxes, and I never had an issue with the hardware for two years, but the code is a little buggy. I have forgotten the revision they are on, but ask them for a bug list for the currently recommended code. Most of the issues can be worked around, and they are pretty responsive about releasing patches. Go with the JetCore line cards if you can; they give you more diagnostic capabilities.

*******
We've got two Foundry BigIron 4000s working as core routers. They work really well, if you *only* want to route IPv4 traffic. IPv6 is a "no-no"; it doesn't exist and won't be implemented in the near future, AFAIK. MPLS also doesn't work on the BigIron series. If you're looking for routers, have a look at the NetIron series. We installed the BigIron series because they also worked as core switches at the same time. But as our BigIrons aren't that flexible, and upgrades of the management interfaces are very expensive, we are now changing to Riverstone gear. Yesterday I got an RS3000 router; I'll have a look at it next. People told me that Riverstone routers are as powerful as Juniper's backbone routers but really cheap, cheaper than Cisco gear ;-) Maybe you could use your BigIrons as core switches and some Riverstone gear as your core router, or Riverstone XGS or ES as core switches.
> I'm mostly interested in its BGP implementation, as the boxes would be handling our core routing via BGP.
The implementation is rock-solid, but you can only route IPv4 traffic and have no chance to play around with MPLS.
> (I haven't noticed anyone complaining about BGP problems in the past year or so); also opinions on the CLI would be helpful (I'm told it's 95% identical to IOS's CLI). The CLI was another complaint in the thread cited below, so I'm curious if it has improved since 2001.
The CLI is very Cisco-like. You'll see some differences, but you'll "accept" them; no need to think differently ;-)
> I'd be interested in hearing about the comparable Riverstone boxes as well.
I can't say as much at this time, but maybe you can get a test box from Riverstone Networks. The RS3000 seems to be good for us; I'll test it in the next few days. They definitely aren't as expensive as Foundry, Juniper or Cisco gear, and they do as much as they can within ASICs.

*******
We do not use BGP at all in our environment, but the problems that we have had with our Foundry units have been with the units themselves, specifically the chassis. We had a couple of the first chassis that Foundry ran off the factory line (their serial numbers were <100). The chassis had a habit of killing blades that were installed in it. By the time we got Foundry to replace the chassis, we had replaced a total of 7 blades (within 3 years). We also had problems where the units would spontaneously stop moving data: the interfaces on the box would always show clean information, but the box just wouldn't pass packets from one interface to another. A reboot of the box would always clear up the problem. Another problem we experienced (and still do) is that the box occasionally drops our AppleTalk zones, thus stopping all AppleTalk traffic from routing. Since the chassis was replaced, the only problem we still experience is the AppleTalk zone disappearance problem, but we are removing AppleTalk from our network anyway, so it's mostly a non-issue. Basically the whole experience has soured me on Foundry.

*******
[ Note - I am only quoting a portion of this message at the request of the sender ]

[....] If you buy a BI4K, be sure you're getting the newer JetCore cards, not the old IronCore stuff. There are some big performance differences. They've been pretty stable, with only a couple of surprise reloads. ACLs can be a problem, particularly outbound under heavy utilization, e.g. Slammer; under those circumstances they may leak. They also appear to use a disproportionate amount of CAM/TCAM under some circumstances, which I suspect is due to a bug that hasn't been reported because nobody else noticed the CAM consumption. Always check with your SE before trying any code upgrade. The Foundry support site rarely has the latest rev (e.g. they'll post 07.6.02, but not 07.6.02e, which contains a fix for the crash you'll get with 07.6.02). Might be obvious, but test every code rev thoroughly. Foundry's software QA seems to be on holiday much of the time, and previously fixed bugs creep back in. [....]

[...] I'd summarize it as: good feature set for most users, not a kitchen sink like Cisco, good marketing and pricing, and hardware that seems to have decent capabilities, but the code base seems to have a surprising number of unexpected interdependencies between features, QA is poor, and support is iffy; you should expect to keep after them. Due to the bugs/QA/support issues, we're seriously looking at switching to c6506 boxes. We eval'd JetCore upgrades, but still had unresolved issues even after an eval extension.

*******
We run all Foundry at the moment. It works, but it depends on what you want to do with it. We are switching to Juniper and going to move the Foundries to just layer 2.

--
"Since when is skepticism un-American? Dissent's not treason but they talk like it's the same..." (Sleater-Kinney - "Combat Rock")