High-throughput BGP links using Gentoo + stripped kernel
Hello Everyone,

We are running a Gentoo server on a dual-core Intel Xeon 3060 with 2 GB of RAM and two NICs:

Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
Ethernet controller: Intel Corporation 82573E Gigabit Ethernet Controller (rev 03)

It carries 2 BGP links from different providers using Quagga, iptables, etc.

We are transmitting an average of 700 Mbps with packet sizes upwards of 900-1000 bytes when the traffic graph begins to flatten. We also start experiencing some crashes at that point, and have not been able to pinpoint the cause either.

I was hoping to get some feedback on what else we can strip from the kernel. If you have a similar setup for a stable platform, the .config would be great!

Also, what are your thoughts on migrating to OpenBSD and bgpd? I am not sure there would be a performance increase, but the security would be even stronger.

Kind Regards,

Nick
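For rough context on the numbers above, a back-of-envelope packet rate, assuming an average packet size of about 950 bytes:

    700 Mbit/s / (950 bytes x 8 bits/byte) ~ 92,000 packets per second, per direction

That is modest by hardware-router standards, but enough to hurt a software router if every packet traverses connection tracking or raises its own interrupt.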
Hello Nick,

Your email is pretty generic, so the likelihood of anyone being able to provide any actual help or advice is pretty low. I suggest you check out Vyatta.org; it's an open-source router solution that uses Quagga for its underlying BGP management, and if you want you can purchase a support package for a few grand a year.

Cheers,
Mike

--
Michael McConnell
WINK Streaming;
email: michael@winkstreaming.com
phone: +1 312 281-5433 x 7400
cell: +506 8706-2389
skype: wink-michael
web: http://winkstreaming.com
Hello Michael,

I totally understand that my question is generic in nature. I will definitely take a look at Vyatta and weigh the effort vs. benefit. The purpose of my email is to see how people with similar setups managed to get more out of their system using kernel tweaks or by further stripping their OS. In our case, we are using Gentoo.

Nick.
I had two Dell R3xx 1U servers with quad GigE cards in them and a few small BGP connections for a few years. They were running CentOS 5 + Quagga with a bunch of stuff turned off. Worked extremely well. We also had really small traffic back then.

Server hardware has become amazingly fast under the covers these days. It certainly still can't match an ASIC-based solution from Cisco etc., but it should be able to push several GB of traffic. In HPC storage applications, for example, we have multiple servers with quad 40Gig and IB pushing ~40GB of traffic in fairly large blocks. It's not networking, but it does demonstrate pushing data into daemon applications and back down to the kernel at high rates. Certainly a kernel routing table with no iptables and a small Quagga daemon in the background can push similar.

In other words: get new hardware and design it for the flow.
-- Zach Giles zgiles@gmail.com
What we are having a hard time with right now is finding that "perfect" setup without going the whitebox route. For example, the x3250 M4 has one full-length PCIe gen 3 x8 slot (great!) and one gen 2 x4 (not so good...). The ideal in our case would be a newish x-series server with two full-length gen 3 x8 or even x16 slots in a nice 1U form factor, humming along and able to handle up to 64 GT/s of traffic, firewall and NAT rules included.

Hope this is not considered noise to an old problem; any help is greatly appreciated, and I will keep everyone posted on the final numbers post-upgrade.

N.
Not noise!
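As a rough sanity check on the slot figures being discussed (numbers from the PCIe specifications, not from the thread):

    PCIe 2.0 x4: 5 GT/s x 4 lanes x 8b/10b encoding   ~ 16 Gbit/s (~2 GB/s) per direction
    PCIe 3.0 x8: 8 GT/s x 8 lanes x 128b/130b encoding ~ 63 Gbit/s (~7.9 GB/s) per direction

So even the gen 2 x4 slot has headroom for a quad-gigabit NIC on paper; in practice the limit tends to be per-packet CPU and interrupt cost rather than raw slot bandwidth.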
Hello Nick,
You might be maxing out your server's PCI bus throughput, so it might be a better idea to get Ethernet NICs that sit on at least PCIe x8 slots.

Leaving that aside, I take it you've configured some sort of CPU/PCI interrupt affinity?

As for migration to another OS, I find FreeBSD better in terms of network performance. The last time I checked, OpenBSD was either lacking multi-core support or was in the early stages of it.
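For what it's worth, a minimal sketch of checking and pinning NIC interrupt affinity by hand on Linux; the interface names and IRQ numbers below are illustrative, and irqbalance (if installed) will fight manual settings unless it is stopped:

    # see which IRQs the NICs use and which CPU has been servicing them
    grep -E 'eth0|eth1' /proc/interrupts

    # stop irqbalance so the manual masks stick
    /etc/init.d/irqbalance stop

    # pin eth0's IRQ (say 48) to CPU0 and eth1's (say 49) to CPU1;
    # the value is a hex CPU bitmask: 1 = CPU0, 2 = CPU1
    echo 1 > /proc/irq/48/smp_affinity
    echo 2 > /proc/irq/49/smp_affinity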
On 5/19/13, Nikola Kolev <nikky@mnet.bg> wrote:
You might be maxing out your server's PCI bus throughput, so it might be a better idea if you can get Ethernet NICs that are sitting at least on PCIe x8 slots.
Nikola, thank you so much for your response! It kind of looks that way, and we do have another candidate machine that has a PCIe 3 x8 slot. First thing, I have never liked riser cards, and the candidate IBM x3250 M4 does use them. I am not sure how much of a hit I will take for that. Secondly, are there any proven Intel four-port PCIe 3 cards, preferably PRO/1000?
Leaving that aside, I take it you've configured some sort of CPU/PCI affinity?
For interrupts, we disabled CONFIG_HOTPLUG_CPU in the kernel and assigned interrupts to the less-used core using the APIC. I am not sure if there is anything more we can do?
As for migration to another OS, I find FreeBSD better as a matter of network performance. The last time I checked OpenBSD was either lacking or was in the early stages of multiple cores support.
I know I mentioned migration, but Gentoo has been really good to us, and we have grown really fond of her :). I hope I can tune it further before retiring it as our OS of choice.

Nick.
As for migration to another OS, I find FreeBSD better as a matter of network performance. The last time I checked OpenBSD was either lacking or was in the early stages of multiple cores support.
If you do decide to go the FreeBSD route (you can run OpenBGPD on FreeBSD if you like), check out the POLLING option on Ethernet NICs; it cuts down on the number of interrupts and can increase performance, particularly when dealing with smaller packets.
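For reference, enabling polling on FreeBSD of that era looked roughly like the following; this is a sketch assuming an em(4) interface and a custom kernel, so check polling(4) for the authoritative details:

    # kernel configuration options
    options DEVICE_POLLING
    options HZ=1000        # polling works best with a reasonably high HZ

    # after booting the new kernel, enable polling per interface
    ifconfig em0 polling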
Polling on FreeBSD with modern NICs is discouraged.

--
Eduardo Schoedler
Do you have a source on this? The reason I ask is that any recent documentation I've come across indicates that polling is recommended to reduce the chance of livelock on a running system.
-- Ryan Gard
What recent documentation have you come across? Luigi did the polling stuff more than a decade ago. Polling fixes some issues and seems to cause others.

... JG

--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN)
With 24 million small businesses in the US alone, that's way too many apples.
+1 on the interrupt CPU assignment....

N.

On 5/24/13, Nick Hilliard <nick@foobar.org> wrote:
On 24/05/2013 20:21, Joe Greco wrote:
Luigi did the polling stuff more than a decade ago. Polling fixes some issues and seems to cause others.
interrupt mitigation helps more than polling these days. Make sure you're using modern hardware.
Nick
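A sketch of what interrupt mitigation tuning looks like on Linux with ethtool; the value is illustrative, and not every e1000/e1000e variant exposes every coalescing knob:

    # show current coalescing settings
    ethtool -c eth0

    # make the NIC batch more packets per interrupt
    # (trades a little latency for a lower interrupt rate)
    ethtool -C eth0 rx-usecs 125

The Intel drivers also expose the same idea as a module parameter (e.g. InterruptThrottleRate for e1000e), which is handy when ethtool support is limited.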
This depends a *ton* on what NIC you are using. Polling, IMO, should not be enabled on modern NICs. They use MSI-X to distribute interrupts to each core through the PCI bus to the APIC, as well as various methods to reduce the number of interrupts generated per second. This polling thing only worked well 10 years ago (or if you still have 10-year-old gear).

-Gabe
On Sat, May 18, 2013 at 11:39 AM, Nick Khamis <symack@gmail.com> wrote:
We are transmitting an average of 700 Mbps with packet sizes upwards of 900-1000 bytes when the traffic graph begins to flatten. We also start experiencing some crashes at that point, and have not been able to pinpoint the cause either.
Hi Nick,

You're done. You can buy more recent server hardware and get another small bump. You may be able to tweak interrupt rates from the NICs as well, trading latency for throughput. But basically you're done: you've hit the upper bound of what slow-path (not hardware-assisted) networking can currently do.

Options:

1. Buy equipment with a hardware fast path, such as the higher-end Juniper and Cisco routers.

2. Split the load. Run multiple BGP routers and filter some portion of the /8's on each of them. On your IGP, advertise /8's instead of a default.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
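A rough illustration of option 2 in Quagga's bgpd configuration syntax; the names, ASNs and addresses are made up, and splitting on a /1 boundary is just one way to carve up the /8's:

    ! router A: accept only the low half of the IPv4 table from its upstream
    ip prefix-list LOW-HALF seq 5 permit 0.0.0.0/1 le 24
    !
    router bgp 64512
     neighbor 192.0.2.1 remote-as 64496
     neighbor 192.0.2.1 prefix-list LOW-HALF in
    !
    ! router B would use 128.0.0.0/1 le 24 instead, and each router then
    ! advertises its half (or its set of /8 aggregates) into the IGP
    ! rather than a plain default.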
I think you've misinterpreted his numbers. He's using 1 Gb Ethernet interfaces, so that's 700 Mbit/s. He didn't mention whether he'd done any IP stack tuning, or what sort of crashes he's having... but people have been doing higher bandwidth than this on Linux for years.

----------------------------------------------------------------------
 Jon Lewis, MCP :)           |  I route
                             |  therefore you are
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________
Hey Bill, thanks for your reply!!!! Yeah, option 1... I think we will do whatever it takes to avoid that route. I don't have a good reason for it; it's just preference. Great manufacturers/products etc., we just like the flexibility we get with how things are set up right now. Not to mention the extra rack space!

Option 2 is exactly what we are looking at. But before that, we are looking at upgrading to a PCIe 3 x8 or x16 as mentioned earlier for that "small bump". If we hit a 25% increase in throughput, then that would keep the barracudas in suits at bay. But for now, they are really breathing down my back... :)

N.
Hi Nick,

You might get enough of a bump from something like an HP DL380p Gen8 to saturate your gig-E. I wouldn't bank on stably going any higher than that. And as someone else mentioned, definitely lose conntrack and stateful firewalling. If you need 'em, move 'em to interior boxes that aren't dealing with your main Internet pipe.

If you're up for a challenge, there are specialty NIC cards like the Endace DAG. They're usually used for packet capture, but in principle they have the right kind of hardware fast path (e.g. TCAMs) built in to accomplish what you want to do. Heck of a challenge though. I haven't heard of anybody putting together a white-box fast-path router before.

Regards,
Bill Herrin

--
William D. Herrin ................ herrin@dirtside.com bill@herrin.us
3005 Crane Dr. ...................... Web: <http://bill.herrin.us/>
Falls Church, VA 22042-3004
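On losing conntrack: either build the kernel without it or, if it has to stay loaded for something else, short-circuit it for transit traffic in the raw table. A sketch; whether you can afford to NOTRACK everything depends on what stateful rules remain:

    # skip connection tracking for everything passing through the box
    iptables -t raw -A PREROUTING -j NOTRACK
    iptables -t raw -A OUTPUT -j NOTRACK

    # confirm the tracking table is no longer growing
    cat /proc/sys/net/netfilter/nf_conntrack_count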
This is some fairly ancient hardware, so what you can get out of it will be limited, though GigE should not be impossible.

The usual tricks are to make sure netfilter is not loaded, especially the conntrack/NAT based parts, as those will inspect every flow for state information. Either make sure those parts are compiled out or that the modules/code never load. If you have any iptables/netfilter rules, make sure they are 1) stateless and 2) properly organized (you can't just throw everything into FORWARD and expect it to be performant).

You could try setting IRQ affinity so both ports run on the same core; however, I'm not sure that will help much, as it is still the same cache and distance to memory. On modern NICs you can do tricks like tying the RX of port 1 to the TX of port 2. Probably not on that generation though.

The 82571EB and 82573E are, while old, PCIe hardware, so there should not be any PCI bottlenecks, even with you having to bounce off that stone-age FSB that old CPU has. Not sure how well that generation of Intel NIC silicon does line rate, though.

But really, you should get some newer hardware with on-CPU PCIe and memory controllers (and preferably QPI). That architectural jump really upped the networking throughput of commodity hardware, probably by orders of magnitude (people were doing 40 Gbps routing using standard Linux 5 years ago).

Curious about vmstat output during saturation, and the kernel version too. IPv4 routing changed significantly recently, and IPv6 routing performance also improved somewhat.
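On the "properly organized" point, the usual approach is to dispatch into small per-interface chains early so most packets traverse only a handful of stateless rules instead of one long FORWARD list. A sketch with invented interface roles:

    # one small chain per ingress interface, no -m state / -m conntrack matches
    iptables -N FWD_FROM_UPSTREAM
    iptables -N FWD_FROM_LAN
    iptables -A FORWARD -i eth0 -j FWD_FROM_UPSTREAM
    iptables -A FORWARD -i eth1 -j FWD_FROM_LAN

    # example: drop RFC1918 sources arriving from the upstream side
    iptables -A FWD_FROM_UPSTREAM -s 10.0.0.0/8 -j DROP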
That hardware should be fine to do two gig ports upstream, with another two to go to your network? I'd check with "vmstat 1" to see what your interrupt rate is like; if it's above 40k/sec I'd check coalescing settings.

I also prefer OpenBSD/OpenBGPD myself. It's a simpler configuration, with fewer things to "fix". With Linux you have to disable reverse path filtering and screw around with iptables to bypass stateful filtering. Then Quagga itself can be buggy. (My original reason for shifting away from Linux was that Quagga didn't fix enough of Zebra's bugs... although that was many years ago, and things may have improved a little since then, but in my experience significantly buggy software tends to stay buggy even with fixing.)

With regards to the security of OpenBSD versus Linux, you shouldn't be exposing any services to the world with either. And it's more stability/configuration that would push me to OpenBSD rather than performance.

And with regards to crashing, I'd try to figure out what is happening there quickly before making radical changes. Is it running out of memory? Is Quagga dying? Is there a default route that works when Quagga crashes? One issue I had was Quagga crashing and leaving a whole lot of routes lingering in the table, and I had a script that would go through and purge them.

I'm also a bit confused about your dual upstreams with two Ethernet interfaces total: are they both sharing one pipe, or are there some Broadcom or similar Ethernet interfaces too? I've found Broadcom chipsets can be a bit problematic, and the only stability issue I've ever had with OpenBSD is a Broadcom interface wedging for minutes under DDoS attack, and that was a gigabit-ish speed DDoS with older hardware than yours.

Oh, to check coalescing settings under Linux use: "ethtool -c eth0; ethtool -c eth1"

Ben.
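The Linux-side checks Ben mentions come down to something like this; the 40k/sec figure is his rule of thumb, and disabling rp_filter matters because multihomed BGP traffic is often asymmetric:

    # watch the "in" (interrupts/sec) column while the link is busy
    vmstat 1

    # turn off reverse path filtering
    sysctl -w net.ipv4.conf.all.rp_filter=0
    sysctl -w net.ipv4.conf.default.rp_filter=0

    # interrupt coalescing settings for both ports
    ethtool -c eth0; ethtool -c eth1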
Minor nitpicking, I know...

On 20 May 2013 01:23, Ben wrote:
With Linux you have to disable reverse path filtering and screw around with iptables to bypass stateful filtering.
You don't have to "screw around" with iptables. The kernel won't load the conntrack modules/code unless you actually try to load stateful rulesets.*

rp_filter being on by default, I'd also argue, is the better default setting for the 99% of other use cases :-P

With Quagga I would tend to agree, but like you I have not used it in ages, and things do change for the better over time -- occasionally.

* You CAN configure your kernel to always load it, but that is silly.
On Mon, 2013-05-20 at 11:23 +1200, Ben wrote:
With regards to security of OpenBSD versus Linux, you shouldn't be exposing any services to the world with either. And it's more stability/configuration that would push me to OpenBSD rather than performance.
And with regards to crashing I'd try and figure out what was happening there quickly before making radical changes. Is it running out of memory, is Quagga dying? Is there a default route that works when Quagga crashes? One issue I had was I found Quagga crashing leaving a whole lot of routes lingering in the table, and I had a script that'd go through and purge them.
Hi,

We've been running a small AS with BIRD on Linux (Debian) without any issue in two years of production on two software routers so far: http://bird.network.cz/

It uses less than 100 MB of RAM per IPv4 DFZ; we run around 100 BGP sessions in 350 MB of RAM (process virtual).

Looking glass developed by our members:
http://lg.tetaneutral.net/prefix_bgpmap/gw+h3/ipv4?q=meh.net.nz
http://lg.tetaneutral.net/summary/gw+h3/ipv4

Sincerely,
Laurent
http://tetaneutral.net
http://as197422.peeringdb.com
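For anyone curious, a minimal BIRD 1.x style configuration for a single upstream session looks roughly like this; the router id, ASNs and neighbour address are placeholders, and a real deployment needs proper export filters:

    router id 192.0.2.10;

    protocol kernel {
            export all;          # push BGP-learned routes into the kernel FIB
    }

    protocol device {
    }

    protocol bgp upstream1 {
            local as 64512;
            neighbor 192.0.2.1 as 64496;
            import all;
            export none;         # announce nothing until a filter is written
    }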
I forgot to mention that there is now a paid support programme in case you need it: http://bird.network.cz/?support

Laurent
participants (16)

- Andre Tomt
- Andrew Jones
- Ben
- Eduardo Schoedler
- Gabriel Blanchard
- Joe Greco
- Jon Lewis
- Laurent GUERBY
- Michael McConnell
- Nick Hilliard
- Nick Khamis
- Nikola Kolev
- Phil Fagan
- Ryan Gard
- William Herrin
- Zachary Giles