All the responses have been really helpful. Thanks to everyone for being friendly and for taking the time to answer in detail. I've asked a hardware provider to quote for a couple of x86 boxes, and I'll look for suitable Intel NICs too.

Jim: we're a very small ISP with a full mix of packet sizes on the network, but the vast majority is outbound on port 80, so hopefully that'll help.

Any more input will of course be considered. I may post the NIC models for approval if I'm scratching my head again :)

Thanks,
Chris

2008/12/17 Jim Shankland <nanog@shankland.org>
Chris wrote:
Hi All,

Sorry if this is a repeat topic. I've done a fair bit of trawling but can't find anything concrete to base decisions on.
I'm hoping someone can offer some advice on suitable hardware and kernel tweaks for using Linux as a router running bgpd via Quagga. We do this at the moment and our box handles traffic below the 100 Mbps level very effectively. Over the next year, however, we expect to push about 250 Mbps of outbound traffic with very little inbound (around 50 Mbps simultaneously), and I'm seeing differing suggestions about what to do in order to move up to the 1 Gbps level.
As somebody else said, it's more pps than bits you need to worry about. The Intel NICs can do a full gigabit without any difficulty, if packet size is large enough. But they buckle somewhere around 300 Kpps, and 300K 100-byte packets per second is only 240 Mb/s. On the other hand, you mentioned your traffic is mostly outbound, which makes me think you might be a content provider. In that case, you'll know what your average packet size is -- and it should be a lot bigger than 100 bytes. For that type of traffic, using a Linux router up to, say, 1.5-2 Gb/s is pretty trivial. You can do more than that, too, but you have to start being a lot more careful about hardware selection, tuning, etc.
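For anyone who wants to sanity-check the arithmetic, here's a rough back-of-the-envelope calculation. The 300 Kpps ceiling is just the figure quoted above, not a measured number, and the packet sizes are arbitrary examples:

    # Back-of-the-envelope check: with a fixed packets-per-second ceiling,
    # achievable throughput scales linearly with average packet size.
    # 300 Kpps is the rough per-box limit mentioned above.

    PPS_CEILING = 300_000

    def throughput_mbps(avg_packet_bytes, pps=PPS_CEILING):
        # packets/s * bytes/packet * 8 bits/byte, expressed in Mb/s
        return pps * avg_packet_bytes * 8 / 1_000_000

    for size in (100, 500, 1000, 1500):
        print("%5d-byte packets: %6.0f Mb/s" % (size, throughput_mbps(size)))

    # 100-byte packets  ->  240 Mb/s (the figure quoted above)
    # 1500-byte packets -> 3600 Mb/s (the wire becomes the limit first)

In other words, for large-packet traffic the gigabit wire runs out long before the packet-rate ceiling does.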
The other issue is the number of concurrent flows. The actual route table size is unimportant -- it's the size of the route cache that matters. Unfortunately, I have no figures here. But I did once convert a router from limited routes (quagga, 10K routes) to full routes (I think about 200K routes at the time), with absolutely no measurable impact. There were only a few thousand concurrent flows, and that number did not change -- and that's the one that might have made a difference.
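If you want to see how big the route cache actually is on a given box, something like this sketch should work on kernels of that era. It assumes the old IPv4 route cache and the /proc/net/stat/rt_cache layout (a header line of field names followed by one row of hex counters per CPU, with the first column, "entries", repeating the cache-wide total); adjust if your kernel differs:

    # Rough sketch: report the number of entries in the kernel's IPv4
    # route cache by parsing /proc/net/stat/rt_cache. The field layout
    # is assumed from kernels of this era -- treat it as a guess.

    def route_cache_entries(path="/proc/net/stat/rt_cache"):
        with open(path) as f:
            fields = f.readline().split()      # header: counter names
            first_row = f.readline().split()   # hex counters for CPU 0
        stats = dict(zip(fields, (int(v, 16) for v in first_row)))
        return stats["entries"]                # cache-wide total

    print("route cache entries:", route_cache_entries())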
I hope this is helpful.
Jim