Have been discussing PCs for a bit but have not yet deployed one. As I understand it, a *nix-based PC running Zebra will work pretty well, but has the following constraints:
o) It has no features - not a problem for a lot of purposes
This isn't necessarily a problem for what I have in mind.

It depends. Zebra/Quagga has lots of features; it just may be that these aren't the features you want. In many cases, you can get a developer to implement the features you need for half the cost of a proprietary router.
I would add, more importantly for the nanog audience:
o) lack of unified tools to configure and manage: your average PC router is configured by at least:
 * your distribution-based startup scripts
 * your routing protocol daemon (gated/zebra/etc)
 * linecard-specific tools (ethtool for linux)
 * protocol-specific tools (br2684 for rfc1483 encaps, for example)
 * eb/iptables to configure ACLs (or ipfw/ipf/pf)
 * tc to configure QoS (or ALTQ)
Each of those tools has varying degrees of documentation, a different configuration interface, a vastly different 'status' interface, different support mailing lists, etc. It is much easier for a given organization to find a cisco/juniper/etc expert than to find someone who is experienced with the Linux/FreeBSD network toolchain, and I don't foresee that changing anytime soon.
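To make that concrete, even a trivial "show me everything" glue script ends up shelling out to a different tool with a different output format for every layer (a sketch only; the interface name and the presence of Quagga's vtysh are my assumptions):

    import subprocess

    IFACE = "eth0"
    COMMANDS = [
        ["ip", "-s", "link", "show", IFACE],          # kernel view of the "linecard"
        ["ethtool", "-S", IFACE],                     # driver/NIC counters
        ["iptables", "-L", "-n", "-v"],               # ACLs
        ["tc", "-s", "qdisc", "show", "dev", IFACE],  # QoS
        ["vtysh", "-c", "show ip bgp summary"],       # routing daemon status (Quagga)
    ]

    for cmd in COMMANDS:
        print("###", " ".join(cmd))
        subprocess.run(cmd)                           # each tool, its own output format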
o) On a standard PCI bus your limit is about 350Mb/s; you can increase that to a couple of Gb/s using 64-bit fancy thingies
For connecting to small IXPs and connecting customers, I don't need large amounts of throughput.

64/66 PCI hasn't been fancy for the last 3 years.
On a single-processor P4/3GHz, I already can (and do :) route 400Mbps of DoS-like traffic (one packet per flow, small packets, 400kpps). Of course, this is ridiculously low compared to the current generation of high-end routers; however, it has its niche (and see below for possible scaling).
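For scale, a quick back-of-envelope on the bus and packet numbers above (my own arithmetic, theoretical peaks only, nothing measured):

    # Shared PCI bus bandwidth: width (bits) x clock (Hz).  A software router
    # crosses the bus twice per packet (NIC -> RAM -> NIC), so usable forwarding
    # throughput is well under half of this -- hence roughly 350 Mbit/s on plain PCI.
    for name, width, clock in (("PCI 32/33", 32, 33e6), ("PCI 64/66", 64, 66e6)):
        print(f"{name}: {width * clock / 1e6:,.0f} Mbit/s theoretical")

    # The DoS figure above: 400 kpps adding up to ~400 Mbit/s means packets
    # averaging about 125 bytes, i.e. the box is pps-bound, not bit-bound.
    print(f"average packet: {400e6 / 8 / 400_000:.0f} bytes")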
o) This may be fixed, but I found it slow to update the kernel routing table, which isn't designed to take 120000 routes being added at once
Icky. That could perhaps cause issues if there's a major reconvergence due to an adjacent backbone router failing, etc. It might be okay, though.

Actually, considering the CPU in a common desktop versus the CPU in the RE of a common router (aka "you are reading this email on a machine with a faster CPU than the fastest RE in your network"), PCs can do BGP much faster than "hardware-based" routers (aka "forwarding ASICs don't run BGP"). As a result, BGP on Zebra/Linux can take 100k routes in <10 seconds (I haven't measured it exactly).
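If anyone wants to measure the kernel side of this on their own hardware, something like the following sketch works (assumes Linux with iproute2 and root; the 192.0.2.1 next hop and the 10/8 test prefixes are made up, so adjust for a real connected gateway):

    import subprocess, tempfile, time

    NEXTHOP = "192.0.2.1"            # must be a reachable, directly connected gateway
    N = 100_000                      # roughly a full BGP table's worth of prefixes

    with tempfile.NamedTemporaryFile("w", suffix=".batch", delete=False) as f:
        for i in range(N):
            # unique /32 test routes spread across 10/8
            f.write(f"route add 10.{(i >> 16) & 255}.{(i >> 8) & 255}.{i & 255}/32 via {NEXTHOP}\n")
        batch = f.name

    t0 = time.time()
    subprocess.run(["ip", "-batch", batch], check=True)
    print(f"installed {N} routes in {time.time() - t0:.1f} seconds")

That only times the kernel FIB insert, not the BGP session itself, but it gives a feel for whether 100k routes at once is really the bottleneck.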
o) As it's entirely process-based it will hurt badly in a DoS attack
This is a show-stopper. I need the box to stay up in an attack and be responsive to me whilst I attempt to find the source.

Well, it's not "process-based", but it *is* "flow-based". Yes, performance suffers as the packets-per-flow rate decreases; however, it doesn't suffer as badly as other flow-based devices.
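To see why the packets-per-flow rate is what matters, here is a toy model of a fixed-size flow cache (my own illustration, not the actual Linux route-cache code): with one packet per flow every lookup misses, and the cache only starts paying off once flows carry many packets.

    import random

    def hit_rate(total_pkts, pkts_per_flow, cache_size=8192):
        """Fraction of packets that hit a crude fixed-size flow cache."""
        flows = total_pkts // pkts_per_flow
        stream = [f for f in range(flows) for _ in range(pkts_per_flow)]
        random.shuffle(stream)                 # interleave the flows
        cache, hits = set(), 0
        for flow in stream:
            if flow in cache:
                hits += 1
            else:
                if len(cache) >= cache_size:
                    cache.pop()                # crude eviction of some cached flow
                cache.add(flow)
        return hits / len(stream)

    for ppf in (1, 10, 100):
        print(f"{ppf:>3} packets/flow: {hit_rate(1_000_000, ppf):.0%} cache hits")

One-packet-per-flow DoS traffic is the worst case for any such cache, which matches the behaviour described above.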
I'm not an expert in PC hardware, so I struggle to work out the architecture I need. I'm sure it's possible to build boxes that are optimised for this purpose; however, I'm still not convinced that such a box can keep up with the demands of day-to-day packet switching. I'd like to hear otherwise, though: has anyone deployed a PC with Zebra that could switch a few Gb/s and didn't suffer from latency or jitter, or fail under a DoS?

It is not the Gbps that kill you, it is the pps (and/or new-flows-per-second).
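To put numbers on that (my arithmetic, ignoring Ethernet preamble/framing overhead):

    # Packet rates needed to fill a single 1 Gbit/s link at various packet sizes.
    for size in (64, 512, 1500):                  # bytes per packet
        print(f"{size:>4}-byte packets: {1e9 / (size * 8) / 1e6:.2f} Mpps")

A few Gb/s of minimum-size packets is therefore several Mpps, which is exactly the regime where per-packet interrupt and lookup cost on a PC dominates.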
Getting to 1Mpps on a single router today will probably be hard. However, I've been considering implementing a "clustered router" architecture, which should scale pps more or less linearly with the number of "PCs" or "routing nodes" involved. I'm not sure whether discussion of that is on-topic here, so it may be better to take it offline.
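In case it helps frame the idea, one way that roughly linear scaling could work (purely my speculation about such a design, not a description of anything shipping): hash each flow onto one of N identical routing-node PCs, ECMP-style, so each node only sees about 1/N of the pps.

    import zlib

    NODES = 4                                     # number of PC routing nodes

    def pick_node(src, dst, proto, sport, dport):
        # stable per-flow choice: the same 5-tuple always lands on the same node
        key = f"{src}|{dst}|{proto}|{sport}|{dport}".encode()
        return zlib.crc32(key) % NODES

    print(pick_node("198.51.100.7", "203.0.113.9", "tcp", 51515, 80))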