Hopefully this gets through this time. The last time I sent it (Sept 30) I received the bounce below, and no one from Merit has responded yet to my email:

<nanog@merit.edu>: Command died with status 2: "/private/majordomo/wrapper resend -l nanog -h merit.edu -s nanog-outgoing". Command output: Can't locate getopts.pl in @INC (@INC contains: /usr/local/perl-5.004_04/lib/sun4-solaris/5.00404 /usr/local/perl-5.004_04/lib /usr/local/perl-5.004_04/lib/site_perl/sun4-solaris /usr/local/perl-5.004_04/lib/site_perl .) at /private/majordomo-1.94.4/resend line 74.
Alex Rudnev observed,
Folks, why is everyone talking about gigabit traffic for firewalls?
Usually a firewall stands between the intranet and the Internet, so it only has to handle your upstream traffic, no more. And then, it's important to measure throughput in packets per second, not in gigabits.
Everything else is true; I suspect that no firewall, however good, can process gigabit traffic at all, and only a few specially designed boxes can process 100 Mbit traffic. But then again, it's a rare case when you have a 100 Mbit upstream link.
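Alex's packets-per-second point is worth quantifying. Here is a quick sketch in Python; the 20-byte preamble/inter-frame gap and the 64/1518-byte frame sizes are standard Ethernet figures, my numbers rather than anything from Alex's post:

    # Upper bound on packets/second for a given link speed.
    # frame_bytes includes the Ethernet header and FCS; the extra
    # 20 bytes are the preamble plus inter-frame gap.
    def max_pps(link_bps, frame_bytes):
        return link_bps / ((frame_bytes + 20) * 8)

    print(round(max_pps(100e6, 64)))    # minimum-size frames: ~148810 pps
    print(round(max_pps(100e6, 1518)))  # full-size frames:    ~8127 pps

The per-packet load differs by a factor of about 18 between minimum-size and full-size frames, which is why two boxes with the same Mbit/s rating can behave very differently under real traffic.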
Super Firewalls! http://www.data.com/issue/990521/firewalls.html

Almost all hit 72 Mbit/s or more whether NAT was disabled or enabled. To quote from the article:

   Once again we started with a baseline test. With no firewall on the test bed, we achieved TCP forwarding rates of 15.6 Mbyte/s over each four-minute run of the test scripts; this works out to nearly 125 Mbit/s. So why didn't we get the 200-Mbit/s theoretical maximum of a switched, full-duplex test bed? First and foremost, our measurements are taken at the application layer. It's likely that packet headers and the continuous opening and closing of SMTP connections took a bite out of the effective data rate. Second, clients offered lots of traffic that the rule sets expressly denied, so at least some of the time the wire was occupied carrying traffic that wouldn't be forwarded. Third, there may have been a firepower limitation in the amount of traffic our clients and servers could offer. We can't say for certain how much the degradation is due to application overhead, and how much to test platform limitations.

I would think that 200 Mbit/s is achievable without much effort, and perhaps the next round of testing will prove it.

-Hank
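P.S. A rough sketch of how much of the 125 vs. 200 Mbit/s gap plain header overhead can explain. The 1460-byte MSS and the header sizes below are standard values I am assuming, not figures from the article:

    # How much wire bandwidth does 15.6 Mbyte/s of application-layer
    # goodput consume once headers are added back?
    MSS = 1460                        # TCP payload per full-size segment
    OVERHEAD = 20 + 20 + 18 + 20      # TCP + IP + Ethernet hdr/FCS + preamble/IFG

    goodput_bytes_per_sec = 15.6e6    # the article's measured baseline
    segments_per_sec = goodput_bytes_per_sec / MSS
    wire_bps = segments_per_sec * (MSS + OVERHEAD) * 8

    print(f"{wire_bps / 1e6:.0f} Mbit/s on the wire")  # ~131

Header overhead alone lifts the 125 Mbit/s application-layer figure to only about 131 Mbit/s on the wire, so most of the remaining gap must come from the SMTP connection churn, the denied traffic, and the clients' own limits, just as the article suggests.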