since the last time we cleared the firewall statistics on c.root-servers.net, 1895GB of udp/53 input has led to 6687GB of udp/53 output, but, and this is the important part now so pay attention, 185GB of input was dropped due to an RFC1918 source address.

who needs DDOS when most network operators aren't filtering RFC1918 on output? (there's only been 4.2GB of udp/2002 and other wormy traffic, by comparison.)

current winners of the "sustained input traffic over 100KBits/sec" award are 164.58.150.146, 200.52.12.131, and 195.146.194.12. c-root keeps on ignoring you, but you just never give up. congratulations, or something.

(note that c-root's network operator has offered to filter RFC1918 on input from other AS's, but it's actually useful to keep on measuring it.)
To that end, why doesn't BIND ship with default zone files for RFC1918 space as well as 127.0.0.0?

Steve

On Mon, 7 Oct 2002, Paul Vixie wrote:
since the last time we cleared the firewall statistics on c.root-servers.net, 1895GB of udp/53 input has led to 6687GB of udp/53 output, but, and this is the important part now so pay attention, 185GB of input was dropped due to an RFC1918 source address.
who needs DDOS when most network operators aren't filtering RFC1918 on output? (there's only been 4.2GB of udp/2002 and other wormy traffic, by comparison.)
current winners of the "sustained input traffic over 100KBits/sec" award are 164.58.150.146, 200.52.12.131, and 195.146.194.12. c-root keeps on ignoring you, but you just never give up. congratulations, or something.
(note that c-root's network operator has offered to filter RFC1918 on input from other AS's, but it's actually useful to keep on measuring it.)
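The default zone files Steve asks about above would look roughly like the following named.conf stanzas: authoritative empty reverse zones so RFC1918 lookups are answered locally instead of leaking toward the roots. This is a sketch, not an actual BIND default; the file name "empty.db" is illustrative.

```
// Illustrative sketch: empty reverse zones for RFC1918 space, so
// queries get a local NXDOMAIN instead of escaping to the root servers.
zone "10.in-addr.arpa"      { type master; file "empty.db"; };
zone "168.192.in-addr.arpa" { type master; file "empty.db"; };
// 172.16/12 spans sixteen /16 reverse zones,
// 16.172.in-addr.arpa through 31.172.in-addr.arpa:
zone "16.172.in-addr.arpa"  { type master; file "empty.db"; };
// ...and so on up to 31.172.in-addr.arpa
```

Here empty.db would contain only an SOA and NS record, so every query under the zone gets a clean negative answer.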
And to that end, I wonder how many of the bad queries are coming from MS DNS servers.
-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Stephen J. Wilcox
Sent: Monday, October 07, 2002 7:05 PM
To: Paul Vixie
Cc: nanog@merit.edu
Subject: Re: what's that smell?
To that end, why doesn't BIND ship with default zone files for RFC1918 space as well as 127.0.0.0?
Steve
On Mon, 7 Oct 2002, Paul Vixie wrote:
since the last time we cleared the firewall statistics on c.root-servers.net, 1895GB of udp/53 input has led to 6687GB of udp/53 output, but, and this is the important part now so pay attention, 185GB of input was dropped due to an RFC1918 source address.

who needs DDOS when most network operators aren't filtering RFC1918 on output? (there's only been 4.2GB of udp/2002 and other wormy traffic, by comparison.)

current winners of the "sustained input traffic over 100KBits/sec" award are 164.58.150.146, 200.52.12.131, and 195.146.194.12. c-root keeps on ignoring you, but you just never give up. congratulations, or something.

(note that c-root's network operator has offered to filter RFC1918 on input from other AS's, but it's actually useful to keep on measuring it.)
Hope this doesn't come across as DNS-101, but is there some way to tell what DNS server one uses? Kinda like telnetting to port 80 or 25? I know if it is possible, it's just as possible for them to change the output, but chances are the brainiacs of the world who don't filter probably aren't smart enough to change what their DNS server 'appears' to be either.
-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Dan Hollis
Sent: Monday, October 07, 2002 7:11 PM
To: Jason Lixfeld
Cc: 'Stephen J. Wilcox'; 'Paul Vixie'; nanog@merit.edu
Subject: RE: what's that smell?
On Mon, 7 Oct 2002, Jason Lixfeld wrote:
And to that end, I wonder how many of the bad queries are coming from MS DNS servers.
to that end, i wonder how many of the bad queries are coming directly from microsoft campus.
-Dan -- [-] Omae no subete no kichi wa ore no mono da. [-]
Hello Jason,

Monday, October 7, 2002, 7:14:41 PM, you wrote:

JL> Hope this doesn't come across as DNS-101, but is there some way to tell
JL> what DNS server one uses? Kinda like telnetting to port 80 or 25? I
JL> know if it is possible, it's just as possible for them to change the
JL> output, but chances are the brainiacs of the world who don't filter
JL> probably aren't smart enough to change what their DNS server 'appears'
JL> to be either.

This will work:

    dig @nameserver.tld chaos txt version.bind

for BIND nameservers, but it is not a standard convention, so it is not supported by all nameservers, and most administrators disable the output from the command at this point:

datacenterwire.com /home/allan# dig @ns1.vbind.com chaos txt version.bind

; <<>> DiG 8.3 <<>> @ns1.vbind.com chaos txt version.bind
; (1 server found)
;; res options: init recurs defnam dnsrch
;; got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUERY SECTION:
;;      version.bind, type = TXT, class = CHAOS

;; ANSWER SECTION:
VERSION.BIND.   0S CHAOS TXT  "DNS, we aint got no stinkin DNS"

;; Total query time: 0 msec
;; FROM: datacenterwire.com to SERVER: ns1.vbind.com  66.150.201.103
;; WHEN: Mon Oct 7 17:37:39 2002
;; MSG SIZE  sent: 30  rcvd: 86

allan

--
Allan Liska
allan@allan.org
http://www.allan.org
Jason,

There're multiple answers depending on what you mean by "DNS server one uses." Whois on the domain will list the DNS servers of record. Some domains also spread load over DNS servers, so a dig, per a previous answer, will give the more specific servers currently announced in the zone files. If you're using a current Windows box, "ipconfig /all" at a command prompt will show the DNS servers your machine is actually using. There are similar *nix commands but I'm not at home right now...

Best regards,
_____________
Alan Rowland

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Jason Lixfeld
Sent: Monday, October 07, 2002 4:15 PM
To: 'Dan Hollis'
Cc: 'Stephen J. Wilcox'; 'Paul Vixie'; nanog@merit.edu
Subject: RE: what's that smell?

Hope this doesn't come across as DNS-101, but is there some way to tell what DNS server one uses? Kinda like telnetting to port 80 or 25? I know if it is possible, it's just as possible for them to change the output, but chances are the brainiacs of the world who don't filter probably aren't smart enough to change what their DNS server 'appears' to be either.
On Tue, Oct 08, 2002 at 12:05:29AM +0100, Stephen J. Wilcox wrote:
To that end, why doesn't BIND ship with default zone files for RFC1918 space as well as 127.0.0.0?
won't help

On Mon, 7 Oct 2002, Paul Vixie wrote:

> the important part now so pay attention, 185GB of input was
> dropped due to an RFC1918 source address.
                            ^^^^^^^^^^^^^^
source address, not query destination

FWIW, almost nobody filters rfc1918 packets outbound and a good percentage of ISP customers bleed these something terrible

--cw
On Tue, 8 Oct 2002, Chris Wedgwood wrote:
FWIW, almost nobody filters rfc1918 packets outbound and a good percentage of ISP customers bleed these something terrible
Actually, that's a good thing. This makes it trivial to detect which peers aren't doing egress filtering. If people just filtered RFC 1918 space, everything would just look better, but the underlying problem wouldn't be solved: it would still be possible to launch denial of service attacks from those networks that are very hard to trace or stop.
Nope. As previously established, there are ISPs out there using RFC1918 networks in their infrastructure. Also, egress filtering is NOT easy, so even those ISPs doing it may not be able to do it universally.

Plus, lots of attacks these days are mixing spoofed and legit traffic, or doing limited spoofing (i.e. picking random addresses on the LAN where they originate to make it past filters).

Kelly J.

On Tue, 8 Oct 2002, Iljitsch van Beijnum wrote:
On Tue, 8 Oct 2002, Chris Wedgwood wrote:
FWIW, almost nobody filters rfc1918 packets outbound and a good percentage of ISP customers bleed these something terrible
Actually, that's a good thing. This makes it trivial to detect which peers aren't doing egress filtering. If people just filtered RFC 1918 space, everything would just look better, but the underlying problem wouldn't be solved: it would still be possible to launch denial of service attacks from those networks that are very hard to trace or stop.
On Tuesday, Oct 8, 2002, at 10:21 Canada/Eastern, Kelly J. Cooper wrote:
Nope. As previously established, there are ISPs out there using RFC1918 networks in their infrastructure. Also, egress filtering is NOT easy,
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?

I kind of assumed that people weren't doing it because they were lazy.

Joe
At 10:34 AM 08/10/2002 -0400, Joe Abley wrote:
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?
I kind of assumed that people weren't doing it because they were lazy.
I am sure that's part of it. Also, it might be a CPU issue as well.

---Mike
I am sure that's part of it. Also, it might be a CPU issue as well.
Unicast RPF is affordable CPU-wise even in the most mediocre boxes people tend to have.
More often than not, especially nowadays with lots of networks peering all over God's creation, RPF can have some pretty detrimental effects if your routing is somewhat asymmetrical.
On Tue, Oct 08, 2002 at 11:52:27AM -0400, Jason Lixfeld wrote:
I am sure that's part of it. Also, it might be a CPU issue as well.

Unicast RPF is affordable CPU-wise even in the most mediocre boxes people tend to have.

More often than not, especially nowadays with lots of networks peering all over God's creation, RPF can have some pretty detrimental effects if your routing is somewhat asymmetrical.
A strict rpf can be detrimental in these cases, yes; that is a well known fact. The problem is when people do not apply the safe checks and leak this 1918 space out (as Paul originally pointed out, given how much improperly sourced traffic he is observing that he can't return to). It is not complicated to enable the "any" check, and you will not lose any valid traffic.

I've seen, at a public exchange point, a significant amount of traffic dropped because it came from invalid/unreachable sources (sh ip int x/y output):

  IP verify source reachable-via ANY
   707032454 verification drops

- Jared

--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
On Tue, 8 Oct 2002, Jason Lixfeld wrote:
More often than not, especially nowadays with lots of networks peering all over God's creation, RPF can have some pretty detrimental effects if your routing is somewhat asymmetrical.

Actually RPF is extremely effective especially where it's highly asymmetrical, e.g. at the edge. There's virtually no reason not to RPF dialup/isdn/cable/dsl/etc customers, for example.

-Dan
--
[-] Omae no subete no kichi wa ore no mono da. [-]
On Tue, 8 Oct 2002, Jason Lixfeld wrote:
More often than not, especially nowadays with lots of networks peering all over God's creation, RPF can have some pretty detrimental effects if your routing is somewhat asymmetrical.

Actually RPF is extremely effective especially where it's highly asymmetrical, e.g. at the edge. There's virtually no reason not to RPF dialup/isdn/cable/dsl/etc customers, for example.
Sure, but applying RPF on so many customer-facing edge ports, compared to the far fewer egress ports, makes the implementation procedure quite extensive. The more configuration, the more room for errors or "oops, forgot to configure that there", not to mention change management.
Those are reasons against. We in the technical community need to develop or modify our tools to make those tasks easier.

Hire a lazy but smart admin! :)

On Tue, Oct 08, 2002 at 02:45:22PM -0400, Jason Lixfeld wrote:
On Tue, 8 Oct 2002, Jason Lixfeld wrote:
More often than not, especially nowadays with lots of networks peering all over God's creation, RPF can have some pretty detrimental effects if your routing is somewhat asymmetrical.

Actually RPF is extremely effective especially where it's highly asymmetrical, e.g. at the edge. There's virtually no reason not to RPF dialup/isdn/cable/dsl/etc customers, for example.

Sure, but applying RPF on so many customer-facing edge ports, compared to the far fewer egress ports, makes the implementation procedure quite extensive. The more configuration, the more room for errors or "oops, forgot to configure that there", not to mention change management.
Those are reasons against.
We in the technical community need to develop or modify our tools to make those tasks easier.
My point, exactly.
Hire a lazy but smart admin! :)
At 11:51 AM 10/8/02 -0700, John M. Brown wrote:
We in the technical community need to develop or modify our tools to make those tasks easier.
So right. I don't know what the fuss is all about. Not that our little ISP matters in the grand scheme of things... but we've always blocked RFC1918 sources the old fashioned way, even though it appears to be less than .05% (by packet) of our border traffic:

(outgoing)
Extended IP access list 101
    deny ip 127.0.0.0 0.255.255.255 any
    deny ip 10.0.0.0 0.255.255.255 any (110170 matches)
    deny ip 172.16.0.0 0.15.255.255 any
    deny ip 192.168.0.0 0.0.255.255 any (130473 matches)
    permit ip any any (530422134 matches)

We get just as much (.05%) RFC1918 coming _into_ our podunk network (that we also block). If that much is coming down my insignificant alley, I have no problem believing your 12-18% numbers at tier 1. Those packets are by definition junk or malicious junk packets. They have no business being on any pipe that is not a leaf enterprise.

(incoming - abbreviated)
Extended IP access list 100
    deny ip 127.0.0.0 0.255.255.255 any (111 matches)
    deny ip 10.0.0.0 0.255.255.255 any (105016 matches)
    deny ip 172.16.0.0 0.15.255.255 any (27671 matches)
    deny ip 192.168.0.0 0.0.255.255 any (66627 matches)
    permit ip any any (475732704 matches)

The big guys apparently have so much bandwidth to spare that these and other unverifiable, unrepliable packets don't matter to them. If DoS and other activity hurt them as much as it hurt folks like us, there would be fewer excuses and more solutions and implementations.

ISPs bill customers for traffic on the edge. If you filter one hop from the edge (interior of the edge router - fewer interfaces that way too) or at your border, then you can have your cake (money from the customer) and eat it too (filter RFC1918). Of course you would then be charging customers for packets you don't pass. They'll never know, and I never met a bean counter that cared about such details anyway... if bean counters are making routing policies.

...Barb
--On Tuesday, October 8, 2002 2:56 PM -0600 Barb Dijker <barb@netrack.net> wrote: ...
ISPs bill customers for traffic on the edge. If you filter one hop from the edge (interior of the edge router - fewer interfaces that way too) or at your border, then you can have your cake (money from the customer) and eat it too (filter RFC1918). Of course you would then be charging customers for packets you don't pass. They'll never know, and I never met a bean counter that cared about such details anyway... if bean counters are making routing policies.
I count both packets and beans, and as a customer (though not Barb's), I think such a charge is entirely within reason -- because it is under my control. Have some more cake.
On Tue, 8 Oct 2002, Jason Lixfeld wrote:
Sure, but applying RPF on so many customer-facing edge ports, compared to the far fewer egress ports, makes the implementation procedure quite extensive. The more configuration, the more room for errors or "oops, forgot to configure that there", not to mention change management.
For most RASen and routers it's a single configuration statement, and far less complex than almost all other RASen and router configuration. In many cases RPF is enabled by default (and imho should be enabled by default by many more vendors).

Your job as a good netizen is not to allow shit to be injected at your edges into the backbones. As untraceable and RFC1918 packets increase, expect the community to become less and less tolerant of bad netizens.

-Dan
--
[-] Omae no subete no kichi wa ore no mono da. [-]
On Tue, 8 Oct 2002, Joe Abley wrote:
Also, egress filtering is NOT easy,
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?
But what's the point? That's like complaining that the door isn't locked while the house has no walls.
On Tuesday, Oct 8, 2002, at 10:45 Canada/Eastern, Iljitsch van Beijnum wrote:
On Tue, 8 Oct 2002, Joe Abley wrote:
Also, egress filtering is NOT easy,
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?
But what's the point?
Politeness, I guess. Seems rude to send traffic to peers when you absolutely know that the source address is inaccurate.
That's like complaining that the door isn't locked while the house has no walls.
Right. The no walls problem is far more usefully tackled by filtering inbound at the edge, not outbound.

Joe
On Tue, 8 Oct 2002, Joe Abley wrote:
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?
But what's the point?
Politeness, I guess. Seems rude to send traffic to peers when you absolutely know that the source address is inaccurate.
Politeness is good, truthfulness is usually better. If a peer isn't properly filtering, I'd rather find out sooner (some RFC 1918 packets) than later (DoS attack).
That's like complaining that the door isn't locked while the house has no walls.
Right. The no walls problem is far more usefully tackled by filtering inbound at the edge, not outbound.
No complaints from me if that is what people do.
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?

But what's the point?
RFC 1918, section 3:

   Because private addresses have no global meaning, routing information about private networks shall not be propagated on inter-enterprise links, and packets with private source or destination addresses should not be forwarded across such links.
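The three blocks that passage refers to are mechanical to test for. As a quick illustration of what a border filter's membership check amounts to (a Python sketch of my own, not from the thread):

```python
# The three RFC 1918 private blocks; a border filter drops any packet
# whose source address falls inside one of them.
import ipaddress

RFC1918_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_rfc1918(addr: str) -> bool:
    """True if addr falls in RFC 1918 private space."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918_NETS)

print(is_rfc1918("192.168.1.5"))     # True  -> drop at the border
print(is_rfc1918("164.58.150.146"))  # False -> forward
```

(Python's ipaddress module also has an is_private attribute, but it covers more ranges than just the three RFC 1918 blocks.)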
On Tue, 8 Oct 2002, Joe Abley wrote:
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?
I kind of assumed that people weren't doing it because they were lazy.
I've checked the marketing stuff of several backbones; as far as I could tell, only one makes a blanket statement about source address validation on their entire network.

http://www.ipservices.att.com/backbone/techspecs.cfm

  AT&T has also implemented security features directly into the backbone. IP Source Address Assurance is implemented at every customer point-of-entry to guard against hackers. AT&T examines the source address of every inbound packet coming from customer connections to ensure it matches the IP address we expect to see on that packet. This means that the AT&T IP Backbone is RFC2267-compliant.

What backbones do 100% source address validation? And how much of it is real, and how much is marketing? On single-homed or few-homed stub networks it's "easy." But on even a moderately complex transit network it becomes "difficult." Yes, I know about uRPF-like stuff, but the router vendors are still tweaking it.

If there is a magic solution, I would love to hear about it. Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the "exceptions" needed to do 100% source address validation.

Heck, the phone network still has trouble getting the correct Caller-ID end-to-end.
IMHO, it's not too bad if you do it at your edges. Explicit permits for valid source addresses are a well-known defense against source spoofing, which of course also addresses the RFC1918 leakage issue to some degree. It's not that hard to incorporate this into customer installation and support processes. It is a lot more difficult to manage at the borders.
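As a concrete sketch of that explicit-permit approach on a customer edge port, in the IOS style used elsewhere in this thread (the addresses, ACL number, and interface are invented for illustration):

```
! Permit only the customer's assigned block (203.0.113.0/24 is a
! stand-in); everything else, spoofed or leaked RFC1918, is dropped.
access-list 150 permit ip 203.0.113.0 0.0.0.255 any
access-list 150 deny   ip any any log
!
interface Serial0/1
 description customer ingress filter
 ip access-group 150 in
```

The maintenance burden the thread mentions comes from keeping such ACLs in step with every customer's current assignments.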
-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Sean Donelan
Sent: Tuesday, October 08, 2002 10:09 AM
To: Joe Abley
Cc: Kelly J. Cooper; nanog@merit.edu
Subject: Who does source address validation? (was Re: what's that smell?)
On Tue, 8 Oct 2002, Joe Abley wrote:
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?
I kind of assumed that people weren't doing it because they were lazy.
I've checked the marketing stuff of several backbones, as far as I could tell only one makes the blanket statement about source address validation on their entire network.
http://www.ipservices.att.com/backbone/techspecs.cfm
AT&T has also implemented security features directly into the backbone. IP Source Address Assurance is implemented at every customer point-of-entry to guard against hackers. AT&T examines the source address of every inbound packet coming from customer connections to ensure it matches the IP address we expect to see on that packet. This means that the AT&T IP Backbone is RFC2267-compliant.
What backbones do 100% source address validation? And how much of it is real, and how much is marketing? On single-homed or few-homed stub networks it's "easy." But on even a moderately complex transit network it becomes "difficult." Yes, I know about uRPF-like stuff, but the router vendors are still tweaking it.
If there is a magic solution, I would love to hear about it. Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the "exceptions" needed to do 100% source address validation.
Heck, the phone network still has trouble getting the correct Caller-ID end-to-end.
On Tue, Oct 08, 2002 at 11:09:10AM -0400, Sean Donelan wrote:
If there is a magic solution, I would love to hear about it.
To drop the rfc1918 space, there is a close-to-magic solution. Install this on all your internal, upstream, and downstream interfaces (cisco router) [CEF required]:

    ip verify unicast source reachable-via any

This will drop all packets arriving on the interface whose source address has no return route in your routing table.
Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the "exceptions" needed to do 100% source address validation.
Juniper has a somewhat viable solution to the 100% source validation for bgp customers: they will consider non-best paths in their unicast-rpf check on the customer interface. This means that even if the best return path for 35.0.0.0/8 is via your peer instead of via the customer the packet came in from, as long as they are advertising the prefix to you, you will not drop the packet.
Heck, the phone network still has trouble getting the correct Caller-ID end-to-end.
Uh, this is because it costs another 1/2 a cent a minute (or more) to provision a caller-id capable trunk (long distance), and people just don't want to pay the extra money; it's cheaper to not identify oneself. (This is why most telemarketers don't generate caller-id, or if they can, they suppress it.)

- jared

--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++; | http://puck.nether.net/~jared/ My statements are only mine.
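Pulling together the strict, loose ("any"), and feasible-path uRPF variants discussed in the last few messages, here is a toy sketch; the routing table, interface names, and mode names are invented for illustration, not vendor behaviour:

```python
# Toy model of unicast RPF modes against a tiny routing table.
# Each prefix maps to (best-path interface, all feasible interfaces).
import ipaddress

ROUTES = {
    # 35.0.0.0/8: advertised by both a customer and a peer, with the
    # best return path via the peer (the asymmetric case from the thread).
    ipaddress.ip_network("35.0.0.0/8"): ("peer0", {"peer0", "cust0"}),
    ipaddress.ip_network("203.0.113.0/24"): ("cust0", {"cust0"}),
}

def lookup(src: str):
    """Longest-prefix match; returns (best_iface, feasible_ifaces) or None."""
    ip = ipaddress.ip_address(src)
    matches = [(net, info) for net, info in ROUTES.items() if ip in net]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

def rpf_pass(src: str, in_iface: str, mode: str = "loose") -> bool:
    route = lookup(src)
    if route is None:
        return False          # no return route at all (e.g. RFC1918): drop
    best, feasible = route
    if mode == "strict":
        return in_iface == best       # must arrive on the best-path interface
    if mode == "feasible":
        return in_iface in feasible   # any advertised path will do
    return True                       # loose/"any": some route back exists

# Strict RPF drops legitimate but asymmetric traffic; the feasible-path
# and loose checks accept it, while all three drop unrouted sources.
print(rpf_pass("35.1.2.3", "cust0", "strict"))    # False
print(rpf_pass("35.1.2.3", "cust0", "feasible"))  # True
print(rpf_pass("10.0.0.1", "cust0", "loose"))     # False
```

The caveat from earlier in the thread applies: loose mode only drops RFC1918 and other unrouted sources if those prefixes genuinely have no route (not even a default) covering them.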
On Tue, 8 Oct 2002, Jared Mauch wrote:
install this on all your internal, upstream, downstream interfaces (cisco router) [cef required]:
"ip verify unicast source reachable-via any"
This will drop all packets on the interface that do not have a way to return them in your routing table.
Once again, which providers do this?

If c.root-servers.net provider did this, they wouldn't see any RFC1918 traffic because it would be dropped at their provider's border routers. If c.root-servers.net provider's peer did this, again c.root-servers.net provider wouldn't see the rfc1918 packets.

So why doesn't c.root-servers.net provider or its peers implement this "simple" solution? It's not a rhetorical question. If it was so simple, I assume they would have done it already. PSI wrote one of the original peering agreements that almost everyone else copied. If it was a concern, I imagine PSI could have included the requirement, and most of their peers would have signed it 10 years ago. But they didn't.

Does AT&T?              Yes
Does UUNET?             ?
Does Cable & Wireless?  ?
Does Level 3?           ?
Does Qwest?             ?
Does Genuity?           ?
Does Sprint?            ?
So why doesn't c.root-servers.net provider or its peers implement this "simple" solution? It's not a rhetorical question. If it was so simple, I assume they would have done it already. PSI wrote one of the original peering agreements that almost everyone else copied. If it was a concern, I imagine PSI could have included the requirement, and most of their peers would have signed it 10 years ago. But they didn't.
My guess would be inertia. It tends to take quite some time to get people off their butts to do something. It is also a feature which protects others more than it protects you, and there are serious psychological hurdles many providers (cf peering) have to doing anything which might benefit someone else more than it benefits them, even if it will benefit everyone over the long term. Was uRPF even available 10 years ago?
On Tue, 8 Oct 2002, Sean Donelan wrote:
On Tue, 8 Oct 2002, Jared Mauch wrote:
install this on all your internal, upstream, downstream interfaces (cisco router) [cef required]:
"ip verify unicast source reachable-via any"
This will drop all packets on the interface that do not have a way to return them in your routing table.
Once again, which providers do this?
If c.root-servers.net provider did this, they wouldn't see any RFC1918 traffic because it would be dropped at their provider's border routers. If c.root-servers.net provider's peer did this, again c.root-servers.net provider wouldn't see the rfc1918 packets.
So why doesn't c.root-servers.net provider or its peers implement this "simple" solution? It's not a rhetorical question. If it was so simple, I assume they would have done it already. PSI wrote one of the original peering agreements that almost everyone else copied. If it was a concern, I imagine PSI could have included the requirement, and most of their peers would have signed it 10 years ago. But they didn't.
If you do it on ingress from customers then this is probably a good thing and makes your network compliant with RFC1918.

But you need to accept the internet isn't RFC1918 compliant, in the same way that we implement hacks in all kinds of applications to enable compatibility with other non-RFC-compliant implementations. Try running a strictly RFC822-compliant mail server as an example and see how many Microsoft users complain they can't send email!

Not all IP packets require a return; indeed only TCP requires it. It is quite possible to send data over the internet on UDP or ICMP with RFC1918 source addresses and for there to be no issue. Examples of this might be ICMP fragments or UDP syslog which, although they shouldn't according to RFC1918 be on these source addresses, might be, and if you block these on major backbone routes you may break something.

So I guess you may argue: block RFC1918 TCP inbound, but with ICMP and UDP you start to break things. Perhaps that is why large providers don't do this on backbone links.

Steve
Does AT&T?              Yes
Does UUNET?             ?
Does Cable & Wireless?  ?
Does Level 3?           ?
Does Qwest?             ?
Does Genuity?           ?
Does Sprint?            ?
At 10:34 PM 10/8/02 +0100, Stephen J. Wilcox wrote:
Not all IP packets require a return; indeed only TCP requires it. It is quite possible to send data over the internet on UDP or ICMP with RFC1918 source addresses and for there to be no issue. Examples of this might be ICMP fragments or UDP syslog which, although they shouldn't according to RFC1918 be on these source addresses, might be, and if you block these on major backbone routes you may break something.
No. Filtering RFC1918 doesn't break anything. It merely shows you what was already broken and you didn't know it. If you have a box that is putting an RFC1918 source address in its packets destined for external nets, and it doesn't get NAT'd, your net config is broken.

...Barb
I believe the RFC states SHALL NOT propagate these out to the global net. SHOULD NOT != SHALL NOT

On Tue, Oct 08, 2002 at 10:34:51PM +0100, Stephen J. Wilcox wrote:
On Tue, 8 Oct 2002, Sean Donelan wrote:
On Tue, 8 Oct 2002, Jared Mauch wrote:
install this on all your internal, upstream, downstream interfaces (cisco router) [cef required]:
"ip verify unicast source reachable-via any"
This will drop all packets on the interface that do not have a way to return them in your routing table.
Once again, which providers do this?
If c.root-servers.net provider did this, they wouldn't see any RFC1918 traffic because it would be dropped at their provider's border routers. If c.root-servers.net provider's peer did this, again c.root-servers.net provider wouldn't see the rfc1918 packets.
So why doesn't c.root-servers.net provider or its peers implement this "simple" solution? It's not a rhetorical question. If it was so simple, I assume they would have done it already. PSI wrote one of the original peering agreements that almost everyone else copied. If it was a concern, I imagine PSI could have included the requirement, and most of their peers would have signed it 10 years ago. But they didn't.
If you do it on ingress from customers then this is probably a good thing and makes your network compliant with RFC1918.
But you need to accept the internet isn't RFC1918 compliant, in the same way that we implement hacks in all kinds of applications to enable compatibility with other non-RFC-compliant implementations. Try running a strictly RFC822-compliant mail server as an example and see how many Microsoft users complain they can't send email!

Not all IP packets require a return; indeed only TCP requires it. It is quite possible to send data over the internet on UDP or ICMP with RFC1918 source addresses and for there to be no issue. Examples of this might be ICMP fragments or UDP syslog which, although they shouldn't according to RFC1918 be on these source addresses, might be, and if you block these on major backbone routes you may break something.

So I guess you may argue: block RFC1918 TCP inbound, but with ICMP and UDP you start to break things. Perhaps that is why large providers don't do this on backbone links.
Steve
Does AT&T?              Yes
Does UUNET?             ?
Does Cable & Wireless?  ?
Does Level 3?           ?
Does Qwest?             ?
Does Genuity?           ?
Does Sprint?            ?
sean@donelan.com (Sean Donelan) writes:
If c.root-servers.net provider did this, they wouldn't see any RFC1918 traffic because it would be dropped at their provider's border routers.
Right. But then I wouldn't be able to measure it, which would be bad.
If c.root-servers.net provider's peer did this, again c.root-servers.net provider wouldn't see the rfc1918 packets.
This is the single case where not being able to measure/complain would be OK, because the problem wouldn't be "in the core", it would be (correctly) stopped at the source-AS.
So why doesn't c.root-servers.net's provider or its peers implement this "simple" solution? It's not a rhetorical question. If it were so simple, I assume they would have done it already.
C-root's provider is also C-root's owner, and they have offered to shut this traffic off further upstream, as F-root's network operators were doing until yesterday, but I asked that it not be filtered anywhere except C-root itself (where I can measure it) or distant source-AS's (which is where it makes sense).

-- Paul Vixie
On Tue, 08 Oct 2002 11:09:10 EDT, Sean Donelan said:
http://www.ipservices.att.com/backbone/techspecs.cfm
AT&T has also implemented security features directly into the backbone. IP Source Address Assurance is implemented at every customer point-of-entry to guard against hackers. AT&T examines the source address of every inbound packet coming from customer connections to ensure it matches the IP address we expect to see on that packet. This means that the AT&T IP Backbone is RFC2267-compliant.
Thank you, AT&T.
It stands to reason that if people started filtering RFC-1918 on their edge, we would see a noticeable amount of traffic go away. Simulation models I've been running show that an average of 12 to 18 percent of a provider's traffic would disappear if they filtered RFC-1918-sourced packets. The percentage scales with the size of the provider: smaller providers, less impact; larger providers, more impact.

In addition to the bandwidth savings, there is also a support cost reduction, and together, I believe, backbone providers can see this on the bottom line of their balance sheets.

We have to start someplace. There is no magic answer for all cases. RFC-1918 filtering is easy to admin and easy to deploy, in relative terms, compared to uRPF or similar methods. For large and small alike it can be a positive marketing tool, if properly implemented.

john brown

On Tue, Oct 08, 2002 at 11:09:10AM -0400, Sean Donelan wrote:
On Tue, 8 Oct 2002, Joe Abley wrote:
What is difficult about dropping packets sourced from RFC1918 addresses before they leave your network?
I kind of assumed that people weren't doing it because they were lazy.
I've checked the marketing stuff of several backbones, as far as I could tell only one makes the blanket statement about source address validation on their entire network.
http://www.ipservices.att.com/backbone/techspecs.cfm
AT&T has also implemented security features directly into the backbone. IP Source Address Assurance is implemented at every customer point-of-entry to guard against hackers. AT&T examines the source address of every inbound packet coming from customer connections to ensure it matches the IP address we expect to see on that packet. This means that the AT&T IP Backbone is RFC2267-compliant.
What backbones do 100% source address validation? And how much of it is real, and how much is marketing? On single-homed or few-homed stub networks it's "easy." But on even a moderately complex transit network it becomes "difficult." Yes, I know about uRPF-like stuff, but the router vendors are still tweaking it.
If there is a magic solution, I would love to hear about it. Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the "exceptions" needed to do 100% source address validation.
Heck, the phone network still has trouble getting the correct Caller-ID end-to-end.
On Tue, 8 Oct 2002, John M. Brown wrote:
It stands to reason that if people started filtering RFC-1918 on their edge, we would see a noticeable amount of traffic go away.
Simulation models I've been running show that an average of 12 to 18 percent of a provider's traffic would disappear if they filtered RFC-1918 sourced packets.
That is very hard to believe, unless you are referring to the load on the root nameservers. Since they obviously don't receive a reply, these resolvers will keep coming back.
In addition to the bandwidth savings, there is also a support cost reduction and together, I believe backbone providers can see this on the bottom line of their balance sheets.
We have to start someplace. There is no magic answer for all cases.
RFC-1918 is easy to admin, and easy to deploy, in relative terms compared to uRPF or similar methods.
uRPF is easier: one configuration command per interface. A filter for RFC 1918 space is also one configuration command per interface, and some command to create the filter.
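A sketch of the comparison, assuming Cisco IOS (the interface name and ACL number are illustrative, not from the thread):

```
! Option 1: uRPF - one command per interface (CEF required)
interface FastEthernet0/0
 ip verify unicast source reachable-via rx
!
! Option 2: RFC 1918 source filter - one ACL definition,
! plus one command per interface to apply it
access-list 110 deny   ip 10.0.0.0 0.255.255.255 any
access-list 110 deny   ip 172.16.0.0 0.15.255.255 any
access-list 110 deny   ip 192.168.0.0 0.0.255.255 any
access-list 110 permit ip any any
!
interface FastEthernet0/0
 ip access-group 110 in
```

Note that the ACL catches only the three RFC 1918 blocks, while uRPF catches anything without a matching return route, which is the "0.39% of what we should" point made below.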
For large and small alike it can be a positive marketing tool, if properly implemented.
Sure. "We can't be bothered to do proper filtering, but since we filter 0.39% of what we should, we are cool."
Why is it hard to believe that a large amount of RFC-1918 sourced traffic is floating around the net? Root name servers are just one "victim" of this trash. DOS, DDOS and other just plain stupid configurations contribute to the pile. My data is from various core servers and various clients of ours. We look at the ingress traffic and see these kinds of numbers.

In the days of the Internet boom (growth period), people wanted to see traffic and capacity used up. It helped fuel the need for more fiber growth, and thus spending. Now that we are in more "realistic" times, providers need to save money and reduce costs. Costs can be reduced in several areas:

1. Egress filtering: don't let RFC-1918 packets out of your network.
2. Spoof filtering.
3. Better tools to mitigate DOS/DDOS attacks. The technology exists for, say, cable providers to reduce port scans and DOS-type attacks.

If 1 and 2 are done, this will reduce complaint calls from non-customers, which reduces man-hour cycles.

john brown

On Tue, Oct 08, 2002 at 09:17:46PM +0200, Iljitsch van Beijnum wrote:
On Tue, 8 Oct 2002, John M. Brown wrote:
It stands to reason that if people started filtering RFC-1918 on their edge, we would see a noticeable amount of traffic go away.
Simulation models I've been running show that an average of 12 to 18 percent of a provider's traffic would disappear if they filtered RFC-1918 sourced packets.
That is very hard to believe, unless you are referring to the load on the root nameservers. Since they obviously don't receive a reply, these resolvers will keep coming back.
In addition to the bandwidth savings, there is also a support cost reduction and together, I believe backbone providers can see this on the bottom line of their balance sheets.
We have to start someplace. There is no magic answer for all cases.
RFC-1918 is easy to admin, and easy to deploy, in relative terms compared to uRPF or similar methods.
uRPF is easier: one configuration command per interface. A filter for RFC 1918 space is also one configuration command per interface, and some command to create the filter.
For large and small alike it can be a positive marketing tool, if properly implemented.
Sure. "We can't be bothered to do proper filtering, but since we filter 0.39% of what we should, we are cool."
On Tue, 8 Oct 2002, John M. Brown wrote:
Why is it hard to believe that a large amount of RFC-1918 sourced traffic is floating around the net?
Because if 20% of all people generate this crap (which is a huge number), it must be 90% of their traffic to get to 18% (0.20 × 0.90 = 0.18). How can someone generate so much useless traffic, and keep doing it, too?
Root name servers are just one "victim" of this trash. DOS, DDOS and other just stupid configurations contribute to the pile.
So only allow proper source addresses, that's the first step towards getting rid of DoS.
Costs can be reduced in several areas:
1. Egress filtering: don't let RFC-1918 packets out of your network.
I'm not convinced this is (in general) a substantial amount of traffic.
2. Spoof filtering. 3. Better tools to mitigate DOS/DDOS attacks. The technology exists for say, cable providers to reduce port scans and DOS type attacks.
I would happily kick anyone doing anything that is conclusively abusive off the net. But access providers aren't going to do this because it costs them money. Being a good netizen doesn't do them any good.

I'm reminded of the two guys walking over the Serengeti, and they spot a lion. One guy bends down to tie his shoelaces, and the other says: what are you doing, you can't outrun a lion! The first guy says: I don't have to, as long as I can outrun you. People aren't in any hurry to protect the common good; they just want to keep one step ahead of those who get in trouble for not doing enough.
If 1 and 2 are done, this will reduce complaint calls from non-customers, which reduces man hour cycles.
Don't count on it. Some people start calling when they're pinged.
2. Spoof filtering. 3. Better tools to mitigate DOS/DDOS attacks. The technology exists for say, cable providers to reduce port scans and DOS type attacks.
I would happily kick anyone doing anything that is conclusively abusive off the net. But access providers aren't going to do this because it costs them money. Being a good netizen doesn't do them any good.

I'm reminded of the two guys walking over the Serengeti, and they spot a lion. One guy bends down to tie his shoelaces, and the other says: what are you doing, you can't outrun a lion! The first guy says: I don't have to, as long as I can outrun you. People aren't in any hurry to protect the common good; they just want to keep one step ahead of those who get in trouble for not doing enough.
I guess you are describing the result of the bean counters' vision of an Ideal World colliding with the engineer's concept of poor technical practice. I can't buy the above reasoning, though, for two reasons.

First, I just don't think there are bean counters clueful enough to sit around calculating return-on-investment (or lack thereof) on source-address filtering. And insofar as that is true, it is a mighty good thing, as it prolongs the time when engineering practice is still within the purview of engineers.

Second, I think there are still enough people around who remember how Agis was hounded out of business for being spam-friendly. Nobody wants the same thing to happen to them, and to avoid it, will avoid even the perception of irresponsible operation.
On Tue, 08 Oct 2002 22:06:12 +0200, Iljitsch van Beijnum said:
Because if 20% of all people generate this crap (which is a huge number) it must be 90% of their traffic to get to 18%. How can someone generate so much useless traffic and keep doing it, too?
How much do you want to bet that *all* the internal backbone traffic from these sites is pouring out onto the Internet, that they've had to upgrade from a T1 to a DS3 and are looking at an OC3, and that the service provider is keeping their mouth shut because they can just catch an OC3's worth of packets and drop most of them on the floor (because they don't have a route to the 1918 destination address - only the random stuff with actual valid destinations, like a root nameserver, gets forwarded). Oh, and since 90% of their traffic is dropped on the floor, they can provision an OC3 to the customer and still only need to provision a DS3 upstream.

If 20% of your customers do this, you can just label it "cash cow".. ;) If you thought there was disincentive for people selling transit to filter, this is even worse... ;)

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
On Tue, 8 Oct 2002 Valdis.Kletnieks@vt.edu wrote:
Because if 20% of all people generate this crap (which is a huge number) it must be 90% of their traffic to get to 18%. How can someone generate so much useless traffic and keep doing it, too?
How much you want to bet that *all* the internal backbone traffic from these sites is pouring out into the Internet, and they've had to upgrade from a T1 to a DS3 and are looking at a OC3, and the service provider is keeping their mouth shut because they can just catch an OC3's worth of packets and drop most of them on the floor
OK, but how do you generate megabits worth of traffic for which there is no return traffic? At some level, someone or something must be trying to do something _really hard_ but keeps failing every time. It just doesn't make sense.
On Tue, 8 Oct 2002, Iljitsch van Beijnum wrote:
Ok, but how do you generate megabits worth of traffic for which there is no return traffic?
spammers... smurfers... attackers...
At some level, someone or something must be trying to do something _really hard_ but keep failing every time.
spammers... smurfers... attackers...
It just doesn't make sense.
Yes, it doesn't make sense not to filter.

-Dan
-- [-] Omae no subete no kichi wa ore no mono da. [-]
On Tue, 08 Oct 2002 22:57:42 +0200, Iljitsch van Beijnum said:
Ok, but how do you generate megabits worth of traffic for which there is no return traffic? At some level, someone or something must be trying to do something _really hard_ but keep failing every time. It just doesn't make sense.
Imagine if you will the following config:

     (pipe to ISP)  +------+  DMZ 10.1.1/24  +-----+  internal 192.68.1/22
    ================|router|-----------------| NAT |-------
                    +------+                 +-----+

Now give the router a default route to the ISP - and then screw the NAT config up so 192.68.1 packets show up on the DMZ. Or have something catch a broken RIP announcement.. or any number of stupid things. Whoosh, instant money for the ISP.. ;)

Last April (2001), while worrying about the NTP buffer overflow, we ran a trace to see where NTP packets were going. In a 10-minute span, we caught no less than 6 packets looking for an address that had been a stratum-2 server - 11 years previously. They've probably generated megabits of data for so long that they don't even realize there's a problem. The perpetrators have retired or moved on, and the incumbent admins don't see anything anomalous since it's always been that way.

Remember - the sort of admin that's not clued enough to get his NAT to behave is probably the sort that wouldn't know how to run a network monitor on his outbound pipe either. Lots of unclued admins out there...

--
Valdis Kletnieks
Computer Systems Senior Engineer
Virginia Tech
On Tue, 8 Oct 2002, Iljitsch van Beijnum wrote:
Ok, but how do you generate megabits worth of traffic for which there is no return traffic? At some level, someone or something must be trying to do something _really hard_ but keep failing every time. It just doesn't make sense.
I could show you VOLUMES of name server logs for people doing things that could never possibly succeed, over and over and over again. My favorite is the people who try to use my authoritative name servers as resolvers. No one at my company can recall a time that our auth. name servers EVER allowed recursion. My point is simply that we shouldn't underestimate the stupidity of the masses, and anything that can be done to improve things should be. Of course, the problem in this thread is the varying definitions of "improve."

Doug
Why is it hard to believe that a large amount of RFC-1918 sourced traffic is floating around the net?
Because if 20% of all people generate this crap (which is a huge number) it must be 90% of their traffic to get to 18%. How can someone generate so much useless traffic and keep doing it, too?
funny question from someone who reads this mailing list :-)
In addition to the bandwidth savings, there is also a support cost reduction and together, I believe backbone providers can see this on the bottom line of their balance sheets.
If the backbone providers bill their customers for traffic, then filtering out those packets would let them bill less. Since their costs are fixed and the amount of billable traffic decreases, the break-even price per meg goes up, not down. They won't filter until it would be more expensive not to filter.

Alex
On Tue, 8 Oct 2002 alex@yuriev.com wrote:
They wont filter up until it would be more expensive not to filter.
Gross/willful negligence lawsuits? I'm sure one of these days a large corporation like ebay/m$/etc will be annoyed enough at backbone providers spoof-DOSing them to file a lawsuit. Then it will suddenly become more expensive not to filter. I'm rather surprised such lawsuits haven't already happened.

-Dan
-- [-] Omae no subete no kichi wa ore no mono da. [-]
On Tue, 8 Oct 2002, John M. Brown wrote:
Simulation models I've been running show that an average of 12 to 18 percent of a provider's traffic would disappear if they filtered RFC-1918 sourced packets. The percentage ranges scale with the size of the provider. Smaller providers, less impact; larger providers, more impact.
In addition to the bandwidth savings, there is also a support cost reduction and together, I believe backbone providers can see this on the bottom line of their balance sheets.
Testing a couple of years ago on a widely used router vendor's implementation of uRPF showed, in certain pathological cases, a 50% throughput hit when uRPF was turned on. Even a single-line access list (permit ip any any) had a throughput hit on certain platforms.

http://www.nc-itec.org/archive/URPF/Unicast%20RPF%20Test%20Results%20Summary...

Whether or not this is still true, the legend lives on. A 20% throughput hit won't be offset by a 12 to 18 percent bandwidth savings, especially on heavily loaded circuits. Some network engineers are reluctant to do any type of packet filtering (uRPF or ACL based) because of the belief it will hurt performance (latency, throughput, etc).

While I think it's a good idea, and I generally do it on any network I design from scratch, so far you really haven't given me much ammo to convince people to change what is already working for them. Going back to the IBM/Amdahl mainframe days, the traditional requirement to get people to change was that it needed to be 30% cheaper or 30% better. Anything less, and it usually wasn't worth the effort of making the change, especially if the current system didn't have a visible problem.
Sean Donelan <sean@donelan.com> writes:
Whether this is still true, the legend lives on. A 20% throughput hit won't be offset by a 12 to 18 percent bandwidth savings. Especially on heavily loaded circuits. Some network engineers are reluctant to do any type of packet filtering (uRPF or ACL based) because of the belief it will hurt performance (latency, throughput, etc).
Some network operators got burned by broken ACL implementations, too. -- Florian Weimer Weimer@CERT.Uni-Stuttgart.DE University of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/ RUS-CERT fax +49-711-685-5898
sean@donelan.com (Sean Donelan) writes:
If there is a magic solution, I would love to hear about it.
Unfortunately, the only solutions I've seen involve considerable work and resources to implement and maintain all the "exceptions" needed to do 100% source address validation.
I had no idea this was so hard. I guess the people who maintain AS3557 (or AS6461, for that matter) do such a good job of making this _look_ easy that I just naturally thought it _was_ easy. Forgive my simple-minded approach, if it really is simple-minded, but... any given interface or peering session or whatever is either customer-facing, peer/transit-facing, or a trunk which leads ultimately to more customer- and more peer/transit-facing interfaces elsewhere in the network.

On customer-facing connections, there's a short list of things they should be allowed to use as IP source addresses. (They might be multihomed, but chances are low that you want them giving transit to other parts of the network through you, whether you do usage-sensitive billing or not.) On transit/peer-facing connections, there's a short list of things they should NOT be allowed to send from (your own customers, chiefly) and a short list of things you should NOT be allowed to send them from (RFC1918 being the big example).

Because F-root's network operator was filtering out inbound RFC1918-sourced packets, I could only see them at C-root. Now F-root can also see them, so I can once again collect stats from (and complain about stats from) both.

RFC1918 routes are allowed to float around inside AS3557, by the way, since "customers" use them for VPN purposes. So we don't filter out ingress 1918 on customer-facing interfaces; instead we filter out egress 1918 toward our peers/transits. Like I said, I had no idea this was generally thought to be so complicated.

-- Paul Vixie
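The peer-facing half of this policy might be sketched in IOS roughly as follows. This is not from Paul's mail: the ACL numbers and interface are hypothetical, and 192.0.2.0/24 (a documentation prefix) stands in for "our customers'" address space.

```
! Peer/transit-facing interface (all numbers hypothetical):
! inbound - drop packets claiming to come from our own customers
access-list 131 deny   ip 192.0.2.0 0.0.0.255 any
access-list 131 permit ip any any
! outbound - never leak RFC1918 sources to the peer, even though
! 1918 routes float around internally for customer VPNs
access-list 132 deny   ip 10.0.0.0 0.255.255.255 any
access-list 132 deny   ip 172.16.0.0 0.15.255.255 any
access-list 132 deny   ip 192.168.0.0 0.0.255.255 any
access-list 132 permit ip any any
!
interface Serial1/0
 ip access-group 131 in
 ip access-group 132 out
```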
On Tue, 8 Oct 2002, Kelly J. Cooper wrote:
Also, egress filtering is NOT easy,
I don't care. And it doesn't have to be egress filtering as such, filtering packets you receive from your customers will work just as well.
Plus, lots of attacks these days are mixing spoofed and legit traffic, or doing limited spoofing (i.e. picking random addresses on the LAN where they originate to make it past filters).
What's your point? That because someone can do bad thing #1 that can't be prevented, we should allow them to do bad thing #2 that can? If they use (semi-)legitimate addresses, at the very least I can track them, and with some effort I can filter them. If they spoof, then I can't do anything. This is not acceptable.
participants (25)
- Al Rowland
- alex@yuriev.com
- Allan Liska
- Barb Dijker
- bdragon@gweep.net
- Chris Wedgwood
- Dan Hollis
- Doug Barton
- Florian Weimer
- Iljitsch van Beijnum
- Jared Mauch
- Jason Lixfeld
- Jim Hickstein
- Joe Abley
- John M. Brown
- Kelly J. Cooper
- Mark Borchers
- Mike Tancsa
- Paul Vixie
- Paul Vixie
- Petri Helenius
- Randy Bush
- Sean Donelan
- Stephen J. Wilcox
- Valdis.Kletnieks@vt.edu