Filter NTP traffic by packet size?
Curious if anyone else thinks filtering out NTP packets above a certain packet size is a good or terrible idea.
From my brief testing it seems 90 bytes for IPv4 and 110 bytes for IPv6 are typical for a client to successfully synchronize to an NTP server.
If I query a server for its list of peers (ntpq -np <ip>) I've seen packets as large as 522 bytes in a single packet in response to a 54-byte query. I'll admit I'm not 100% clear on what is happening protocol-wise when I perform this query. I see there are multiple packets back and forth between me and the server depending on the number of peers it has? Would I be breaking something important if I started to filter NTP packets larger than 200 bytes into my network?
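For anyone who wants to reproduce that kind of measurement, a quick sketch using tcpdump (the interface name and the 200-byte threshold are placeholders, not values from the thread):

  # Print all NTP traffic with IP header detail, which includes the total length
  # (NTP normally runs on UDP port 123)
  tcpdump -n -i eth0 -v 'udp port 123'

  # Show only NTP packets whose total IP length is 200 bytes or more;
  # a plain client/server sync exchange should rarely trip this
  tcpdump -n -i eth0 'udp port 123 and greater 200'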
On 2/20/2014 12:41 PM, Edward Roels wrote:
Curious if anyone else thinks filtering out NTP packets above a certain packet size is a good or terrible idea.
From my brief testing it seems 90 bytes for IPv4 and 110 bytes for IPv6 are typical for a client to successfully synchronize to an NTP server.
If I query a server for its list of peers (ntpq -np <ip>) I've seen packets as large as 522 bytes in a single packet in response to a 54-byte query. I'll admit I'm not 100% clear on what is happening protocol-wise when I perform this query. I see there are multiple packets back and forth between me and the server depending on the number of peers it has?
Would I be breaking something important if I started to filter NTP packets larger than 200 bytes into my network?
If your equipment supports this, and you're seeing reflected NTP attacks, then it is an effective stopgap to block nearly all of the inbound attack traffic to affected hosts. Some still comes through from NTP servers running on nonstandard ports, but not much. Standard IPv4 NTP response packets are 76 bytes (plus any link-level headers), based on my testing. I have been internally filtering packets of other sizes against attack targets for some time now with no ill-effect. -John
On Feb 20, 2014, at 3:51 PM, John Weekes <jw@nuclearfallout.net> wrote:
If your equipment supports this, and you're seeing reflected NTP attacks, then it is an effective stopgap to block nearly all of the inbound attack traffic to affected hosts. Some still comes through from NTP servers running on nonstandard ports, but not much.
Standard IPv4 NTP response packets are 76 bytes (plus any link-level headers), based on my testing. I have been internally filtering packets of other sizes against attack targets for some time now with no ill-effect.
You can filter packets that are 440 bytes in size and it will do a lot to help the problem, but make sure you conjoin these with protocol udp and port=123 rules to avoid collateral damage. You may also want to look at filtering UDP/80 outright as well, as that is commonly used as an "I'm going to attack port 80" by attackers that don't quite understand the difference between UDP and TCP. Next up, we will see the proto=0 and proto=255 attacks again.. - Jared
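To make that concrete, a minimal sketch of the kind of rule being described, written as Linux iptables purely for illustration (the target address, chain, and 200-byte cutoff are placeholders; on routers this would be an ACL or firewall filter with an equivalent packet-length match):

  # Drop oversized NTP responses (UDP source port 123) headed for an attack target,
  # while leaving the normal ~76-byte client/server exchange alone.
  iptables -A FORWARD -d 198.51.100.10/32 -p udp --sport 123 \
    -m length --length 200:65535 -j DROP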
On Thu, Feb 20, 2014 at 1:03 PM, Jared Mauch <jared@puck.nether.net> wrote:
On Feb 20, 2014, at 3:51 PM, John Weekes <jw@nuclearfallout.net> wrote:
If your equipment supports this, and you're seeing reflected NTP attacks, then it is an effective stopgap to block nearly all of the inbound attack traffic to affected hosts. Some still comes through from NTP servers running on nonstandard ports, but not much.
Also, don't forget to ask those sending the attack traffic to trace where the spoofed packets ingressed their networks.
Standard IPv4 NTP response packets are 76 bytes (plus any link-level headers), based on my testing. I have been internally filtering packets of other sizes against attack targets for some time now with no ill-effect.
You can filter packets that are 440 bytes in size and it will do a lot to help the problem, but make sure you conjoin these with protocol udp and port=123 rules to avoid collateral damage.
Preferably just source-port 123.
You may also want to look at filtering UDP/80 outright as well, as that is commonly used as an "I'm going to attack port 80" by attackers that don't quite understand the difference between UDP and TCP.
Please don't filter UDP/80. It's used by QUIC (http://en.wikipedia.org/wiki/QUIC). Damian
Type Enforcement in the OS Kernel is the place to do that. Todd
On Thu, Feb 20, 2014 at 2:12 PM, Damian Menscher <damian@google.com> wrote:
Please don't filter UDP/80. It's used by QUIC ( http://en.wikipedia.org/wiki/QUIC).
Damian
The folks at QUIC have been advised to not use UDP for a new protocol, and they would be very well advised to not use UDP:80 since that is a well known target port used in the DDoS reflection attacks. As Jared noted, UDP:80 is a cesspool today. Attempting to use it for legit traffic is not smart. CB
On Fri, Feb 21, 2014 at 1:22 PM, Cb B <cb.list6@gmail.com> wrote:
The folks at QUIC have been advised to not use UDP for a new protocol, and they would be very well advised to not use UDP:80 since that is a well known target port used in the DDoS reflection attacks.
Please suggest which protocol has less blocking on the internet today (keeping in mind the full end-to-end stack of CPE, various ISPs, country-level proxies, backbone providers, etc). Damian
On Feb 22, 2014 5:30 AM, "Damian Menscher" <damian@google.com> wrote:
Please suggest which protocol has less blocking on the internet today (keeping in mind the full end-to-end stack of CPE, various ISPs, country-level proxies, backbone providers, etc).
Damian
TCP. But the actual answer is, if you want a new transport protocol, create a new transport protocol with a new protocol number. Overloading the clearly polluted UDP pool will have problems. Happy Eyeballs negotiation may be required for L4. QUIC can do what it wants. Like anyone else, they pay their money and take their chances. But the data point that UDP is polluted is clearly documented, with several folks on this list suggesting tactical fixes that involve limiting UDP, especially UDP:80.
On (2014-02-21 14:37 -0800), Cb B wrote:
QUIC can do what it wants. Like anyone else, they pay their money and take their chances. But, the data point that UDP is polluted is clearly documented with several folks on this list suggesting tactical fixes that involve limiting UDP, especially udp:80
Seth has a good point: UDP:80 is HTTP. If we want a new L4 protocol which works today, we must first ride on top of UDP, since that will work for a lot more people on day 1; this will avoid the chicken-and-egg problem (kit won't be fixed as no one uses the new L4, and no one uses the new L4 as a lot of kit drops it). I'm surprised MinimaLT and QUIC have not put the transport area people in high gear towards standardization of a new PKI-based L4 protocol; I think it's an elegant solution to many practical recurring problems, a solution which has become practical only rather recently. -- ++ytti
On 22 Feb 2014, at 08:47, Saku Ytti <saku@ytti.fi> wrote:
I'm surprised MinimaLT and QUIC have not put the transport area people in high gear towards standardization of a new PKI-based L4 protocol; I think it's an elegant solution to many practical recurring problems, a solution which has become practical only rather recently.
Oh, the transport area people *are* in their high gear. Their frantic movements may just seem static to you as they operate on more drawn-out time scales. (The last transport protocol I worked on became standards-track 16 years after I started working on it.) At this IETF, there will be a “Transport Services” BOF to help find out what exactly the services are that a new transport protocol should provide to the applications. Research platforms such as QUIC, tcpcrypt, MINION etc. are very much in the focus of attention. This time, it would be nice if the operations people got to have a say early on in what gets standardized. (Just be careful not to try to “fight yesterday’s war”.) Grüße, Carsten
yesterday's war = don't bring up that operators are having a real problem with UDP, and that operators have blocked and will continue to block it? Because I think that is what this thread is about. I did bring yesterday's war to the IETF RTCWEB group and got the expected answer. My concern: https://www.ietf.org/mail-archive/web/rtcweb/current/msg11425.html Summary IETF response: the problem I described is already solved by BCP38, nothing to see here, carry on with UDP. https://www.ietf.org/mail-archive/web/rtcweb/current/msg11477.html CB
(Just be careful not to try to "fight yesterday's war".)
yesterday's war = don't bring up that operators are having a real problem with UDP,
No, you don’t. You are having a problem with applications that enable strongly amplified reflection. (Yes, after the days of smurf passed, these are all on UDP, because it is hard to make that mistake with TCP, and nothing else is deployable. Still, your problem is not “with UDP”, but with those applications.) The obvious solution for a new protocol is to make sure that it doesn’t have that problem, whether it is layered on UDP or something else. (In yesterday’s network, it *only* can be layered on UDP, because nothing else goes through NATs.) Also, note that the NTP issue we are seeing right now is not a protocol problem at all, it is all about shoddy implementation. The next problem is that the hammers you have to fix this at the network level really aren’t that good for fixing the rust on those implementations. The QUIC people tell us they are able to talk UDP to about 93 % of the people they can talk TCP to. So a part of the network will be stuck with running their applications on today’s TCP. But that doesn’t mean that we can’t layer useful new stuff on UDP, it just will be less universally available. (With those new applications coming online, blanket filtering of UDP will be exposed even more as the low-ball networking that it is, so I expect the workability of UDP to go up over time, not down.) Grüße, Carsten
The obvious solution for a new protocol is to make sure that it doesn’t have that problem, whether it is layered on UDP or something else.
i'll settle for configured by default not to welcome amplification queries with open arms. let's not throw the baby out with the bathwater (excuse the yank idiom) randy
On 22/02/2014 09:07, Cb B wrote:
Summary IETF response: The problem i described is already solved by bcp38, nothing to see here, carry on with UDP
udp is here to stay. Denying this is no more useful than trying to push the tide back with a teaspoon. It's worth bearing in mind that any open tcp service will send out several acks before giving up. In other words, any standard open tcp socket will provide a level of amplification worth using even if UDP were to be switched off tomorrow. Sure, not as good as the 230x amplification that ntp monlist will give, but it's still a problem. In the long term, it would be more useful to spend time and effort building automated tools to track down the sources of the spoofed packets than trying to deprecate UDP. Nick
On 2/22/2014 7:06 AM, Nick Hilliard wrote:
udp is here to stay. Denying this is no more useful than trying to push the tide back with a teaspoon.
Yes, udp is here to stay, and I quote Randy Bush on this, "I encourage my competitors to block udp." :-p - ferg -- Paul Ferguson, VP Threat Intelligence, IID
Has anyone talked about policing NTP everywhere? Normal traffic levels are extremely low but the DDoS traffic is very high. It would be really cool if peering exchanges could police NTP on their connected members.
Brocade demonstrated how peering exchanges can selectively filter large NTP reflection flows using the sFlow monitoring and hybrid port OpenFlow capabilities of their MLXe switches at last week's Network Field Day event. http://blog.sflow.com/2014/02/nfd7-real-time-sdn-and-nfv-analytics_1986.html
I've talked to some major peering exchanges and they refuse to take any action. Possibly if the requests come from many peering participants it will be taken more seriously?
On Sun, 23 Feb 2014, Chris Laffin wrote:
Ive talked to some major peering exchanges and they refuse to take any action. Possibly if the requests come from many peering participants it will be taken more seriously?
If only there was more focus on the BCP38 offenders who are the real root cause of this problem, I would be more happy. I would be more impressed if the IXes would start to use their sFlow capabilities to find out what IX ports the NTP queries are coming in on, to backtrace the traffic to the BCP38 offenders, than try to block the NTP packets resulting from these src address forged queries. -- Mikael Abrahamsson email: swmike@swm.pp.se
What is the business model for the IX? Unauthorized filtering of incoming traffic risks collateral damage and outing exchange members seems problematic.
The business model seems clearer when offering filtering as a service to downstream networks, the effects are narrowly scoped, and members have control over the traffic they accept from the exchange, e.g. I don't want to accept NTP traffic to any destination that exceeds 1Gbit/s, or is sourced from an NTP server on my blacklist. Giving policy control to the downstream allows them to protect their networks and make business decisions about how they want to prioritize services and customers when resources are constrained.
Would exchange members pay for this type of control?
DDoS mitigation appears to be less of a technical problem than an issue of misaligned costs and benefits. How do you create incentives for upstream providers to invest in solutions when the benefits accrue downstream?
The business model seems clearer when offering filtering as a service to downstream networks, the effects are narrowly scoped, and members have control over the traffic they accept from the exchange, e.g. I don't want to accept NTP traffic to any destination that exceeds 1Gbit/s, or is sourced from an NTP server on my blacklist. Giving policy control to the downstream allows them to protect their networks and make business decisions about how they want to prioritize services and customers when resources are constrained.
Would exchange members pay for this type of control?
Speaking only for myself: No. The L2 IXes I connect to should use their resources for packet switching, not filtering. Way too many things that could go wrong if we go down the filtering path... Steinar Haug, AS 2116
On 23 Feb 2014, at 18:29, sthaug@nethelp.no wrote:
Speaking only for myself: No. The L2 IXes I connect to should use their resources for packet switching, not filtering. Way too many things that could go wrong if we go down the filtering path…
Indeed. Most of the L2 IXes run on very “cost-optimized” solutions just to switch as fast as they can without going into the details of what actually is being switched - at least in Europe. To do some additional checks would require extensive testing, platforms capable of doing this in a predictable manner (stability, performance) and obviously - a lot more work than it costs today. -- "There's no sense in being precise when you don't know what you're talking about." - John von Neumann | Łukasz Bromirski | jid:lbromirski@jabber.org | http://lukasz.bromirski.net
On Sun, 23 Feb 2014, Lukasz Bromirski wrote:
To do some additional checks would require extensive testing, platforms capable of doing this in predictable manner (stability, performance) and obviously - a lot more work than it costs today.
A lot of IXes already do sFlow so all the work I proposed would be on the sFlow collector side which has nothing to do with the packet forwarding performance of the IX. -- Mikael Abrahamsson email: swmike@swm.pp.se
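For a rough sketch of the collector-side work being described, using the common sflowtool utility (the collector port shown is the sFlow default; the grep is a deliberately crude first cut, and the exact field layout should be checked against sflowtool's line-mode documentation):

  # Decode sFlow datagrams from the IX switches and keep only sampled flow
  # records that mention port 123; the input-port field in each FLOW record
  # identifies the member port the spoofed queries are arriving on.
  sflowtool -p 6343 -l | grep '^FLOW' | grep ',123,'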
On Feb 23, 2014, at 9:50 AM, Lukasz Bromirski <lukasz@bromirski.net> wrote:
To do some additional checks would require extensive testing, platforms capable of doing this in predictable manner (stability, performance) and obviously - a lot more work than it costs today.
What are the costs and stability impacts of the DDOS that are running now? Everyone is asserting it's someone else's problem. Which in a sense it is. But what goes around will come around. If you are not BCP 38 you are sourcing problems. If you are transiting or IXPing someone who isn't BCP 38 you are enabling problems. Is what we are doing now good enough? Probably not. It would take fewer IXP and transit providers adding analysis capability to backtrack than endpoints. So the enablers are more capable of effecting change. They are less to blame in the first place, but not blameless. To assert blamelessness is a form of Tragedy of the Commons. If it's crossing your link or switch, you ARE in the responsibility chain. The last thing I would like to see is large orgs starting to retreat away from open interconnect because of DDOS coming in from less well managed parts of the net. Perhaps BCP 38 implementation will rise fast enough that these things will not become real, but we have been hearing that for 15 plus years now... At some point, the "38 will work by itself!" line approaches "Look at the Emperors' fine new clothes!". -george william herbert george.herbert@gmail.com Sent from Kangphone
Newb question ... other than retrofitting, what stands in the way of making BCP38 a condition of peering? Royce
On Sun, Feb 23, 2014 at 10:48 AM, Royce Williams <royce@techsolvency.com> wrote:
Newb question ... other than retrofitting, what stands in the way of making BCP38 a condition of peering?
In other words ... if it's a problem of awareness, could upstreams automate warning their downstreams? What about teaching RADb to periodically test for BCP38 compliance, send soft warnings (with links to relevant pages on www.bcp38.info), and publish stats? Continuing my naïveté ...what if upstreams required BCP38 compliance before updating BGP filters? This would require a soft rollout -- we'd have to give them a few months' warning to not interfere with revenue streams -- but it sounds like nothing's going to change until it starts hitting the pocketbooks. Royce
On 2/23/14, 12:11 PM, Royce Williams wrote:
On Sun, Feb 23, 2014 at 10:48 AM, Royce Williams <royce@techsolvency.com> wrote:
Newb question ... other than retrofitting, what stands in the way of making BCP38 a condition of peering?
Peering is frequently but hardly exclusively on a best-effort basis, e.g. you agree to exchange traffic, but also agree to hold each other harmless if something bad happens. That's an easy enough contract for most entities to enter into.
In other words ... if it's a problem of awareness, could upstreams automate warning their downstreams? What about teaching RADb to periodically test for BCP38 compliance, send soft warnings (with links to relevant pages on www.bcp38.info), and publish stats?
Continuing my naïveté ...what if upstreams required BCP38 compliance before updating BGP filters?
my upstreams adjust their filters when I update radb.
This would require a soft rollout -- we'd have to give them a few months' warning to not interfere with revenue streams -- but it sounds like nothing's going to change until it starts hitting the pocketbooks.
Royce
Dear All
I released a bit of a blog article last week about filtering NTP request traffic via packet size which might be of interest!
So far I know of:
- an unknown tool makes a default request packet of 50 bytes in size
- ntpdos.py makes a default request packet of 60 bytes in size
- ntp_monlist.py makes a default request packet of 234 bytes in size
- monlist from ntpdc makes a default request packet of 234 bytes in size
In contrast, a normal NTP request for a time sync is about 90 bytes in size.
More information and some graphs can be found here: http://www.micron21.com/ddos-ntp.php
Kindest Regards
James Braunegg
P: 1300 769 972 | M: 0488 997 207 | E: james.braunegg@micron21.com | W: www.micron21.com/ddos-protection | T: @micron21
On Feb 23, 2014, at 4:39 PM, James Braunegg <james.braunegg@micron21.com> wrote:
So far I know of: an unknown tool makes a default request packet of 50 bytes in size; ntpdos.py makes a default request packet of 60 bytes in size; ntp_monlist.py makes a default request packet of 234 bytes in size; monlist from ntpdc makes a default request packet of 234 bytes in size.
In contrast, a normal NTP request for a time sync is about 90 bytes in size.
Do these .py's do anything else different to the query packets than "normal" ntp clients? (A TTL of 254 instead of the more common 63 for "normal" clients, for instance.)
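The request sizes James lists suggest a complementary filter in front of an NTP server itself; a rough iptables sketch (the 200-byte cutoff sits between the ~90-byte legitimate requests and the ~234-byte monlist probes quoted above, and should be checked against your own captures):

  # Drop unusually large NTP requests arriving at the server (UDP destination port 123);
  # normal client sync requests stay well under 100 bytes on the wire, monlist probes do not.
  iptables -A INPUT -p udp --dport 123 -m length --length 200:65535 -j DROP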
Hi Royce, On 23/02/2014 20:48, Royce Williams wrote:
Newb question ... other than retrofitting, what stands in the way of making BCP38 a condition of peering?
Good point! And a simple answer: most peers wouldn't support the hassle yet, thus reducing peering density and interest. I operate a small IXP in southern France and none of my members is currently BCP38 compliant. Of 16 members only one is known to work on the issue. Funny thing being that most active members are also switching to Juniper routers and all had been contributing as NTP reflectors because of JunOS bugs. I'd rather consider implementing ACLs on member ports to filter out illegitimate prefixes (cannot do OpenFlow on cheap L2 switches :( ) rather than making BCP38 compliance mandatory. Best regards, -- Jérôme Nicolle +33 6 19 31 27 14
On Sun, 23 Feb 2014, Peter Phaal wrote:
What is the business model for the IX? Unauthorized filtering of incoming traffic risks collateral damage and outing exchange members seems problematic.
I never proposed for them to filter. I was talking about *finding out* who the sources of these ddos attacks are (i.e., networks that don't do BCP38 filtering) and publishing this data. -- Mikael Abrahamsson email: swmike@swm.pp.se
Ive talked to some major peering exchanges and they refuse to take any action. Possibly if the requests come from many peering participants it will be taken more seriously?
i have talked to fiber providers and they have refused to take action. perhaps if requests came from hundreds of the unclued zombies they would take it seriously. randy
We have had pretty good success in identifying offenders by simply monitoring flow data for NTP flows destined for our address space with packet counts higher than 100; we disable them and notify the maintainer to correct the configuration on the host. Granted, we only service about 1,000 different customers. In cases where a large amount of incoming traffic was generated, we have been able to temporarily blackhole offenders so as not to saturate smaller downstream connections until traffic levels die down; unfortunately it takes a few days for that to happen, and many service providers outside the US don't seem to be very responsive to their published abuse address. I prefer targeted, temporary, and communicated filtering for actual incidents over blanket filtering for potential incidents.
-- Ray Patrick Soucy Network Engineer University of Maine System T: 207-561-3526 F: 207-561-3531 MaineREN, Maine's Research and Education Network www.maineren.net
I talked to one of our upstream IP transit providers and was able to negotiate individual policing levels on NTP, DNS, SNMP, and Chargen by UDP port within our aggregate policer. As mentioned, the legitimate traffic levels of these services are near 0. We gave each service many times the amount needed to satisfy subscribers, but not enough to overwhelm network links during an attack. --Blake
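For a rough sense of what per-protocol policing can look like, a Linux sketch using iptables' simple rate limiter (the 100 packets/second figure is an arbitrary placeholder, and -m limit is one shared token bucket per rule rather than a per-flow policer; the arrangement described above actually lives in the upstream's router policers):

  # Pass a modest rate of inbound NTP responses and drop the excess;
  # the same pattern applies with --sport 53, 161 and 19 for DNS, SNMP and Chargen.
  iptables -A FORWARD -p udp --sport 123 -m limit --limit 100/second --limit-burst 200 -j ACCEPT
  iptables -A FORWARD -p udp --sport 123 -j DROP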
Why wouldn't you just block chargen entirely? Is it actually still being used these days for anything legitimate?
Malcolm Staudinger
Information Security Analyst | EIS EarthLink
E: mstaudinger@corp.earthlink.com
As an ISP in the USA, we try to follow the FCC's guidelines on a policy of non-blocking. Not just because the FCC says so, but because we think it's in our and our customers' best interests. We don't dictate what our customers can do with their internet connection as long as they're not breaking the law or negatively affecting the service for others. --Blake
On Feb 25, 2014, at 12:22 PM, Staudinger, Malcolm <mstaudinger@corp.earthlink.com> wrote:
Why wouldn't you just block chargen entirely? Is it actually still being used these days for anything legitimate?
More politely stated, it’s not the responsibility of the operator to decide what belongs on the network and what doesn’t. Users can run any service that’s not illegal, or even reuse ports for other applications. That being said, commonly exploited ports (TCP 25, for example) are often blocked. This is usually done to block or protect an application, though, not to single out a particular port number.
On Wed, Feb 26, 2014 at 6:56 AM, Keegan Holley <no.spam@comcast.net> wrote:
More politely stated, it’s not the responsibility of the operator to decide what belongs on the network and what doesn’t. Users can run any services that’s not illegal or even reuse ports for other applications. That being said commonly exploited ports (TCP 25 for example) are often blocked. This is usually done to block or protect an application though not to single out a particular port number.
Don't most residential ISPs already block port 25 outbound? http://www.postcastserver.com/help/Port_25_Blocking.aspx Blocking chargen at the edge doesn't seem to be outside of the realm of possibilities.
----- Original Message -----
From: "Brandon Galbraith" <brandon.galbraith@gmail.com>
On Wed, Feb 26, 2014 at 6:56 AM, Keegan Holley <no.spam@comcast.net> wrote:
More politely stated, it’s not the responsibility of the operator to decide what belongs on the network and what doesn’t. Users can run any services that’s not illegal or even reuse ports for other applications.
Blocking chargen at the edge doesn't seem to be outside of the realm of possibilities.
All of these conversations are variants of "how easy is it to set up a default ACL for loops, and then manage exceptions to it?".
Assuming your gear permits it, I don't personally see all that much Bad Actorliness in setting a relatively tight bidirectional ACL for Random Edge Customers, and opening up -- either specific ports, or just "to a less-/un-filtered ACL" on specific request.
The question is -- as it is with BCP38 -- *can the edge gear handle it*?
And if not: why not? (Protip: because buyers of that gear aren't agitating for it)
Cheers, -- jra
-- Jay R. Ashworth | Baylink | jra@baylink.com | St Petersburg FL USA | BCP38: Ask For It By Name! | http://www.bcp38.info | +1 727 647 1274
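As a small illustration of that kind of default edge ACL, a Linux-flavoured sketch (the subscriber range and the port list are placeholders; per-customer exceptions would be inserted as more specific ACCEPT rules ahead of these):

  # Default-deny a couple of commonly abused services for a residential edge range;
  # anything a customer legitimately needs gets an explicit exception in front.
  iptables -A FORWARD -s 100.64.0.0/10 -p tcp --dport 25 -j DROP   # direct-to-MX SMTP
  iptables -A FORWARD -s 100.64.0.0/10 -p udp --dport 19 -j DROP   # chargen out
  iptables -A FORWARD -d 100.64.0.0/10 -p udp --dport 19 -j DROP   # chargen in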
It depends on how many customers you have and what sort of contract you have with them, if any. A significant amount of attack traffic comes from residential networks, where a “one-size-fits-all” policy is definitely best.
I'm wondering how many operators don't have systems in place to quickly and efficiently filter problem host systems. I see a lot of talk of ACL usage, but not much about uRPF and black hole filtering.
There are a few white papers that are worth a read:
http://www.cisco.com/c/dam/en/us/products/collateral/security/ios-network-fo...
http://www.cisco.com/web/about/security/intelligence/urpf.pdf
If you have uRPF enabled on all your access routers then you can configure routing policy such that advertising a route for a specific host system will trigger uRPF to drop the traffic at the first hop, in hardware. This prevents you from having to maintain ACLs or even give out access to routers. Instead, you can use a small router or daemon that disables hosts by advertising them as a route (for example, we just use a pair of small ISR 1841 routers for this); this in turn can be tied into IPS or a web UI allowing your NOC to disable a problem host at the first hop and prevent its traffic from propagating throughout the network without having to know the overall architecture of the network or determine the best place to apply an ACL.
I've seen a lot of talk on trying to filter specific protocols, or rate-limit, etc., but I really feel that isn't the appropriate action to take. I think disabling a system that is a problem and notifying its maintainer that they need to correct the issue is much more sustainable. There are also limitations on how much can be done through the use of ACLs. uRPF and black hole routing scale much better, especially in response to a denial of service attack.
When the NTP problems first started popping up, we saw incoming NTP of several Gb; without the ability to quickly identify and filter this traffic, a lot of our users would have been dead in the water because the firewalls they use just can't handle that much traffic; our routers, on the other hand, have no problem throwing those packets out.
I only comment on this because one of the comments made to me was "Can't we just use a firewall to block it?". It took me over an hour to explain that the firewalls in use didn't have the capacity to handle this level of traffic -- and when I tried to discuss hardware vs. software filtering, I got a deer-in-the-headlights look. :-)
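As a single-box illustration of the same idea, a rough Linux analogue (this is not the Cisco uRPF/trigger-router setup described above, just the nearest equivalent on one Linux access router; the host address is a placeholder):

  # Strict reverse-path filtering on the access router (the uRPF strict-mode equivalent)
  sysctl -w net.ipv4.conf.all.rp_filter=1

  # "Advertise" a problem host by installing a blackhole route for it:
  # traffic *to* the host is discarded, and with strict rp_filter enabled its
  # address no longer passes source validation, so traffic *from* it is dropped
  # on ingress as well.
  ip route add blackhole 203.0.113.45/32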
You mean, like Bcp38(.info)?
I'm wondering how many operators don't have systems in place to quickly and efficiently filter problem host systems. I see a lot of talk of ACL usage, but not much about uRPF and black hole filtering.
There are a few white papers that are worth a read:
http://www.cisco.com/c/dam/en/us/products/collateral/security/ios-network-fo...
http://www.cisco.com/web/about/security/intelligence/urpf.pdf
If you have uRPF enabled on all your access routers then you can configure routing policy such that advertising a route for a specific host system will trigger uRPF to drop the traffic at the first hop, in hardware.
This prevents you from having to maintain ACLs or even give out access to routers. Instead, you can use a small router or daemon that disables hosts by advertising them as a route (for example, we just use a pair of small ISR 1841 routers for this); this in turn can be tied into IPS or a web UI allowing your NOC to disable a problem host at the first hop and prevent its traffic from propagating throughout the network without having to know the overall architecture of the network or determine the best place to apply an ACL.
I've seen a lot of talk on trying to filter specific protocols, or rate-limit, etc. but I really feel that isn't the appropriate action to take. I think disabling a system that is a problem and notifying its maintainer that they need to correct the issue is much more sustainable. There are also limitations on how much can be done through the use of ACLs. uRPF and black hole routing scale much better, especially in response to a denial of service attack.
When the NTP problems first started popping up, we saw incoming NTP of several Gb, without the ability to quickly identify and filter this traffic a lot of our users would have been dead in the water because the firewalls they use just can't handle that much traffic; our routers, on the other hand, have no problem throwing those packets out.
I only comment on this because one of the comments made to me was "Can't we just use a firewall to block it?". It took me over an hour to explain that the firewalls in use didn't have the capacity to handle this level of traffic -- and when I tried to discuss hardware vs. software filtering, I got a deer-in-the-headlights look. :-)
On Thu, Feb 27, 2014 at 8:57 PM, Keegan Holley <no.spam@comcast.net> wrote:
It depends on how many customers you have and what sort of contract you have with them if any. A significant amount of attack traffic comes from residential networks where a "one-size-fits-all" policy is definitely best.
On Feb 26, 2014, at 4:01 PM, Jay Ashworth <jra@baylink.com> wrote:
----- Original Message -----
From: "Brandon Galbraith" <brandon.galbraith@gmail.com>
On Wed, Feb 26, 2014 at 6:56 AM, Keegan Holley <no.spam@comcast.net> wrote:
More politely stated, it's not the responsibility of the operator to decide what belongs on the network and what doesn't. Users can run any service that's not illegal, or even reuse ports for other applications.
Blocking chargen at the edge doesn't seem to be outside of the realm of possibilities.
All of these conversations are variants of "how easy is it to set up a default ACL for loops, and then manage exceptions to it?".
Assuming your gear permits it, I don't personally see all that much Bad Actorliness in setting a relatively tight bidirectional ACL for Random Edge Customers, and opening up -- either specific ports, or just "to a less-/un-filtered ACL" on specific request.
The question is -- as it is with BCP38 -- *can the edge gear handle it*?
And if not: why not? (Protip: because buyers of that gear aren't agitating for it)
Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://www.bcp38.info 2000 Land Rover DII St Petersburg FL USA BCP38: Ask For It By Name! +1 727 647 1274
-- Ray Patrick Soucy Network Engineer University of Maine System
T: 207-561-3526 F: 207-561-3531
MaineREN, Maine's Research and Education Network www.maineren.net
-- Sent from my Android phone with K-9 Mail. Please excuse my brevity.
When I was looking at the website before, I didn't really see any mention of uRPF, just the use of ACLs. Maybe I missed it, but it's not encouraging if I can't spot it quickly. I just tried a search and the only thing that popped up was a how-to for a Cisco 7200 VXR. http://www.bcp38.info/index.php/HOWTO:CISCO:7200VXR On Fri, Feb 28, 2014 at 9:04 AM, Jay Ashworth <jra@baylink.com> wrote:
You mean, like Bcp38(.info)?
-- Ray Patrick Soucy Network Engineer University of Maine System T: 207-561-3526 F: 207-561-3531 MaineREN, Maine's Research and Education Network www.maineren.net
----- Original Message -----
From: "Ray Soucy" <rps@maine.edu>
When I was looking at the website before, I didn't really see any mention of uRPF, just the use of ACLs. Maybe I missed it, but it's not encouraging if I can't spot it quickly. I just tried a search and the only thing that popped up was a how-to for a Cisco 7200 VXR.
Well, I do mention it, right there on the home page: """ BCP38 filtering to block these packets is most easily handled right at the very edge of the Internet: where customer links terminate in the first piece of provider 'aggregation' gear, like a router, DSLAM, or CMTS. Much to most of this gear already has a 'knob' which can be turned on, which simply drops these packets on the floor as they come in from the customer's PC. """ I simply didn't *name* the knob, cause the detail seemed out-of-scope for that context. Where it would get named would be on the "information for Audience" pages relevant to access providers, which I have not written because -- not being a provider -- I have insufficient background to be accurate. We welcome contributions from people in those positions... you, perhaps? Be bold! :-) Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://www.bcp38.info 2000 Land Rover DII St Petersburg FL USA BCP38: Ask For It By Name! +1 727 647 1274
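For what it's worth, one common spelling of that "knob" on the Cisco-style aggregation gear discussed in this thread is strict-mode uRPF on the customer-facing interface; a minimal sketch follows (the interface name is a placeholder, and feature names and behaviour vary by platform and access technology).

  interface GigabitEthernet0/2.100
   description customer aggregation
   ! drop packets whose source address does not route back via this interface
   ip verify unicast source reachable-via rx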
On Fri, Feb 28, 2014 at 9:02 AM, Ray Soucy <rps@maine.edu> wrote:
If you have uRPF enabled on all your access routers then you can configure routing policy such that advertising a route for a specific host system will trigger uRPF to drop the traffic at the first hop, in hardware.
note that 'in hardware' is dependent upon the model used... note that stuffing 2k (or 5 or 10 or...) extra routes into your edge device could make it super unhappy. your points are valid for your designed network... they may not work everywhere. making the features you point out work better or be more widely known seems like a great idea though :)
+1 in my experience uRPF gets enabled, breaks something or causes confusion (usually related to multi-homing), and then gets disabled. On Feb 28, 2014, at 11:49 AM, Christopher Morrow <morrowc.lists@gmail.com> wrote:
On Fri, Feb 28, 2014 at 9:02 AM, Ray Soucy <rps@maine.edu> wrote:
If you have uRPF enabled on all your access routers then you can configure routing policy such that advertising a route for a specific host system will trigger uRPF to drop the traffic at the first hop, in hardware.
note that 'in hardware' is dependent upon the model used... note that stuffing 2k (or 5 or 10 or...) extra routes into your edge device could make it super unhappy.
your points are valid for your designed network... they may not work everywhere. making the features you point out work better or be more widely known seems like a great idea though :)
On Mar 1, 2014, at 9:14 AM, Keegan Holley <no.spam@comcast.net> wrote:
+1 in my experience uRPF gets enabled, breaks something or causes confusion (usually related to multi-homing), and then gets disabled.
Enabling loose-check - even with allow-default - is useful solely for S/RTBH, if nothing else. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Luck is the residue of opportunity and design. -- John Milton
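A minimal sketch of that knob on an IOS-style transit or peering interface (the interface name is a placeholder): loose mode only requires that *some* route other than a Null0 discard exist for the source, so it tends not to break multihoming, while still letting an S/RTBH-advertised /32 whose next hop resolves to Null0 be dropped.

  interface TenGigabitEthernet0/0/0
   description transit / peering edge
   ip verify unicast source reachable-via any allow-default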
On Wed, 26 Feb 2014 11:44:55 -0600, Brandon Galbraith said:
Blocking chargen at the edge doesn't seem to be outside of the realm of possibilities.
What systems (a) still have chargen enabled and (b) are common enough to make it a viable DDoS vector? Just wondering if I need to go around and find users of mine that need to be smacked around with a large trout....
On Feb 26, 2014, at 5:33 PM, Valdis.Kletnieks@vt.edu wrote:
On Wed, 26 Feb 2014 11:44:55 -0600, Brandon Galbraith said:
Blocking chargen at the edge doesn't seem to be outside of the realm of possibilities.
What systems (a) still have chargen enabled and (b) are common enough to make it a viable DDoS vector? Just wondering if I need to go around and find users of mine that need to be smacked around with a large trout....
First, if you didn't see this excellent paper, check it out: http://www.internetsociety.org/doc/amplification-hell-revisiting-network-pro... a) Yes - printers and other devices have it. b) yes. I only ran the scan once, but had ~130k devices respond. http://chargenscan.org/chargenip2asn.txt - Jared
On 2/27/2014 8:09 AM, Randy Bush wrote:
I only ran the scan once, but had ~130k devices respond.
is there any modern utility in chargen?
I know of none; maybe I'm too young. So we could conclude we don't need that service running. But some folk use ports for services other than the intended ones - like tcp:443 for VPN ;-) So if we can get enough abusable end-systems fixed (hope so *), and we get enough source address validation (bcp38) to reduce sources of badness (hope so *), then the network won't need to block that port and someone can make inventive use of it ;-) (*) and working on it. Frank PS: seems something is going on already - had one outside party complain about traffic from our IP on udp:19 - better start scanning proactively
On Wed, Feb 26, 2014 at 11:09 PM, Randy Bush <randy@psg.com> wrote:
I only ran the scan once, but had ~130k devices respond. is there any modern utility in chargen?
Do ne'er-do-wells hitting IRC users with "DCC CHAT" requests targeted to trick the victim into connecting to port 19/tcp count as a modern use? I remember that was a dirty trick in the late '90s that would today be called a DoS, since the result was to crash desktop chat software -- nonetheless, it's the only thing I heard of anyone using chargen for until recently. Well, if you enable chargen on a large number of hosts and allow directed broadcasts, an artificially created chargen storm could be one way to stress-test a WAN link, or to help validate QoS prioritization. Chargen is supposed to be a useful measurement and debugging tool for developing a TCP/IP stack. I think it has little use nowadays, and there are more sophisticated tools around today. I would say chargen may have some utility, but it should not be a service turned on, provided, or offered outside the secure confines of a testing lab. In other words: chargen for testing in a lab, sure. Chargen on production devices connected to the public internet: bad idea -- -JH
* randy@psg.com (Randy Bush) [Thu 27 Feb 2014, 06:10 CET]:
is there any modern utility in chargen?
No. But as we're not Apple, we don't get to decide what's good for the end user. Who knows, when CGNs become commonplace we'll start to run out of ephemeral ports and we'll have to start using ports < 1024 too. Would be a shame if their use were impeded by old ACLs lying around. -- Niels. -- "It's amazing what people will do to get their name on the internet, which is odd, because all you really need is a Blogspot account." -- roy edroso, alicublog.blogspot.com
is there any modern utility in chargen? Who knows, when CGNs become commonplace we'll start to run out of ephemeral ports and we'll have to start using ports < 1024 too. Would be a shame if their use were impeded by old ACLs lying around.
woah! i did not suggest acls. i was assuming that one just disables the 'service'. randy
is there any modern utility in chargen? Who knows, when CGNs become commonplace we'll start to run out of ephemeral ports and we'll have to start using ports < 1024 too. Would be a shame if their use were impeded by old ACLs lying around.
* randy@psg.com (Randy Bush) [Fri 28 Feb 2014, 17:23 CET]:
woah! i did not suggest acls. i was assuming that one just disables the 'service'.
Oh, I'm sorry! I honestly thought this thread was about filtering as a way of mitigating abuse. Yes, of course one should not run the service, especially not UDP. -- Niels.
On 2/26/2014 5:33 PM, Valdis.Kletnieks@vt.edu wrote:
On Wed, 26 Feb 2014 11:44:55 -0600, Brandon Galbraith said:
Blocking chargen at the edge doesn't seem to be outside of the realm of possibilities.
What systems (a) still have chargen enabled and (b) are common enough to make it a viable DDoS vector? Just wondering if I need to go around and find users of mine that need to be smacked around with a large trout....
I would do it. I scanned all my public and private networks and found a few. I've added it to our customer ACLs to stop it. There were also a couple of internal routers where someone had turned it on, or left it on, that were missed. Those are now fixed.
nmap -T4 -oG chargen_scan.txt -sS -sU -p 19 <your netblocks here>
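If it helps with triage, a quick way to confirm that a hit from a scan like that really is chargen (the address below is a placeholder, and exact flags vary between netcat flavours) is to poke it with netcat; any UDP datagram should come back as a burst of printable ASCII:

  echo x | nc -u -w 2 192.0.2.45 19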
On Feb 26, 2014, at 12:44 PM, Brandon Galbraith <brandon.galbraith@gmail.com> wrote:
On Wed, Feb 26, 2014 at 6:56 AM, Keegan Holley <no.spam@comcast.net> wrote:
More politely stated, it's not the responsibility of the operator to decide what belongs on the network and what doesn't. Users can run any service that's not illegal, or even reuse ports for other applications. That being said, commonly exploited ports (TCP 25, for example) are often blocked. This is usually done to block or protect an application, though, not to single out a particular port number.
Don't most residential ISPs already block port 25 outbound? http://www.postcastserver.com/help/Port_25_Blocking.aspx
Blocking chargen at the edge doesn't seem to be outside of the realm of possibilities.
As I said, SMTP is blocked because it's the default port for a commonly run and often misconfigured application. Blocking the chargen port is definitely reasonable, but chargen is not a popular application. Most people use the port as a clever non-default port for some other service like ssh.
On Tue, Feb 25, 2014 at 11:22 AM, Staudinger, Malcolm < mstaudinger@corp.earthlink.com> wrote:
Why wouldn't you just block chargen entirely? Is it actually still being used these days for anything legitimate?
Long-term blocking based on port number is sure to result in problems. It's more appropriate to block chargen to a source shown to be subject to abuse. Simply blocking port 19 globally could very well interfere with other uses and disrupt connectivity for applications not related to chargen that just happen to use port 19 as an endpoint.
Thanks to the wonder that is SRV records, users MAY, are technically quite free to, and sometimes do locate critical services on arbitrary alternative port numbers, such as perhaps port 19, using the DNS SRV response instead of having clients rely upon a well-known port registration with IANA. In this case, policing or discarding port 19 traffic to hosts that do not use port 19 for chargen is a connectivity disruption. Among known hosts that agree to communicate on port 19 without requiring a port registration, port number 19 may be used for any purpose, not necessarily chargen related. The same goes for ports 123, 25, etc., both UDP and TCP. Although the port is not in the traditional ephemeral range, nothing precludes its use as an ephemeral port for various application functions, either.
The "well known port" assignments are advisory or recommended, for use by other unknown processes. The purpose of well known port assignments is for service location; the port number is not a sequence of application identification bits. The QUIC protocol using port 80/udp was a great example of a different application using a well-known port address, besides the one that would appear as the well-known port registration.
Malcolm Staudinger Information Security Analyst | EIS EarthLink
E: mstaudinger@corp.earthlink.com
-- -JH
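To make the SRV point concrete, here is a hypothetical lookup; the service, hostnames and the answer shown are invented for illustration. The client learns the port from the DNS answer ("priority weight port target"), not from the IANA registry, so nothing stops an operator from publishing port 19 for an entirely legitimate service.

  dig +short SRV _xmpp-client._tcp.example.com.
  # hypothetical answer: 10 0 19 chat.example.com.
  # i.e. clients honouring the record connect to TCP/19 on chat.example.com,
  # even though port 19 is registered to chargen.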
On 2/26/2014 11:03 PM, Jimmy Hess wrote:
The "well known port" assignments are advisory or recommended, for use by other unknown processes. the purpose of well known port assignments is for service location; the port number is not a sequence of application identification bits.
The QUIC protocol using port 80/udp was a great example of a different application using a well-known port address, besides the one that would appear as the well-known port registration.
Sometimes bypassing IANA for port registration works in your favor, sometimes it doesn't. Of course there should be a way to set up connections that aren't listed in IANA, but using well-known low ports isn't safe. It's biting us and we've got to counter it. UDP doesn't do enough setup on a connection for you to really figure out if it's chargen or some new traffic type. Even if you have the luxury of putting a stateful firewall in place and filtering based on what traffic is there, the only valid choice for an ISP would be to say "permit only the registered service chargen on port 19, oh, and block it anyway because nobody should be using chargen." Taking the high road about blocking services was an option 10 years ago. The gear couldn't do it and most internet users were still somewhat tech savvy. The landscape has changed. I can't convince my cousin not to click on ransomware. I think my only viable option is to filter residential customers for their own good, and if someone actually wants/needs one of these ports opened then we can work with them.* * ISPs have also reduced their abuse staffing by blocking port 25. It's either that or just acknowledge that you won't be able to process all your abuse emails because there are too many people spamming/too many compromised machines. So in some ways it's a financial need for us to block even more aggressively than big ISPs because we can't afford to staff abuse for things that are automatically fixable.
On Tue, Feb 25, 2014 at 8:58 AM, Blake Hudson <blake@ispn.net> wrote:
I talked to one of our upstream IP transit providers and was able to negotiate individual policing levels on NTP, DNS, SNMP, and Chargen by UDP port within our aggregate policer. As mentioned, the legitimate traffic levels of these services are near 0. We gave each service many times the amount needed to satisfy subscribers, but not enough to overwhelm network links during an attack.
--Blake
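Expressed on IOS-style gear, the kind of per-service policing Blake describes might look roughly like the sketch below. This is not the provider's actual policy: the class names, the 5 Mb/s rate, the interface and the port list are placeholders, and a real deployment would size each class from observed baselines.

  ip access-list extended UDP-REFLECTION-SRC
   permit udp any eq 19 any
   permit udp any eq 53 any
   permit udp any eq 123 any
   permit udp any eq 161 any
  class-map match-any UDP-REFLECTION
   match access-group name UDP-REFLECTION-SRC
  policy-map EDGE-IN
   class UDP-REFLECTION
    police cir 5000000
     conform-action transmit
     exceed-action drop
  interface TenGigabitEthernet0/1/0
   service-policy input EDGE-IN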
Blake, What you have done is common and required to keep the network up at this time. It is perfectly appropriate to have a baseline and enforce some multiple of the baseline with a policer. People who say this is the wrong thing to do are not running a network of significant size, end of story. CB
Chris Laffin wrote the following on 2/23/2014 8:58 AM:
I've talked to some major peering exchanges and they refuse to take any action. Possibly if the requests come from many peering participants it will be taken more seriously?
On Feb 22, 2014, at 19:23, "Peter Phaal" <peter.phaal@gmail.com> wrote:
Brocade demonstrated how peering exchanges can selectively filter large NTP reflection flows using the sFlow monitoring and hybrid port OpenFlow capabilities of their MLXe switches at last week's Network Field Day event.
http://blog.sflow.com/2014/02/nfd7-real-time-sdn-and-nfv-analytics_1986.html
On Sat, Feb 22, 2014 at 4:43 PM, Chris Laffin <claffin@peer1.com> wrote: Has anyone talked about policing NTP everywhere? Normal traffic levels are extremely low but the DDoS traffic is very high. It would be really cool if peering exchanges could police ntp on their connected members.
On Feb 22, 2014, at 8:05, "Paul Ferguson" <fergdawgster@mykolab.com> wrote:
On 2/22/2014 7:06 AM, Nick Hilliard wrote:
On 22/02/2014 09:07, Cb B wrote:
Summary IETF response: The problem I described is already solved by bcp38, nothing to see here, carry on with UDP
udp is here to stay. Denying this is no more useful than trying to push the tide back with a teaspoon.
Yes, udp is here to stay, and I quote Randy Bush on this, "I encourage my competitors to block udp." :-p
- ferg
-- Paul Ferguson VP Threat Intelligence, IID PGP Public Key ID: 0x54DC85B2
Hi Chris, Le 23/02/2014 01:43, Chris Laffin a écrit :
It would be really cool if peering exchanges could police ntp on their connected members.
Well, THIS looks like the worst idea ever. Wasting ASIC resources on IXP dataplanes is a wet dream for anyone willing to kill the network. IXP neutrality is a key factor in maintaining reasonable interconnection density. Instead, IXPs _could_ enforce BCP38 too. Mapping the route-server's received routes to ingress _and_ egress ACLs on IXP ports would mitigate the role of BCP38 offenders within member ports. It's almost like uRPF in an intelligent and useable form. A noticeable side-effect is that members would be encouraged to announce their entire customer-cones to ensure egress traffic from a non-exchanged prefix would not be dropped on the IX's port. By the way, would anyone know how to generate OpenFlow messages to push such filters to member ports? Would there be any smart way to do that on non-OpenFlow-enabled dataplanes (C6k...)? Best regards, -- Jérôme Nicolle +33 6 19 31 27 14
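On the "how" question, the prefix half of such filters is already routinely generated from IRR data; for instance, something like bgpq3 could expand a member's registered as-set into a prefix list that the IXP's provisioning system then renders into per-port ACLs. The AS number and as-set name below are placeholders, and this says nothing about the much harder dataplane side of the question.

  # expand a member's registered customer cone into an IOS-style prefix-list
  bgpq3 -4 -A -l AS64500-CONE AS64500:AS-CUSTOMERS
  # IPv6 equivalent
  bgpq3 -6 -A -l AS64500-CONE-V6 AS64500:AS-CUSTOMERS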
----- Original Message -----
From: "Jérôme Nicolle" <jerome@ceriz.fr>
Le 23/02/2014 01:43, Chris Laffin a écrit :
It would be really cool if peering exchanges could police ntp on their connected members.
Well, THIS looks like the worst idea ever. Wasting ASIC resources on IXP dataplanes is a wet dream for anyone willing to kill the network. IXP neutrality is a key factor in maintaining reasonable interconnection density.
Instead, IXPs _could_ enforce BCP38 too. Mapping the route-server's received routes to ingress _and_ egress ACLs on IXP ports would mitigate the role of BCP38 offenders within member ports. It's almost like uRPF in an intelligent and useable form.
Interesting. Are you doing this? Planning it? Or at least researching how well it would work?
A noticeable side-effect is that members would be encouraged to announce their entire customer-cones to ensure egress traffic from a non-exchanged prefix would not be dropped on the IX's port.
Don't they do this already? If you get something practical implemented on this topic, we'd be more than pleased to see it show up on bcp38.info; exchange points are the one major construct I hadn't included there, cause I didn't think it was actually practical to do it there. But then, I don't run one. Cheers, -- jra -- Jay R. Ashworth Baylink jra@baylink.com Designer The Things I Think RFC 2100 Ashworth & Associates http://www.bcp38.info 2000 Land Rover DII St Petersburg FL USA BCP38: Ask For It By Name! +1 727 647 1274
Le 28/02/2014 17:00, Jay Ashworth a écrit :
From: "Jérôme Nicolle" <jerome@ceriz.fr> Instead, IXPs _could_ enforce BCP38 too. Mapping the route-server's received routes to ingress _and_ egress ACLs on IXP ports would mitigate the role of BCP38 offenders within member ports. It's almost like uRPF in an intelligent and useable form.
Interesting. Are you doing this? Planning it? Or at least researching how well it would work?
Just seriously considering it on TOUIX. I'd propose it to Lyonix and France-IX too.
A noticeable side-effect is that members would be encouraged to announce their entire customer-cones to ensure egress traffic from a non-exchanged prefix would not be dropped on the IX's port.
Don't they do this already?
Not to my knowledge. Some members are only announcing regional prefixes on smaller IXs. They could, however, exchange traffic originating from any region of their networks. Best would be to differentiate announced prefixes from legitimately announceable prefixes, as registered in RIPEdb (as far as we're concerned). In a more global perspective, the extended best practice could be to set ACLs as we generate prefix-lists, route-maps and route-filters for BGP downlinks and PNIs too.
If you get something practical implemented on this topic, we'd be more than pleased to see it show up on bcp38.info; exchange points are the one major construct I hadn't included there, cause I didn't think it was actually practical to do it there. But then, I don't run one.
I think the idea is worth investigating, but I run a very small IXP and will most certainly be unable to fully investigate every potential side-effect on my own. I'll be reaching out to bigger ones in my next email. -- Jérôme Nicolle +33 6 19 31 27 14
On 28/02/2014 15:42, Jérôme Nicolle wrote:
Instead, IXPs _could_ enforce BCP38 too. Mapping the route-server's received routes to ingress _and_ egress ACLs on IXP ports would mitigate the role of BCP38 offenders within member ports. It's almost like uRPF in an intelligent and useable form.
this will break horribly as soon as you have an IXP member which provides transit to other multihomed networks. Nick
On Feb 28, 2014, at 11:52 , Nick Hilliard <nick@foobar.org> wrote:
On 28/02/2014 15:42, Jérôme Nicolle wrote:
Instead, IXPs _could_ enforce BCP38 too. Mapping the route-server's received routes to ingress _and_ egress ACLs on IXP ports would mitigate the role of BCP38 offenders within member ports. It's almost like uRPF in an intelligent and useable form.
this will break horribly as soon as you have an IXP member which provides transit to other multihomed networks.
Or to anyone who doesn't announce their full downstream table to the route servers. Or to people who don't use route servers. Or to someone who does traffic engineering. Or .... -- TTFN, patrick
Le 28/02/2014 17:52, Nick Hilliard a écrit :
this will break horribly as soon as you have an IXP member which provides transit to other multihomed networks.
It could break if filters are based on announced prefixes. That's precisely why uRPF is often useless. On the other hand, if a member provides transit, he will add its customer prefixes to RaDB / RIPEdb with appropriate route objects and the ACL will be updated accordingly. Shouldn't break there. -- Jérôme Nicolle +33 6 19 31 27 14
On the other hand, if a member provides transit, he will add its customer prefixes to RaDB / RIPEdb with appropriate route objects and the ACL will be updated accordingly. Shouldn't break there.
And that's a really nice side effect. However, in the case of transit providers the problem is that RaDB/RIPE lists what prefixes you are allowed to advertise, but that does not necessarily fully match what source IPs can leave your network. I mean, ISP-A can have a customer that uses a PA range of another ISP-B and only has a static route towards ISP-A for some TE purposes. I'm not well versed in RIPE myself, so I'm not sure whether there's a way to handle this situation. adam
On 02/03/2014 12:45, Vitkovský Adam wrote:
On the other hand, if a member provides transit, he will add its customer prefixes to RaDB / RIPEdb with appropriate route objects and the ACL will be updated accordingly. Shouldn't break there.
And that's a really nice side effect.
and it only works for leaf networks. The moment your IXP supports larger networks, it will break things horribly. It also assumes that:
- all your IXP members use route servers (not generally true)
- the IXP kit can filter layer 3 traffic on all supported port configurations (including .1q / LAGs) for both IPv4 and IPv6 for both native layer 2 and VPLS (not generally true)
- the IXP port ASICs can handle large L2 access lists (not generally true)
- there is an automatic mechanism in place to take RS prefixes and install them on edge L2 ports (troublesome to implement and maintain)
- there is a fail-safe mechanism to prevent this from causing breakage (difficult to implement)
- the IXP participants keep their IRRDB information fully up-to-date (not generally true)
- the IXP operators put in mechanisms to stop both route-leakages and incorrect IRRDB as-set additions from causing things to explode.
Last but not least:
- there is a mandate from the IXP community to get the IXP operators into the business of filtering layer 3 data (not generally the case)
There are many places where automated RPF makes a lot of sense. An IXP is not one of them. Nick
On Sun, Mar 2, 2014 at 4:00 AM, Nick Hilliard <nick@foobar.org> wrote:
There are many places where automated RPF makes a lot of sense. An IXP is not one of them.
That makes sense. Everyone is rightly resistant to automated filtering. But could we automate getting the word out instead? Can obvious BCP38 cluelessness be measured? Maybe as a ratio of advertised to unadvertised egressing ASes, etc.? If so, then if your downstream/peer is even *partially* BCP38, give them a pass. They are at least somewhat aware of the problem. Otherwise:
- Visually red-flag their BCP38 stats/percentage in RADb;
- Send them an automatic email once a week;
- Let upstreams *optionally* not automatically update their routes via RADb until they call to ask why; etc.
Can we combat the awareness problem in bulk -- without *filtering* in bulk? Royce
- the IXP participants keep their IRRDB information fully up-to-date
Geez, anything else but the fully up-to-date IRRDB, please. That just won't fly. That's why I said that an up-to-date IRRDB would have been a nice side effect of IXP filtering.
- the IXP operators put in mechanisms to stop both route-leakages and incorrect IRRDB as-set additions from causing things to explode.
Yes, this is a valid point.
I think the technicalities of how to implement this kind of filtering in the IXP equipment are out of scope for this discussion. Anyway, I believe IXPs should not be responsible for this mess and that BCP38 should be implemented by the participants of an IXP. As Saku Ytti mentioned several times, Tier 2 ISPs are in the best position for successful BCP38 filtering at their network boundaries. adam
On (2014-02-22 09:38 +0100), Carsten Bormann wrote:
Oh, the transport area people *are* in their high gear. Their frantic movements may just seem static to you as they operate on more drawn-out time scales. (The last transport protocol I worked on became standards-track 16 years after I started working on it.)
This seems to be a common problem when things mature. Established companies take years to produce a new product, and then they are so committed to the new product that no one dares to call it a failure, and they keep on supporting it. People tend to think that the future can be predicted with enough work; I don't think that is true. I suspect the amount of work put into a product has rapidly diminishing returns. It seems much more effective to release often, release early, and fail early. I think Joel Jaeggli once said 'maturing Internet infrastructure resists innovation'; I really like that quote. Is this a problem that should be remedied? Should we move faster? Could the transport area endorse and release a new pre-standard document for L4 every 4 months? 6 months? 12 months? Guaranteeing no particular compatibility between this and the next pre-standard release? If there were a newL4 directory in the Linux kernel and someone took the trouble to write it, it seems like the barrier to entry for someone else to later update it to reflect the latest pre-standard changes is pretty small. The initial work has a high barrier of entry. -- ++ytti
Filtering will always break something. Filtering 'abusive' network traffic is intentionally difficult - you either just let it be, or you filter it along with the 'good' network traffic that it's pretending to be. How can you even tell it's NTP traffic - maybe by the port numbers? What if someone is running OpenVPN on those ports? What about IP options? Maybe some servers return extra data? This is really not a network operator problem, it's an application problem if anything. While it makes sense to temporarily filter a large flood to keep the rest of your customers online, it's a very blunt instrument, as the affected customer is usually still taken offline - but I'm talking about specific targeted filters anyway. Doing blanket filtering based on packet sizes is sure to generate some really hard to debug failure cases that you didn't account for. Unfortunately, as long as Facebook loads, most of the users are happy, and so these kinds of practices will likely be implemented in many places, with some people opting to completely filter NTP or UDP. Maybe it will buy you a little peace and quiet today, but tomorrow it's just going to be happening on a different port/protocol that you can't inspect deeply, and you don't dare block. I can imagine 10 years from now, where we're writing code that fragments replies into 100 byte packets to get past this, and everyone loses. Your filter is circumvented, the application performs slower, and the 'bad guys' found another way that you can't filter. When all that's left is TCP port 443, that's what all the 'abuse' traffic will be using too. Laszlo On Feb 20, 2014, at 8:41 PM, Edward Roels <edwardroels@gmail.com> wrote:
Curious if anyone else thinks filtering out NTP packets above a certain packet size is a good or terrible idea.
From my brief testing it seems 90 bytes for IPv4 and 110 bytes for IPv6 are typical for a client to successfully synchronize to an NTP server.
If I query a server for it's list of peers (ntpq -np <ip>) I've seen packets as large as 522 bytes in a single packet in response to a 54 byte query. I'll admit I'm not 100% clear of the what is happening protocol-wise when I perform this query. I see there are multiple packets back forth between me and the server depending on the number of peers it has?
Would I be breaking something important if I started to filter NTP packets
200 bytes into my network?
On Feb 20, 2014, at 4:05 PM, Laszlo Hanyecz <laszlo@heliacal.net> wrote:
Filtering will always break something. Filtering 'abusive' network traffic is intentionally difficult - you either just let it be, or you filter it along with the 'good' network traffic that it's pretending to be. How can you even tell it's NTP traffic - maybe by the port numbers? What if someone is running OpenVPN on those ports? What about IP options? Maybe some servers return extra data?
This is really not a network operator problem, it's an application problem if anything. While it makes sense to temporarily filter a large flood to keep the rest of your customers online, it's a very blunt instrument, as the affected customer is usually still taken offline - but I'm talking about specific targeted filters anyway. Doing blanket filtering based on packet sizes is sure to generate some really hard to debug failure cases that you didn't account for.
Unfortunately, as long as Facebook loads, most of the users are happy, and so these kinds of practices will likely be implemented in many places, with some people opting to completely filter NTP or UDP. Maybe it will buy you a little peace and quiet today, but tomorrow it's just going to be happening on a different port/protocol that you can't inspect deeply, and you don't dare block. I can imagine 10 years from now, where we're writing code that fragments replies into 100 byte packets to get past this, and everyone loses. Your filter is circumvented, the application performs slower, and the 'bad guys' found another way that you can't filter. When all that's left is TCP port 443, that's what all the 'abuse' traffic will be using too.
Laszlo
While filtering NTP packets may be a work-around, for any network with firewall isolation from the general Internet it would make more sense to:
1. Establish an internal peer group of NTP server instances. As noted, a distributed group of four is the absolute minimum; six is more than sufficient.
2. Default restrict noquery on all internal NTP servers.
3. Use a common list of external NTP servers for all internal servers.
4. Provide that list of external NTP servers to the firewall engineer to add to a permit ACL (deny all others).
James R. Cutler - james.cutler@consultant.com PGP keys at http://pgp.mit.edu
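A minimal ntp.conf sketch for one of the internal servers in that recipe, assuming the reference ntpd; all hostnames are placeholders, and depending on ntpd version the configured peers may need their own, less strict restrict lines.

  # deny mode 6/7 queries and runtime configuration by default; serve time only
  restrict default kod nomodify notrap nopeer noquery
  restrict -6 default kod nomodify notrap nopeer noquery
  restrict 127.0.0.1
  restrict ::1
  # the common, firewall-permitted external sources
  server 0.pool.ntp.org iburst
  server 1.pool.ntp.org iburst
  # the internal peer group
  peer ntp2.internal.example.net
  peer ntp3.internal.example.net
  peer ntp4.internal.example.net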
On 2/20/14, 3:41 PM, "Edward Roels" <edwardroels@gmail.com> wrote:
Curious if anyone else thinks filtering out NTP packets above a certain packet size is a good or terrible idea.
From my brief testing it seems 90 bytes for IPv4 and 110 bytes for IPv6 are typical for a client to successfully synchronize to an NTP server.
If I query a server for it's list of peers (ntpq -np <ip>) I've seen packets as large as 522 bytes in a single packet in response to a 54 byte query. I'll admit I'm not 100% clear of the what is happening protocol-wise when I perform this query. I see there are multiple packets back forth between me and the server depending on the number of peers it has?
Would I be breaking something important if I started to filter NTP packets
200 bytes into my network?
We are filtering a range of packet sizes for UDP/123 at the edge and it has definitely helped thwart some of the NTP attacks. I hate to do blanket ACLs blocking traffic but multi-Gbps of attack traffic (not counting the reflected traffic) is hard to ignore and it's worth the risk of blocking a minute amount of legitimate traffic. Phil
On Feb 21, 2014, at 3:41 AM, Edward Roels <edwardroels@gmail.com> wrote:
From my brief testing it seems 90 bytes for IPv4 and 110 bytes for IPv6 are typical for a client to successfully synchronize to an NTP server.
Correct. 90 bytes = 76 bytes + Ethernet framing. Filtering out packets this size from UDP/anything to UDP/123 allows time-sync requests and responses to work, but squelches both the level-6/-7 commands used to trigger amplification as well as amplified attack traffic. Operators are using this size-based filtering to effect without breaking the world. Be sure to pilot this first, and understand whether packet-size classification on your hardware of choice includes framing or not. Also, note that this filtering should be utilized to mitigate attacks, not as a permanent policy. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Luck is the residue of opportunity and design. -- John Milton
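On a Linux-based filtering box, a rough equivalent of that size-based approach might be the following. This is a sketch, not anyone's posted config: 76 bytes assumes IPv4 with no IP options and no NTP authentication, so, as noted above, pilot it first and treat it as an attack-time measure rather than permanent policy.

  # drop NTP packets whose IP length is not the 76 bytes of a plain mode 3/4 exchange
  iptables -A FORWARD -p udp --dport 123 -m length ! --length 76 -j DROP
  iptables -A FORWARD -p udp --sport 123 -m length ! --length 76 -j DROP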
On Feb 21, 2014, at 9:55 AM, Dobbins, Roland <rdobbins@arbor.net> wrote:
Filtering out packets this size from UDP/anything to UDP/123 allows time-sync requests and responses to work, but squelches both the level-6/-7 commands used to trigger amplification as well as amplified attack traffic.
Also, the reverse - UDP/123 - UDP/anything, for the amplified attack traffic. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Luck is the residue of opportunity and design. -- John Milton
On Feb 21, 2014, at 9:55 AM, Dobbins, Roland <rdobbins@arbor.net> wrote:
Filtering out packets this size from UDP/anything to UDP/123 allows time-sync requests and responses to work, but squelches both the level-6/-7 commands used to trigger amplification as well as amplified attack traffic.
That should read, filtering out packets **** NOT **** that size. Lack of sleep, apologies. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Luck is the residue of opportunity and design. -- John Milton
On Feb 21, 2014, at 11:40 AM, Harlan Stenn <stenn@ntp.org> wrote:
As a reality check, with this filtering in place does "ntptrace" still work?
No, it will not. In order to minimize overblocking, filtering of this nature should be used with the highest possible degree of granularity, and the minimal necessary scope. One way to accomplish this is to divert traffic towards destinations in question into a mitigation center/sinkhole, applying this filtering on the coreward interfaces of the mitigation center/sinkhole gateway (some re-injection mechanism such as GRE, VRF, selective filtering of the diversion route announcements coupled w/PBR, etc. must be used to re-inject non-matching traffic towards the destinations in question) or via other mitigation mechanisms. In emergencies, the concept of partial service recovery may dictate temporary filtering of coarser granularity in order to preserve overall network availability; we've run into situations in the past week-and-a-half where networks were experiencing severe strain due to the sheer volume of ntp reflection/amplification attack traffic, and it was necessary to start out with more general filtering, then work towards more specific filtering once the network was stabilized. But you raise a very important point which should be re-emphasized - general filtering of traffic is to be avoided whenever possible in order to avoid breaking applications/services. However, the converse notion that emergency situations sometimes entail necessary restrictions should also be taken into account. Operators should use their best judgement as to the scope of any filtering, and should always pilot any proposed mitigation methodologies prior to wider deployment. ----------------------------------------------------------------------- Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Luck is the residue of opportunity and design. -- John Milton
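One way to pilot it, per the advice above, is to check from outside the filter that plain time sync still works while mode-6 queries no longer do; 192.0.2.10 below stands in for one of the protected NTP servers.

  ntpdate -q 192.0.2.10    # plain mode 3/4 client exchange: should still answer
  ntpq -np 192.0.2.10      # mode 6 "peers" query: should now time out
  ntptrace 192.0.2.10      # also mode 6, so expect this to break as well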
participants (42)
- Blake Hudson
- Brandon Galbraith
- Carsten Bormann
- Cb B
- Chris Laffin
- Christopher Morrow
- Damian Menscher
- Dobbins, Roland
- Edward Roels
- Frank Habicht
- George William Herbert
- Harlan Stenn
- James Braunegg
- James R Cutler
- Jared Mauch
- Jay Ashworth
- Jimmy Hess
- joel jaeggli
- John Weekes
- Jérôme Nicolle
- Keegan Holley
- Laszlo Hanyecz
- Lukasz Bromirski
- Mikael Abrahamsson
- Nick Hilliard
- Niels Bakker
- Patrick W. Gilmore
- Paul Ferguson
- Peter Phaal
- Phil Bedard
- Randy Bush
- Ray Soucy
- Robert Drake
- Royce Williams
- Saku Ytti
- Seth Mattinen
- sjt5atra
- Staudinger, Malcolm
- sthaug@nethelp.no
- TGLASSEY
- Valdis.Kletnieks@vt.edu
- Vitkovský Adam