On 3/18/2020 4:46 PM, Damian Menscher via NANOG wrote:
> On Wed, Mar 18, 2020 at 8:45 AM Steven Sommars
> <stevesommarsntp@gmail.com> wrote:
>
> The various NTP filters (rate limits, packet size limits) are
> negatively affecting the NTP Pool, the new secure NTP protocol
> (Network Time Security) and other clients. NTP filters were
> deployed several years ago to solve serious DDoS issues, I'm not
> second guessing those decisions. Changing the filters to instead
> block NTP mode 7, which covers monlist and other diagnostics, would
> improve NTP usability.
>
> http://www.leapsecond.com/ntp/NTP_Suitability_PTTI2020_Revised_Sommars.pdf
>
>
> I've advocated a throttle (not a hard block) on udp/123 packets of 468
> bytes or more (the size of a full monlist response). In your paper you
> mention NTS extensions can be 200+ bytes. How large do those packets
> typically get, in practice? And how significant is packet loss for them
> (if there's high packet loss during the occasional attack, does that
> pose a problem)?
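For concreteness, the size-triggered throttle described above can be modeled as a token bucket that applies only to large udp/123 packets. This is purely an illustrative sketch: the class name, rate, and burst values are assumptions, not anyone's production configuration; only the 468-byte threshold comes from the thread.

```python
class LargePacketThrottle:
    """Token-bucket throttle applied only to udp/123 packets at or above
    a size threshold; smaller packets always pass.  Illustrative model,
    not a real filter implementation."""

    def __init__(self, threshold=468, rate=100.0, burst=200.0, start=0.0):
        self.threshold = threshold  # bytes; size of a full monlist response
        self.rate = rate            # large packets per second refilled
        self.burst = burst          # bucket capacity in packets
        self.tokens = burst
        self.last = start           # timestamp of the last refill

    def allow(self, size, now):
        """Return True if a packet of `size` bytes arriving at time `now`
        (seconds) should be forwarded, False if it should be dropped."""
        if size < self.threshold:
            return True             # small packets are never throttled
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                # drop only when the bucket is empty
```

Under normal load the bucket never empties, so large packets see 0% loss; during a flood of monlist-sized packets the bucket drains and the excess is dropped, while ordinary small NTP traffic is untouched.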
In some cases I expect to see NTP UDP packets that approach the MTU
limit.
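For a rough sense of why NTS traffic gets large, here is a back-of-envelope sum of the extension fields an NTS-protected NTP request carries under RFC 8915. Every size below except the 48-byte NTP header is an illustrative assumption: cookie sizes in particular vary by server, and the authenticator size depends on the AEAD algorithm negotiated.

```python
# Back-of-envelope size of an NTS-protected NTP request (RFC 8915).
# All extension-field sizes are assumed values for illustration only.

NTP_HEADER = 48          # fixed NTP packet header
UNIQUE_ID_EF = 4 + 32    # EF header + assumed 32-byte nonce
COOKIE_LEN = 100         # assumed server cookie size (varies in practice)
COOKIE_EF = 4 + COOKIE_LEN
PLACEHOLDERS = 7         # client may request up to 7 additional cookies
AUTHENTICATOR_EF = 40    # assumed AEAD authenticator extension size

request = (NTP_HEADER + UNIQUE_ID_EF + COOKIE_EF
           + PLACEHOLDERS * COOKIE_EF + AUTHENTICATOR_EF)
print(request)  # 956 bytes under these assumptions
```

Even with conservative guesses, a request padded with cookie placeholders lands well above a 468-byte cutoff, which is why a size-based filter catches legitimate NTS traffic.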
If a packet is "too big" for some path, are we talking about fractional
packet loss or 100% packet loss (dropped mid-path because of its size)?
I implement it as a throttle... but that effectively means "0% packet loss most of the time, and significant packet loss when there's a large attack". Doing it as a complete block (which Sommars showed some networks do) is unnecessary.
So my question is whether occasional bursts of hard blocking are a problem. If you only need large packets during the initialization phase, and the ongoing communication is done with smaller packets, then my approach may be acceptable, and we just need to convince other network operators to throttle rather than hard-block the large packets. But if you need large packets all the time, and occasional breakage of large packets causes problems, then I'll need to re-think my approach.
Damian