
1.1.1.1 was mis-announced by tata. see https://www.thousandeyes.com/blog/cloudflare-outage-analysis-july-14-2025 i would be interested in analyses of the blast radius, i.e. to/through what extents of the topology it propagated before it hit rov. of course, that question naïvely assumes there is a roa. randy

On 15.07.2025 at 12:12:28, Randy Bush via NANOG wrote:
1.1.1.1 was mis-announced by tata. see
Didn't RPKI and IR avoid any damage? If not, are there still relevant AS border routers that just accept anything?

--
Regards, Marco
Send unsolicited bulk mail to 1752574348muell@cartoonies.org

If RIPE's reporting is accurate, Tata has 4,976 IP route entries, of which only 2,554 are RPKI-valid. [1] Most likely, in order to continue serving those who are unwilling or unable to deploy RPKI, they work with their upstreams to exempt their announcements from being filtered due to invalid or missing RPKI. Unfortunately, as long as companies continue to make exceptions as to who is exempt from RPKI route filtering, the risk of someone announcing a bad route will persist.

It would be awesome if every single IP resource were covered under RPKI, but according to Cloudflare Radar, worldwide we're only halfway there at 56.8% valid and 42.1% unknown/missing. [2] Fortunately we will never have another AS7007-like incident [3], but as yesterday proved, these events can still be quite impactful!

[1] https://stat.ripe.net/resource/AS4755#tab=routing
[2] https://radar.cloudflare.com/routing
[3] https://en.wikipedia.org/wiki/AS_7007_incident
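(For anyone who wants to check a specific announcement against the published ROAs, here is a minimal sketch using RIPEstat's public "rpki-validation" data call. The endpoint, parameters and response fields are my assumptions about that API, not something stated in this thread, so verify against the RIPEstat documentation before relying on it.)

import requests

RIPESTAT_RPKI = "https://stat.ripe.net/data/rpki-validation/data.json"

def rov_state(origin_asn, prefix):
    # Returns the RPKI validation state for an origin/prefix pair,
    # e.g. "valid", "invalid" or "unknown" (field name is an assumption).
    resp = requests.get(RIPESTAT_RPKI,
                        params={"resource": origin_asn, "prefix": prefix},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["status"]

# 1.1.1.0/24 carries a ROA for AS13335, so any other origin should show as invalid.
print(rov_state("AS4755", "1.1.1.0/24"))
print(rov_state("AS13335", "1.1.1.0/24"))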

There's also ranges like 44net (ampr.org) where RPKI is infeasible, but that is indeed a hobbyist setup in the extreme. They do provide ROA though, so they're not entirely head in the sand, but it's only via RADB, not via RIR.

Current word on the street, however, is that this was not a Tata hijack, but a leak after the fact when Cloudflare went offline. Likely some ancient test configurations or other similar example material/default setups from 20 years ago. Even so, just ROA enforcement would protect against this.

On 15/07/2025 21:18, Marco Moock via NANOG wrote:
Didn't RPKI and IR avoid any damage?
Yes - these route leaks didn't actually propagate very far. The only reason they even appeared is because the actual route announced by CF disappeared. All 1.1.1.1 related prefixes (v6 included) were withdrawn around the same time. RIPE's BGPlay tool [0] shows the massive withdrawal spike quite nicely.

[0] https://stat.ripe.net/bgplay
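(If you want the same picture without the BGPlay UI, a rough sketch along these lines should work against RIPEstat's "bgp-updates" data call; the endpoint, parameters, time window and field names here are assumptions about that public API rather than anything from this thread.)

# Count announcements vs. withdrawals for 1.1.1.0/24 around the incident window,
# as seen by RIS. Assumes data.updates is a list of {"type": "A"|"W", ...} records.
import collections
import requests

resp = requests.get(
    "https://stat.ripe.net/data/bgp-updates/data.json",
    params={
        "resource": "1.1.1.0/24",
        "starttime": "2025-07-14T21:00:00",
        "endtime": "2025-07-14T23:00:00",
    },
    timeout=30,
)
resp.raise_for_status()
updates = resp.json()["data"]["updates"]
print(collections.Counter(u["type"] for u in updates))  # e.g. Counter({'W': ..., 'A': ...})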

Now that everyone has gotten the RPKI rage out of their system, Cloudflare is taking responsibility for this event. Explicitly stated it wasn't a hijack, but their own mistake.

https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/

Any amount of redundancy can be fixed by automation.
-- ++ytti

On Wed, 16 Jul 2025 18:24:55 +0300, Saku Ytti via NANOG wrote:
Any amount of redundancy can be fixed by automation.
:-)

This raises my question: are public DNS like 1.1.1.1 or Google's 8.8.8.8 actually a good thing?

I'm not talking about customers of the particular cloud services - you would expect a well-run DNS system as part of the service offer. But for anyone else?

As Saku (implicitly) stated: these services are likely managed all in the same manner with automation/scripts. I assume the underlying software is the same too on the distributed servers behind one particular anycast address (I'm not saying Google and CF use the same software). So how redundant is the DNS system then in reality?

On the other hand, having some well-funded/well-staffed organizations dealing with all the problems of security, attacks and other "nonsense" is a benefit of using public DNS.

Personally I tend to run "unbound" for recursive resolving and close it against outside use. But I may miss an important point - any reasoning that points to the one or the other solution as being better? (My setups/domains are for private use only these days, nothing big, nothing important, so what do I know ... but I'm happy to learn & improve.)

Best regards, Marc

Public DNS is just another basic service, no different than commercially-sold DNS services such as Amazon Route 53 or Cisco Umbrella. Yes, commercial DNS usually offers additional security and monitoring, which is really why you buy it, but it fails just as often as public DNS, although probably less often than the average internal resolver. Other than the right to yell at somebody, I haven't seen any qualitative difference in DNS services of all stripes.

-mel

On Thu, 17 Jul 2025 14:46:18 +0000 Mel Beckman via NANOG wrote:
Public DNS is just another basic service
From a user's perspective, maybe, yes. But not so much from the operator's side.

These days, there are an awful lot of knobs to turn when it comes to DNS resolving or running a DNS resolver service. And more and more ISPs no longer seem to master the craftsmanship required to do it properly.

See for example:
https://www.ripe.net/publications/docs/ripe-823/
https://kindns.org/
(The full list of possible features has expanded since.)

--
Marco 🐾

On Thu, Jul 17, 2025 at 11:40 AM Marc Binderberger via NANOG <nanog@lists.nanog.org> wrote:
On Wed, 16 Jul 2025 18:24:55 +0300, Saku Ytti via NANOG wrote:
Any amount of redundancy can be fixed by automation.
:-)
This raises my question: are public DNS like 1.1.1.1 or Google's 8.8.8.8 actually a good thing?
According to BCP-140, no, not a good thing.

Rubens

This raises my question: are public DNS like 1.1.1.1 or Google's 8.8.8.8 actually a good thing?
rubensk> According to BCP-140, no, not a good thing.

That BCP is from 2015...

Running a safe and robust recursive service for large numbers of users or a business is not trivial. The reality is that most SMB don't have anyone with the expertise to do this well. For those folks, or folks that don't like/trust their ISP at home, using the quad-X (1.1.1.1, 8.8.8.8, 9.9.9.9) is a much better and safer experience than trying to run their own.

Yes, there are some performance and privacy tradeoffs. But the folks running the quad-X are far more likely to be current on DNS trends, not using 2015 habits in a 2025 world.

On Thu, Jul 17, 2025 at 1:18 PM Paul Ebersman via NANOG <nanog@lists.nanog.org> wrote:
This raises my question: are public DNS like 1.1.1.1 or Google's 8.8.8.8 actually a good thing?
rubensk> According to BCP-140, no, not a good thing.
That BCP is from 2015...
RFC 1035 is still what defines DNS, hasn't been obsoleted and is from 1987. Perhaps age is not the main factor in defining obsolescence ? Rubens

rubensk> According to BCP-140, no, not a good thing.
ebersman> That BCP is from 2015...
rubensk> RFC 1035 is still what defines DNS, hasn't been obsoleted and
rubensk> is from 1987. Perhaps age is not the main factor in defining
rubensk> obsolescence ?

Blind adherence to 10+ year old BCPs rather than current experience and competence is a bad idea. And 1035 has had literally hundreds of more current RFCs to clarify...

Also, when someone asks for a serious "is this still a good idea" and someone just goes "BCP blah says", you're not adding to the discussion.

RFC 1035 is still what defines DNS, hasn't been obsoleted and is from 1987. Perhaps age is not the main factor in defining obsolescence ?
With RFCs, no. With BCP, the middle letter is generally relevant to the discussion.

On Thu, 17 Jul 2025 15:03:01 -0400, Tom Beecher via NANOG wrote:
With RFCs, no. With BCP, the middle letter is generally relevant to the discussion.
Are we talking about BCP-140, aka RFC5358 ("Preventing Use of Recursive Nameservers in Reflector Attacks")? Well, it's both, a BCP and RFC - which statement above wins? ... ;-)

Joking aside, I don't see why this BCP would not be relevant today. If you run an open recursive DNS in the Internet, this still seems to me a valid document to consider. But "to consider" does not mean "it's the law". Everyone who is willfully running into these known problems (by setting up a public DNS, I mean) simply has to assign the necessary resources to handle the problems. And I assume Google, CF & Co do this.

In any case, my original question was not with BCP-140 in mind (but thanks to Rubens pointing it out!). I was wondering why one should or should not use these DNS servers. Thanks for all the comments, I am always surprised how complex even "basic" things like DNS turn out to be.

And yes, I was wondering if the redundancy - or centralization - of the Internet is something to consider. My personal read on all the comments is that the N.N.N.N public servers are good backup forwarder solutions but for the sake of a de-centralized, robust Internet one should implement a better "Plan A". And don't forget BCP-140 when you implement the plan ;-)

Regards, Marc

A situation I've seen often with SMBs is when they have two or more ISPs using WAN failover or load-balancing mechanisms built into their firewall. This requires either running your own local caching resolver that queries the root name servers, paying for a third-party DNS service, or somehow ensuring DNS requests get routed to the appropriate ISP's name servers, because "crossing the streams" will fail every time.

Or one can just use a public DNS server, the minimal-effort "free" solution.

As we all know, public DNS isn't really free. You're giving up your DNS eyeball information in exchange, which the public DNS operator happily sells to the highest bidder. And then there is the NXDOMAIN concession, in which you tacitly agree to accept ads in place of name-not-found responses.

As long as there is a "free" solution that doesn't cause the implementer pain (ignoring user impacts), it will be popular :)

-mel

As long as there is a “free” solution that doesn’t cause the implementer pain (ignoring user impacts) , it will be popular :)
Even when it does cause more user impact, it sometimes still works out that way.

Say a company has two options for a given service:
1. Run it themselves. Cost $100K a year, 5 9's uptime. (5 down mins / year)
2. Use As A Service. Cost $50K a year, 4 9's uptime. (53 down mins / year)

Say that every minute this company is down, they lose $1K in revenue. That means:
Option 1 total cost: $105K (run + 5 mins lost rev)
Option 2 total cost: $103K (run + 53 mins lost rev)

The technical person is going to say 'these costs are basically the same, let's take the higher uptime of option 1'. The MBA is going to say 'I can reduce our costs by 1.9% with option 2.'

And we know who wins in most places.
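(A trivial check of the arithmetic above, using the hypothetical figures from the example; the numbers are illustrative only.)

# Hypothetical figures from the example: costs in $K, downtime in minutes,
# $1K of lost revenue per minute of downtime.
def total_cost_k(run_cost_k, downtime_min, loss_per_min_k=1):
    return run_cost_k + downtime_min * loss_per_min_k

diy = total_cost_k(100, 5)    # run it yourself: 100 + 5  = 105 ($105K)
saas = total_cost_k(50, 53)   # as-a-service:    50 + 53  = 103 ($103K)
print(diy, saas, f"{(diy - saas) / diy:.1%}")  # -> 105 103 1.9%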

You also have to account for users' downtime and their inability to work. That depends on the number of users, what kind of work they do, etc.

For my personal use at home, my setup looks like this: https://postimg.cc/vgf2r1GM
I would just ramp up the hardware for a big org.

Regarding Cloudflare and Google and any other providers: if the product is free, you are the product.

are we talking about BCP-140, aka RFC5358 ("Preventing Use of Recursive Nameservers in Reflector Attacks") ?
Well, it's both, a BCP and RFC - which statement above wins? ... ;-)
Joking aside, I don't see why this BCP would not be relevant today. If you run an open recursive DNS in the Internet, this still seems to me a valid document to consider.
My stance on these (which I think mirrors the IETF's definitions) is that BCPs are a subset of RFCs, and are guidance, not standards. Over time, the 'current' part of that guidance is what stays relevant to consider.

With respect to BCP-140, sure, it's still generally applicable today. I didn't mean to imply it wasn't in my response. I was more generally commenting on the age of document bits, and that wasn't clear, so sorry about the confusion there.
And yes, I was wondering if the redundancy - or centralization - of the Internet is something to consider. My personal read on all the comments is that the N.N.N.N public servers are good backup forwarder solutions but for the sake of a de-centralized, robust Internet one should implement a better "Plan A". And don't forget BCP-140 when you implement the plan ;-)
Everyone should be considering the impacts of centralization (not just with DNS), but a lot of people just don't. This recent CF event is a perfect example. These companies generally are very good at what they're doing, but the occasional catastrophic mistake happens. If someone wants to put all their eggs in one basket, they get to enjoy the pain when it bites them.

On Fri, Jul 18, 2025 at 9:02 AM Marc Binderberger via NANOG <nanog@lists.nanog.org> wrote:
are we talking about BCP-140, aka RFC5358 ("Preventing Use of Recursive Nameservers in Reflector Attacks") ? Well, it's both, a BCP and RFC - which statement above wins? ... ;-)
Some RFCs are proposed standards, and some are just publications of Informational documents and merely suggested Best Current Practices to aid in implementation of the standards. The language at the header of the BCPs always says they request discussion and suggestions for improvements. Their publication doesn't mean the recommendations are perfect.
Joking aside, I don't see why this BCP would not be relevant today. If you run an open recursive DNS in the Internet, this still seems to me a valid
BCP-140 is perfectly valid, and a good implementor should still take the issues it discusses into consideration before developing DNS software and when planning the implementation of DNS services. The recommendations don't necessarily apply to every implementation; the SHOULDs might be infeasible, not the best option, or just one option among several. It is not necessarily the case that you want to actually follow every recommendation. You can consider the scenarios and problems discussed in the BCP, make some determinations about whether they apply and what risks your implementation poses, then design and build other solutions.

For example, you can likely detect a possible attempted amplification attack by maintaining per-IP-address byte rate counters for each remote querier and the response sizes sent from your recursor network. Force TCP-only to an IP, or cap UDP response sizes and throttle UDP packets for a period of time after a certain query volume has been exceeded. You might also drop the first DNS request packet and look for possible query retry packets to fingerprint.

Another type of alternate solution, for a public DNS provider whose service is inherently external: you can begin to deprecate direct UDP DNS access in general in favor of DoH/DoT. You can also, through anycast publication of your numerous public DNS instances, identify which of your DNS sites a given query source IP address should actually reach, based on internet topology and historic legitimate query data (assuming the source IP address is not spoofed for amplification purposes), and implement a policy where DNS queries are ignored and discarded if received by the incorrect site - rather than the correct regional anycast site peered with that source's provider - until you see a TCP exchange proving a legitimate query from that source.

--
-JA
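(A rough sketch of the per-source byte-rate idea above; purely illustrative, with made-up thresholds, and assuming the enforcement action is to truncate UDP answers so that legitimate resolvers retry over TCP while spoofed victims stop receiving large payloads.)

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_UDP_RESPONSE_BYTES = 512_000  # made-up per-window budget per source IP

class SourceTracker:
    # Tracks response bytes sent to each client IP over a sliding window.
    def __init__(self):
        self.history = defaultdict(deque)  # ip -> deque of (timestamp, bytes)

    def record(self, ip, response_bytes):
        now = time.monotonic()
        q = self.history[ip]
        q.append((now, response_bytes))
        while q and now - q[0][0] > WINDOW_SECONDS:
            q.popleft()

    def should_force_tcp(self, ip):
        # Over budget: answer over UDP with only the TC bit set, so real
        # resolvers retry over TCP and reflection victims stop getting payloads.
        return sum(size for _, size in self.history[ip]) > MAX_UDP_RESPONSE_BYTES

tracker = SourceTracker()
tracker.record("192.0.2.1", 4_000)
print(tracker.should_force_tcp("192.0.2.1"))  # False until the budget is exceeded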

marc> Joking aside, I don't see why this BCP would not be relevant
marc> today. If you run an open recursive DNS in the Internet, this
marc> still seems to me a valid document to consider.

Read more carefully. "Open resolver" in that context is one that anyone can use however they want, with no limits, restrictions, or monitoring of the activity. The quad-X servers are all very carefully monitored, abusers of it are found and blocked. Not the same at all.

On Thu, Jul 17, 2025 at 6:18 PM Paul Ebersman via NANOG < nanog@lists.nanog.org> wrote:
Running a safe and robust recursive service for large numbers of users or a business is not trivial. The reality is that most SMB don't have anyone with the expertise to do this well. For those folks, or folks that don't like/trust their ISP at home, using the quad-X (1.1.1.1, 8.8.8.8, 9.9.9.9) is a much better and safer experience than trying to run their own.
By 2025, through decades of hard work and dedication, we reached the point where:

* running your own email is too hard because of more and more rules and arbitrary restrictions from the big providers - better outsource it to Gmail, or else you risk not being able to deliver your customers' mail;
* running your own web servers without a CDN in front of them is really not wise, because only the big providers can defend against DDoS attacks, and if your business depends on availability you have no choice but to comply - otherwise you're out in the "toxic wasteland", as Geoff put it;
* running your own DNS is too hard - see above, better outsource it to one of the few key players.

I'd like to believe this is reversible, but I fear in reality we're heading further down the path of centralisation.

Robert

On Thu, Jul 17, 2025 at 9:40 AM Marc Binderberger via NANOG <nanog@lists.nanog.org> wrote:
This raises my question: are public DNS like 1.1.1.1 or Google's 8.8.8.8 actually a good thing?
Overall I would say the services' existence is mostly a good thing, and you could mitigate most redundancy issues on the client by setting a different public DNS provider as a secondary or tertiary resolver. But there are definitely some disadvantages, and outages are not the only risk created by global centralization in one provider.

For example: by centralizing in a few public rDNS providers, you are creating a single entity who can easily be served by governmental entities or large conglomerates with blanket censorship or blocking orders over sites hosting content related to sensitive social issues or legal disputes, plus subpoenas or warrants exposing user data.

By running your own recursive resolver, you are guaranteeing that the interests of the person hosting your resolver servers are aligned with your own, and they aren't going to block your access to resources some company doesn't want you to see.
Personally I tend to run "unbound" for recursive resolving and close it against outside use. But I may miss an important point - any reasoning that
In its simplest config: you lose out on a privacy benefit by running your own recursive nameserver.

When using 1.1.1.1 with your browser: requests and responses can be exchanged using DNS over HTTPS; which means that a passive eavesdropper, such as your own Internet service provider with their DNS monetization program cannot capture and log your queries for resale to data brokers. You are reducing the number of parties you have to entrust with the privacy of DNS queries you make and their answers.

However, authoritative nameservers have no equivalent encrypted transport, so you cannot obtain that privacy when you are running your own recursive resolution. You may perform DNSSEC validation, but TCP or UDP port 53 DNS traffic is still unencrypted, and authoritative nameservers rarely or never offer an encrypted transport to secure your recursive resolver against passive spying.
points to the one or the other solution as being better? (my setups/domains are for private use only these days, nothing big, nothing
I think the best solution may be to have your own DNSSEC-validating resolver, but operate it in a query-forwarding mode towards multiple different DNS resolver providers for redundancy, using DoH (DNS over HTTPS) or DNS over TLS.

--
-JA
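(A minimal sketch of the multi-provider forwarding idea, using the providers' JSON DoH endpoints from a script rather than a real forwarding resolver; it does not cover the local DNSSEC-validation part. The exact endpoint parameters are my assumptions, so check each provider's documentation.)

import requests

DOH_ENDPOINTS = [
    # (URL, extra headers); Cloudflare's JSON API wants an explicit accept header.
    ("https://cloudflare-dns.com/dns-query", {"accept": "application/dns-json"}),
    ("https://dns.google/resolve", {}),
]

def resolve(name, rtype="A"):
    last_err = None
    for url, headers in DOH_ENDPOINTS:
        try:
            r = requests.get(url, params={"name": name, "type": rtype},
                             headers=headers, timeout=5)
            r.raise_for_status()
            answers = r.json().get("Answer", [])
            return [a["data"] for a in answers]
        except (requests.RequestException, ValueError, KeyError) as err:
            last_err = err  # fall through and try the next provider
    raise RuntimeError(f"all DoH providers failed: {last_err}")

print(resolve("one.one.one.one"))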

On 7/17/2025 4:58 PM, Jay Acuna via NANOG wrote:
When using 1.1.1.1 with your browser: requests and responses can be exchanged using DNS over HTTPS; which means that a passive eavesdropper, such as your own Internet service provider with their DNS monetization program cannot capture and log your queries for resale to data brokers. You are reducing the number of parties you have to entrust with the privacy of DNS queries you make and their answers.
This is just like the HTTPS-everywhere nonsense for websites. It's just making the surveillance data that Cloudflare collects more valuable because only they can collect it and not the ISPs along the way, due to this encryption.

Do you guys remember when we had SSL accelerator cards in servers? Now we waste that kind of energy on every web request to lie to users and tell them that it's end to end encrypted (is Cloudflare's spy proxy the end?).

The public DNS services are clearly not good for privacy, and neither is pretending to encrypt website traffic, giving users a false sense of security while all of their sensitive information is visible in plain text at CF. They are literally doing a MITM attack and they can even generate certs that don't warn in browsers, showing how worthless that system is for users (but great for those selling certs). Do you trust those people with all your DNS queries and browsing history? At least you still have the choice to not use their resolver, but no way to opt out of the HTTPS-breaking proxy services (and CAPTCHAs) if the website operator implemented it. It's not a good situation for freedom and privacy, and the DNS resolvers are just the tip of the iceberg here.

On Thu, 17 Jul 2025 at 12:37, Laszlo H via NANOG <nanog@lists.nanog.org> wrote:
On 7/17/2025 4:58 PM, Jay Acuna via NANOG wrote:
When using 1.1.1.1 with your browser: requests and responses can be exchanged using DNS over HTTPS; which means that a passive eavesdropper, such as your own Internet service provider with their DNS monetization program cannot capture and log your queries for resale to data brokers. You are reducing the number of parties you have to entrust with the privacy of DNS queries you make and their answers.
This is just like the HTTPS-everywhere nonsense for websites. It's just making the surveillance data that Cloudflare collects more valuable because only they can collect it and not the ISPs along the way, due to this encryption. Do you guys remember when we had SSL accelerator cards in servers? Now we waste that kind of energy on every web request to lie to users and tell them that it's end to end encrypted (is Cloudflare's spy proxy the end?).
I completely agree, and the worst part is that it also:

1. prohibits older devices from still being useful for reading purposes and general information access; for example, with TLSv1.0 you can still use Google Search on an older device, and shop on Amazon, but Wikipedia will not let you access the "free" information, because reasons™;
2. prohibits proxying and caching of public resources that don't even change all that frequently.

Both of these widen the digital divide, since it's those less fortunate who would be most affected. But, of course, blocking HTTP access and deprecating TLSv1.0 on Wikipedia are done with the best of intentions, as always!

For people who run their own home or corporate networks, the prevalence of HTTPS also limits their ability to detect threats, do security research, and ensure no funny traffic is exchanged. Ad-blocking on a network level at home would also be more effective without HTTPS being in the way. But, of course, the HTTPS proponents describe all of these "bugs" as "features", never mind the extra impact of having to run ad blockers on every device, wasting more resources and shortening the planned-obsolescence cycles, plus the ever-changing API of the browsers that makes it more and more difficult to effectively block all of these resource hogs that hide within HTTPS.
The public DNS services are clearly not good for privacy, and neither is pretending to encrypt website traffic, giving users a false sense of security while all of their sensitive information is visible in plain text at CF. They are literally doing a MITM attack and they can even generate certs that don't warn in browsers, showing how worthless that system is for users (but great for those selling certs). Do you trust those people with all your DNS queries and browsing history? At least you still have the choice to not use their resolver, but no way to opt out of the HTTPS-breaking proxy services (and CAPTCHAs) if the website operator implemented it. It's not a good situation for freedom and privacy, and the DNS resolvers are just the tip of the iceberg here.
I'm interested in fighting back.

One way to fight back is ensuring your non-commercial websites do NOT support HTTPS. If somehow you do support HTTPS, ensure you do NOT support HSTS, and also do NOT redirect from HTTP to HTTPS.

Another way to fight back may be to implement DNS delays specifically for Cloudflare's 1.1.1.1, since Cloudflare is well known for wasting our time as users with the mandatory ad-viewing of their captcha pages on so many different web properties. Does anyone know of any dual-horizon "delay" patches for NSD to target Cloudflare's resolver?

The person running archive.today used to expressly limit 1.1.1.1's access to their DNS in its entirety because of these known issues with Cloudflare:
* https://news.ycombinator.com/item?id=21155056

Cheers,
Constantine.

When using 1.1.1.1 with your browser: requests and responses can be exchanged using DNS over HTTPS; which means that a passive eavesdropper, such as your own Internet service provider with their DNS monetization program cannot capture and log your queries for resale to data brokers. You are reducing the number of parties you have to entrust with the privacy of DNS queries you make and their answers.
When using DoH, your ISP can't see your DNS requests, but they can absolutely still see the IP of the thing you try to connect to right after making that DNS request, not to mention probably exposed in the TLS SNI, so it's not like you're gaining that much privacy anyways.

On Thu, Jul 17, 2025 at 2:05 PM Tom Beecher <beecher@beecher.cc> wrote:
not to mention probably exposed in the TLS SNI, so it's not like you're gaining that much privacy anyways.

Older versions of TLS have this weakness, yes.
That does not mean you stop trying to mitigate other points where your data may be leaking, especially in regards to DNS packets, which are easily analyzed because the protocol is so simple. Because there is such a small number of DNS packets traversing a network, they become low-hanging fruit to capture, record, and analyze, and doing so is entirely feasible for any ISP. On the other hand, capturing, saving, and analyzing every TCP port 443 packet for a large ISP network would require an insane amount of storage and computation power - hopefully costing a much greater number of dollars than the possible profit an ISP could expect to generate by violating the privacy of all their subscribers.

My understanding is that about half of internet traffic is HTTP/3, and the protocol was designed specifically to encrypt headers and metadata so that a third party cannot analyze the packets to figure out the actual domain name requested. TLS 1.3 has also added an extension for Encrypted SNI, so if the domains you are visiting have implemented ESNI, then a third party cannot identify the domain or server name being requested over HTTPS.
When using DoH, your ISP can't see your DNS requests, but they can absolutely still see the IP of the thing you try to connect to right after making that DNS request,
In theory, but feeding off DNS packets is a much smaller volume of traffic for an ISP to sniff - it is extremely easy and much lower cost, since the volume of DNS packets is going to be minuscule compared to the volume of HTTPS packets traversing their networks. With DNS, the ISP just places a small inexpensive box on the network, sold by one of the companies that specializes in messing with your customers' DNS traffic - probably handling auto-redirection of "non-existent domains" to ad-supported search pages as well. On the other hand, sniffing every single port 443 packet and deconstructing the headers is a much higher amount of computation, so at least you are making privacy invasion more expensive. Hopefully expensive enough that they give it up.

Also, there is the same issue as with just using NetFlow to track customer surfing: a single web server IP address often hosts many websites. You can be tunneling your HTTPS connections through a proxy, another privacy service, or a VPN, while your DNS requests simply leak through your main connection, which is common. You can be hitting websites behind a Cloudflare reverse proxy IP, where there are hundreds or thousands of domains virtually hosted on the same IP address.

--
-JA

On Wed, Jul 16, 2025 at 10:14:02AM -0400, Tom Beecher via NANOG <nanog@lists.nanog.org> wrote a message of 21 lines which said:
Now that everyone has gotten the RPKI rage out of their system, Cloudflare is taking responsibility for this event. Explicitly stated it wasn't a hijack, but their own mistake.
https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/
Yes. See also: https://anuragbhatia.com/post/2025/07/cloudflare-dns-outage/ for a lot of details.

https://blog.cloudflare.com/cloudflare-1-1-1-1-incident-on-july-14-2025/ https://anuragbhatia.com/post/2025/07/cloudflare-dns-outage/
nice analyses of causes. but my interest was in the effect of rov on constraining propagation within the topology. randy

Hi Randy,

AS4755 does not do ROV. AS6453 does but only for peers, not downstream side yet. From measurements, I see not all but quite a few invalids in their full table. E.g. invalid.rpki.isbgpsafeyet.com - AS13335 - intentionally kept with an AS0 ROA and on the downstream side of AS6453. Lookup via their looking glass:

[looking glass screenshot: image not included in the archive]

Comparing it with an invalid coming from their peering side, 2a02:26f0:128::/48 - originated by AS6762 (peer of AS6453):

[looking glass screenshot: image not included in the archive]

Some of this confusion comes from the incorrectly shown status of AS6453 on https://isbgpsafeyet.com.

Thanks.
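(A quick, hedged way to test this from an end host is to compare reachability of the ROV test beacons; invalid.rpki.isbgpsafeyet.com is the hostname mentioned above, while the "valid." counterpart is my assumption by analogy with the isbgpsafeyet.com test, so verify both names before relying on the result.)

# If only the valid beacon is reachable, ROV is likely in effect somewhere
# upstream; if both are reachable, the path is not dropping invalids.
import requests

def reachable(url):
    try:
        requests.get(url, timeout=5)
        return True
    except requests.RequestException:
        return False

valid_ok = reachable("https://valid.rpki.isbgpsafeyet.com/")
invalid_ok = reachable("https://invalid.rpki.isbgpsafeyet.com/")

if valid_ok and not invalid_ok:
    print("RPKI-invalid beacon unreachable: ROV appears to be in effect on this path.")
elif valid_ok and invalid_ok:
    print("RPKI-invalid beacon reachable: no ROV on this path.")
else:
    print("Inconclusive: the control (valid) beacon is unreachable.")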
--
Anurag Bhatia
anuragbhatia.com

anurag,
AS4755 does not do ROV. AS6453 does but only for peers, not downstream side yet.
these years, this seems ill-advised. back in the day, when we had less experience with rov, it might have been reasonably conservative; "they're paying me to accept." imiho, today it would seem to fall into a similar class as not in as-set: etc. but i gather opinions vary. randy
participants (19)

- Anurag Bhatia
- Constantine A. Murenin
- Francis Booth
- Gary Sparkes
- Javier J
- Jay Acuna
- Laszlo H
- Marc Binderberger
- Marco Davids (Private)
- Marco Moock
- Mel Beckman
- Noah van der Aa
- Paul Ebersman
- Randy Bush
- Robert Kisteleki
- Rubens Kuhl
- Saku Ytti
- Stephane Bortzmeyer
- Tom Beecher