
A situation I’ve seen often with SMBs is two or more ISPs with WAN failover or load-balancing mechanisms built into their firewall. This requires either running your own local caching resolver that queries the root name servers, paying for a third-party DNS service, or somehow ensuring DNS requests get routed to the appropriate ISP’s name servers, because “crossing the streams” will fail every time. Or one can just use a public DNS server, the minimal-effort “free” solution. As we all know, public DNS isn’t really free. You’re giving up your DNS eyeball information in exchange, which the public DNS operator happily sells to the highest bidder. And then there is the NXDOMAIN concession, in which you tacitly agree to accept ads in place of name-not-found responses. As long as there is a “free” solution that doesn’t cause the implementer pain (ignoring user impacts), it will be popular :) -mel
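As a rough sketch of the first option above (a local caching resolver that queries the root name servers itself), something along these lines would do it; unbound is used purely as an example, and the address is illustrative RFC 5737 space rather than anything from the thread:

server:
    interface: 192.0.2.1    # LAN-facing listener (illustrative address)
    prefetch: yes           # refresh popular cache entries before they expire
    # With no forward-zone configured, unbound recurses from its built-in
    # root hints, so lookups don't depend on either ISP's resolvers being
    # reachable after a failover.

Internal clients would then point at that address regardless of which WAN link is active.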
On Jul 18, 2025, at 7:03 AM, Marc Binderberger via NANOG <nanog@lists.nanog.org> wrote:
On Thu, 17 Jul 2025 15:03:01 -0400, Tom Beecher via NANOG wrote:
With RFCs, no. With BCP, the middle letter is generally relevant to the discussion.
Are we talking about BCP-140, aka RFC 5358 ("Preventing Use of Recursive Nameservers in Reflector Attacks")?
Well, it's both a BCP and an RFC - which statement above wins? ... ;-)
Joking aside, I don't see why this BCP would not be relevant today. If you run an open recursive DNS server on the Internet, this still seems to me a valid document to consider.
But "to consider" does not mean "it's the law". Everyone who willfully runs into these known problems (by setting up a public DNS resolver, I mean) simply has to assign the necessary resources to handle them. And I assume Google, CF & Co do this.
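For anyone implementing it, the operative advice in BCP-140 amounts to not answering recursive queries from arbitrary sources. A rough unbound sketch of that policy (the prefixes are illustrative, and unbound already refuses unlisted sources by default):

server:
    access-control: 192.0.2.0/24 allow     # recursion for internal IPv4 clients
    access-control: 2001:db8::/32 allow    # ...and the internal IPv6 prefix
    access-control: 0.0.0.0/0 refuse       # everyone else gets REFUSED
    access-control: ::0/0 refuse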
In any case, my original question was not asked with BCP-140 in mind (but thanks to Rubens for pointing it out!). I was wondering why one should or should not use these DNS servers. Thanks for all the comments; I am always surprised how complex even "basic" things like DNS turn out to be.
And yes, I was wondering if the redundancy - or centralization - of the Internet is something to consider. My personal read on all the comments is that the N.N.N.N public servers are good backup forwarder solutions, but for the sake of a decentralized, robust Internet one should implement a better "Plan A". And don't forget BCP-140 when you implement that plan ;-)
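One way to read that "Plan A plus a public backup" on the client side is simply the ordering of the stub resolver's server list, e.g. in a glibc-style /etc/resolv.conf; the local resolver address is illustrative and 9.9.9.9 stands in for any public resolver:

# Plan A: the local recursive resolver (illustrative address)
nameserver 192.0.2.1
# Backup: a public resolver, only consulted if the first one fails to respond
nameserver 9.9.9.9
options timeout:2 attempts:2

With that ordering, day-to-day queries stay on the local resolver and the public one is only consulted when it stops answering.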
Regards, Marc
On Thu, 17 Jul 2025 15:03:01 -0400, Tom Beecher via NANOG wrote:
RFC 1035 is still what defines DNS, hasn't been obsoleted, and is from 1987. Perhaps age is not the main factor in defining obsolescence?
With RFCs, no.
With BCP, the middle letter is generally relevant to the discussion.
On Thu, Jul 17, 2025 at 2:40 PM Rubens Kuhl via NANOG <nanog@lists.nanog.org> wrote:
On Thu, Jul 17, 2025 at 1:18 PM Paul Ebersman via NANOG <nanog@lists.nanog.org> wrote:
This raises my question: are public DNS resolvers like 1.1.1.1 or Google's 8.8.8.8 actually a good thing?
rubensk> According to BCP-140, no, not a good thing.
That BCP is from 2015...
RFC 1035 is still what defines DNS, hasn't been obsoleted, and is from 1987. Perhaps age is not the main factor in defining obsolescence?
Rubens