Do we actually know this wrt the tools referred to in "the total loss of
DNS broke many of the tools we’d normally use to investigate and resolve
outages like this."? Those tools aren't necessarily located in any of
the remote data centers, and some of them might even refer to resources
outside the Facebook network.
Yea; that's kinda the thinking here. Specifics are scarce, but there were notes re: the OOB, for instance, also being unusable. The open questions are how much that was due to the OOB network depending on the production side, and how much DNS being notionally available might have helped get things back off the ground (whether it would just have provided mgmt addresses for key devices, or whether there was an AAA dependency that also rode on DNS). That's not to say there aren't other design considerations needed to make that fly (e.g. if DNS lives in edge POPs, and an edge POP gets isolated from the FB network but still has public Internet peering, how do we ensure that POP doesn't keep exporting the DNS prefix into the DFZ and serving stale records?), but those seem solvable too.
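The edge-POP concern boils down to gating the anycast announcement on POP health. Here's a minimal sketch of that decision logic; all the names and health signals are hypothetical illustrations, not Facebook's actual implementation:

```python
# Sketch of the edge-POP safeguard discussed above: an anycast DNS node
# should only announce its service prefix into the DFZ while it can still
# reach the backbone and its zone data is fresh. Names here are illustrative.

from dataclasses import dataclass

@dataclass
class PopHealth:
    backbone_reachable: bool  # can this POP reach the internal backbone?
    zone_data_fresh: bool     # is the local zone copy within its staleness budget?
    peering_up: bool          # does the POP still have public Internet peering?

def should_announce_dns_prefix(h: PopHealth) -> bool:
    """Gate the BGP announcement of the anycast DNS prefix.

    A POP that is isolated from the backbone but still peered with the
    public Internet would otherwise keep serving stale records, so the
    announcement is withdrawn unless the POP is both connected and fresh.
    """
    return h.peering_up and h.backbone_reachable and h.zone_data_fresh

# The failure mode discussed: isolated from FB network, but still peered.
isolated = PopHealth(backbone_reachable=False, zone_data_fresh=False, peering_up=True)
healthy = PopHealth(backbone_reachable=True, zone_data_fresh=True, peering_up=True)
print(should_announce_dns_prefix(isolated))  # False -> withdraw prefix
print(should_announce_dns_prefix(healthy))   # True  -> keep announcing
```

Worth noting the tension, though: "withdraw when unhealthy" is reportedly roughly what Facebook's DNS servers already did, and when the backbone went away everywhere at once, every POP withdrew, which is part of why DNS vanished entirely. Any design like this has to decide what happens when *all* sites look unhealthy simultaneously.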
I'm sure they'll learn from this and in the future have some better things in place to account for such a scenario.
100%
I think we can say with some level of confidence that there is going to be a lot of discussion and re-evaluation of inter-service dependencies.