On Jun 14, 2011, at 5:50 PM, Ricky Beam wrote:
> On Tue, 14 Jun 2011 18:16:10 -0400, Owen DeLong <owen@delong.com> wrote:
>> The point of /64 is to support automatic configuration and incredibly sparse host addressing. It is not intended to create stupidly large broadcast domains.
> Several IETF (and NANOG) discussions say otherwise. While current hardware doesn't handle thousands of hosts, the protocol was designed for a future where that's not true. (There's a future where *everything* is network-enabled... microwave oven, doorbell, weed whacker, everything.)
Sure, but that future still doesn't need stupidly large numbers of hosts on a common link.
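To make the autoconfiguration point above concrete: with a /64 on the link, a host can form its own address by appending a 64-bit interface identifier (classically a modified EUI-64 derived from its MAC, per RFC 4291) to the advertised prefix, with no per-host state kept anywhere. That 64-bit split is also what makes the host space so sparse: 2^64 identifiers per link no matter how few hosts are attached. A minimal sketch, using made-up prefix and MAC values:

```python
# Sketch: stateless autoconfiguration with a modified EUI-64 interface ID.
# The prefix and MAC below are made-up, illustrative values.
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Append a modified EUI-64 interface ID (from the MAC) to a /64 prefix."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")          # 64-bit interface identifier
    return ipaddress.IPv6Network(prefix)[iid]          # 64-bit prefix + 64-bit IID

print(slaac_address("2001:db8:0:1::/64", "00:1a:2b:3c:4d:5e"))
# -> 2001:db8:0:1:21a:2bff:fe3c:4d5e
```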
>> A /22 is probably about the upper limit of a sane broadcast domain, but even with a /22 (1022 nodes max), each sending a packet every 10 seconds, you don't get to hundreds of pps; you get 102.2 pps.
> As I said, DHCP isn't the only source of traffic. Set up a 1000-node network today (just IPv4) and you will see a great deal of broadcast traffic (unless those nodes aren't doing anything). With IPv6, it's all multicast (v6 doesn't have a "broadcast address"), which hinges on switches filtering the traffic away from where it doesn't need to be. The all-too-common Best Buy $20 white-box Ethernet switch does no multicast filtering at all. Pretty much all wireless hardware sucks at multicast, period. These are not things that can be fixed with a simple software update... if the silicon doesn't do it, *it doesn't do it*.
Depends on a number of factors. Yes, there are lots of issues. However, they aren't caused by the small number of additional packets from DHCP.

Owen
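For concreteness, a quick back-of-envelope check of the /22 figure quoted above (a minimal sketch; the one-packet-per-10-seconds rate is just the assumption stated in the quote):

```python
# Back-of-envelope check of the quoted figure: an IPv4 /22 has
# 2**(32-22) - 2 = 1022 usable host addresses; if every host sends one
# packet every 10 seconds, the aggregate rate is about 102.2 packets/s.
hosts = 2 ** (32 - 22) - 2          # 1022 usable addresses in a /22
interval_s = 10                     # one packet per host every 10 seconds
print(hosts / interval_s)           # -> 102.2 pps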
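On the multicast point: IPv6 neighbor discovery sends a neighbor solicitation to the target's solicited-node multicast group (RFC 4291), which maps onto an Ethernet multicast MAC (RFC 2464). A switch that does MLD snooping can keep that traffic off ports that don't need it; a cheap switch with no multicast filtering floods it exactly like broadcast. A minimal sketch of the two mappings, reusing the illustrative address from the earlier sketch:

```python
# Sketch: map a unicast IPv6 address to its solicited-node multicast group
# and to the corresponding Ethernet multicast MAC (illustrative address).
import ipaddress

def solicited_node(addr: str) -> ipaddress.IPv6Address:
    """ff02::1:ff00:0 plus the low 24 bits of the unicast address (RFC 4291)."""
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)

def multicast_mac(group: ipaddress.IPv6Address) -> str:
    """33:33 followed by the low 32 bits of the multicast group (RFC 2464)."""
    low32 = int(group).to_bytes(16, "big")[-4:]
    return "33:33:" + ":".join(f"{b:02x}" for b in low32)

group = solicited_node("2001:db8:0:1:21a:2bff:fe3c:4d5e")
print(group, multicast_mac(group))
# -> ff02::1:ff3c:4d5e 33:33:ff:3c:4d:5e
```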