You know, I've felt the same irritation before, but one thing I am wondering - and perhaps some folks around here have been around long enough to know - what was the original thinking behind allocating those /8s?
Read your network history. In the beginning all allocations were /8s; in fact the slash notation hadn't been invented yet. Network numbers were 8 bits, with a 24-bit host id appended. Then someone realised that the net was growing really fast, so they invented class A, B and C addresses, in which the network numbers were 8, 16 or 24 bits respectively. You could tell which class by looking at the leading bits of the address: 0 meant class A, 10 meant class B, and 110 meant class C. In that period only very big organizations got class A allocations, mid-sized ones got class B, and small ones got class C. In practice, some smaller organizations ended up with multiple class C blocks that could not be announced as a single route, because aggregation didn't exist yet.

Later on some clever folks invented VLSM for the routers, which allowed network ops folks to invent CIDR. That was when people really got interested in justifying the size of an allocation and working from 3-month or 6-month requirements. This is when ARIN was created, so that the community had some input into how things were done. But nobody could really unroll the past, only clean up the bits where people were changing things around anyway; that, for instance, is how Stanford's /8 ended up being returned.

Lots of folks believed that VLSM and CIDR were only stopgap measures, so around the same time they invented IPv6. It was released into network operations around 10 years ago, which is why most of your network equipment and servers already support it. But that's all water under the bridge. It's too late to do anything about IPv4: the ROI just isn't there any more, and it doesn't remove the need to invest in IPv6. The network industry has now reached consensus that IPv6 is the way forward, and you have to catch the wave or you will drown in the undertow.

--Michael Dillon
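As a minimal sketch of the classful scheme described above (the addresses and function name are illustrative, not from the post): the leading bits of the first octet pick the class, and the class fixes the network/host split. The last two lines show the contrast with CIDR, where adjacent class C blocks collapse into a single announcement.

    import ipaddress

    def classful_split(addr):
        """Return (class, network prefix length) under the old classful rules."""
        first_octet = int(ipaddress.IPv4Address(addr)) >> 24
        if first_octet >> 7 == 0b0:    # leading bit 0    -> class A, 8 network bits
            return "A", 8
        if first_octet >> 6 == 0b10:   # leading bits 10  -> class B, 16 network bits
            return "B", 16
        if first_octet >> 5 == 0b110:  # leading bits 110 -> class C, 24 network bits
            return "C", 24
        return "D/E", 0                # multicast / reserved, no network-host split

    for a in ("18.0.0.1", "172.16.0.1", "192.0.2.1"):
        cls, prefix = classful_split(a)
        print(f"{a}: class {cls}, {prefix} network bits, {32 - prefix} host bits")

    # CIDR, by contrast, lets adjacent blocks aggregate into one route:
    blocks = [ipaddress.ip_network("198.51.100.0/24"),
              ipaddress.ip_network("198.51.101.0/24")]
    print(list(ipaddress.collapse_addresses(blocks)))  # -> [IPv4Network('198.51.100.0/23')]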