I am not sure whether opening up the B space for /17 blocks is particularly dangerous, but in the absence of a single consistent policy body with sufficient clue about both the Tier-1 backbone issues and the address allocation issues, it's hard to fault any given ISP for insisting on /16s in B space.
Sounds good, but what exactly does that mean? Does any end network capable of justifying a /24 then get a routable chunk, thus blowing up the tables? What if you could do it based on traffic generation? That would be difficult to verify, and the definition of a 'large' amount of traffic is ever changing. And if we say that a /20 is a sufficiently large amount of space to justify a routable chunk, then any network that size could get it from ARIN anyway, and we're back to square one.

In the far term, as space becomes scarce, we will need to find a solution to wasted B space, but that is several years out. Perhaps by that time routers will have so much memory and CPU that an extra ~4 million possible routes would be negligible.

Austin
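(For what it's worth, a quick back-of-the-envelope check on the ~4 million figure, assuming it counts the former class B space, 128.0.0.0/2, carved entirely into /24 routes; the snippet is just illustrative arithmetic, not anyone's proposed policy:)

    import ipaddress

    # Former class B space: 128.0.0.0 - 191.255.255.255, i.e. 128.0.0.0/2
    # (an assumption about what the ~4 million figure refers to).
    class_b_space = ipaddress.ip_network("128.0.0.0/2")

    # Worst case: the whole block deaggregated into /24 routes.
    slash24_size = 2 ** (32 - 24)
    possible_routes = class_b_space.num_addresses // slash24_size
    print(possible_routes)  # 4194304, i.e. roughly 4.2 million routes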