I think the key is that the failures described in the paper are caused by overload rather than anything else: too much demand for power blows out a generator, and without it, the grid tries to pull that power from the next-nearest generators, which overload and fail in turn, and try to pull an even larger amount from the _next_ nearest, and so on. So the bit about heterogeneity is probably referring to the fact that some nodes are bigger or better-connected than others, and are more likely to blow out a bunch of their neighbors when they fail and shed a big load.

That's not really how Internet systems usually fail. Overload can cause problems, and we've seen congestion collapse in the past, but TCP is tuned to discourage it: when a system is overloaded, well-behaved applications (which is most of them) back off, gradually or rapidly, and unless the load is weird enough to blow out router CPUs or crowd out BGP and OSPF packets, the network itself usually stays up and running. If what's failing is an overload of BGP routes or something similar, that's different - sometimes the load on the system shrinks as components fail, but sometimes failures just make everything flap at once, increasing load and delaying convergence.
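To make that cascade mechanism concrete, here's a toy sketch of how I read the model: every node has a load and a capacity, and when a node fails, its load is shed onto its surviving neighbors, which may push them over _their_ capacity in turn. Everything here (the graph, the numbers, the equal-share redistribution rule) is invented for illustration, not taken from the paper:

```python
from collections import deque

def cascade(neighbors, load, capacity, first_failure):
    """Return the set of nodes that fail once `first_failure` blows out.

    neighbors: dict node -> list of adjacent nodes
    load, capacity: dict node -> float (load is mutated as it is shed)
    """
    failed = set()
    queue = deque([first_failure])
    while queue:
        node = queue.popleft()
        if node in failed:
            continue
        failed.add(node)
        # Shed the dead node's load equally onto its surviving neighbors.
        alive = [n for n in neighbors[node] if n not in failed]
        if not alive:
            continue  # nowhere to shed to: the load is simply lost
        share = load[node] / len(alive)
        load[node] = 0.0
        for n in alive:
            load[n] += share
            if load[n] > capacity[n]:  # neighbor is now overloaded too
                queue.append(n)
    return failed

# One big, well-connected hub and three small leaves.
neighbors = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
capacity  = {"hub": 10.0, "a": 4.0, "b": 4.0, "c": 4.0}

# Losing the hub sheds 7.0 across three leaves (~2.33 each), overloading them all...
print(cascade(neighbors, {"hub": 7.0, "a": 2.0, "b": 2.0, "c": 2.0}, capacity, "hub"))
# ...while losing a leaf sheds only 2.0 onto the hub, which absorbs it.
print(cascade(neighbors, {"hub": 7.0, "a": 2.0, "b": 2.0, "c": 2.0}, capacity, "a"))
```

The hub example is the heterogeneity point in miniature: the big node takes everyone down with it, while a small node's failure is absorbed.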
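And for contrast, a similarly crude sketch of why TCP-style overload tends to self-correct rather than cascade: senders that see loss halve their rate and only creep back up, so the total offered load sawtooths around capacity instead of snowballing. The loss signal here (everyone backs off whenever total demand exceeds capacity) is a gross simplification of real AIMD congestion control, and all the numbers are made up:

```python
# Additive increase, multiplicative decrease for one sender's rate.
def aimd_step(rate, saw_loss, increase=1.0, decrease=0.5):
    return rate * decrease if saw_loss else rate + increase

capacity = 100.0
rates = [20.0] * 8  # eight well-behaved senders, initially over capacity
for t in range(12):
    overloaded = sum(rates) > capacity
    rates = [aimd_step(r, overloaded) for r in rates]
    print(t, round(sum(rates), 1))  # oscillates around capacity, never snowballs
```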