HTTP is not nearly as cacheable as you would think, and caching it has some bad side effects in many cases - which your customers will likely bitch about.
(temptation to advertise a product here resisted with some difficulty.)
Let's say you can cache 50% of the HTTP traffic, which, frankly, from what I've seen is HIGHLY aggressive, but I'll be nice and give you that for the sake of argument.
50% is easy with two-level caching. You just need fat pipes between the two levels, high availability at the root of the hierarchy, and a LOT of users to get as much variety as possible in the requests. I've seen 65% when the wind was behind it.
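For what it's worth, a rough sketch of how the two levels compound - the per-level hit rates below are illustrative assumptions on my part, not measurements:

    def combined_hit_rate(l1_hit, l2_hit):
        # Requests that miss the first-level cache go upstream to the
        # parent; only misses at BOTH levels leave the hierarchy.
        return l1_hit + (1.0 - l1_hit) * l2_hit

    # e.g. a 35% child cache in front of a 30% parent cache:
    print(combined_hit_rate(0.35, 0.30))   # ~0.55, in the 50-65% ballpark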
OK, so it's only 500:1, assuming 50% effectiveness on the HTTP side.
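(The arithmetic, sketched out: the 500:1 figure implies an underlying oversubscription ratio of around 1000:1, since only the cache misses have to cross the upstream pipe - the 1000:1 is inferred, not stated.)

    raw_ratio = 1000            # assumed, inferred from the 500:1 result
    hit_rate  = 0.50            # the 50% granted for the sake of argument
    effective = raw_ratio * (1.0 - hit_rate)
    print(effective)            # 500.0 -> still 500:1 oversubscribed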
It still won't work.
Now, if you intend to rate-shape (as opposed to tossing packets on the floor when you get overcommitted), then you ARE committing fraud if you don't tell the truth about it. And, frankly, the customer really gets hosed with this kind of model, because you have to be pretty predictive for it to give you any net gain in effective utilization - which means you apply the chokes BEFORE the peak levels get hit.
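To make the shape-versus-drop distinction concrete, here's a minimal token-bucket shaper sketch (the names and parameters are mine, purely illustrative): with shaping, overcommitment shows up as queueing delay - the choke - rather than as dropped packets.

    import time

    class TokenBucketShaper:
        # Minimal sketch: shaping delays traffic when the bucket runs dry,
        # instead of tossing packets on the floor.
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0          # bytes per second
            self.capacity = float(burst_bytes)
            self.tokens = float(burst_bytes)
            self.last = time.monotonic()

        def send(self, size_bytes):
            # Returns how long the sender must wait before this packet may
            # go out; the delay is the "choke" applied instead of a drop.
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            self.tokens -= size_bytes
            if self.tokens >= 0:
                return 0.0                      # under the rate: send now
            return -self.tokens / self.rate     # overcommitted: wait, don't drop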
And this differs from the cable modem Internet market in precisely which way?