Dirk Harms-Merbitz <dirk@power.net> wrote:
Would you please describe any useful mechanism of traffic-based inter-backbone settlements?
Computers (third party?) on the edges of my network count the number of packets that transit my network over a certain time period. Periodically I issue invoices.
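For concreteness, a minimal sketch of what such a metering-and-invoicing scheme might look like; the per-peer counters, billing period, and especially the price-per-packet figure are all hypothetical placeholders, and the last one is exactly the number in dispute below:

    # Hypothetical sketch of traffic-based settlements: edge devices export
    # per-peer counts of packets that transited our network, and we bill them.
    # The price-per-packet figure is an arbitrary input, not derived from anything.

    def invoice(transit_counts, price_per_packet):
        """Turn per-peer transit packet counts into charges for one billing period."""
        return {peer: packets * price_per_packet
                for peer, packets in transit_counts.items()}

    # Example billing run for one month (all numbers made up).
    monthly_counts = {
        "backbone-A": 4_200_000_000,   # packets carried in transit for peer A
        "customer-B": 150_000_000,
    }
    print(invoice(monthly_counts, price_per_packet=0.000001))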
Ok. How do you translate the number of packets into a figure in dollars? In a capitalist economy, price generally follows value. What is the value of a packet? It is not even clear whether a packet crossing from your network to my network gives _me_ any value; it can just as well be for _your_ benefit. Since there's no way to determine value, the scheme is completely arbitrary; and therefore _is_ regressive. Economics 101.
Of course, other networks do the same. End users (including co-located computers) simply appear as connected networks that don't provide any transit, hence they pay for their connection.
They do it today. Where's the difference? You seem to forget that carrying traffic is only a part of network service; in fact, a lot of the _value_ is in the _ability_ to communicate universally. That's what made the Internet a killer for all those X.25 networks.
Because there is a real cost for long-haul packet transport.
Remember that I said "if the same amount of packets are transferred". The speed of my connection does not necessarily affect the number of packets that I'm transmitting.
When you lease an apartment, the landlord doesn't generally care if you don't sleep there six days out of seven. That's because he has an (implied) obligation to provide adequate service during peak usage.
"Connection costs" is rent on transmission facilities, plus overhead (upkeep of the property, insurance, administration, etc).
No. That's "transmission cost", not "connection cost". The cost of me connecting to your network (not counting setup and so on) is the cost of the interface electronics on both ends plus whatever link we need in between.
Connection costs are generally figured into non-recurring charges. They usually do not exceed costs of equipment and overhead.
Off on a tangent... creating bandwidth is somewhat like making computer chips. Making the first production Pentium (or whatever) processor costs billions. The second is a few cents. Here, laying the cable is expensive. Once it is in the ground, the cost for transporting another packet is almost zero.
This is an example of a patently meaningless analogy. _Every_ business (even pyramid scams) has some capital costs. So what. Telcos borrow money, put in fiber, and then pay the loans off from the resulting revenues. They can also borrow capital internally, from their shareholders (in the form of reduced dividends). The fact that the fiber is not loaded 100% all the time is figured into the user fees.
An interesting question is what to do with potential bandwidth/CPU cycles/free RAM, i.e. what are the costs of not using available capacity? One answer is that the cost is zero until your competitor starts being more efficient than you are. See the Soviet Union vs the USA.
There's no secret that the name of the game in the Internet backbones is "shove congestion back to competitor's backbone".
Back to networks... it would be logical to assume that whenever people deploy fiber they drop as much as they can afford at that moment.
You will be surprised. Sprint has 6 (six) strands of fiber in its trunks. The cost of putting in 100 strands would have been purely incremental, but some bean counters figured that those six strands would be good forever. Now it's digging time again.
If that's true then there must be enormous amounts of unlit fiber.
In some places. Generally, LDCs are out of capacity.
What's the total capacity going across the Atlantic or the Pacific?
Not much. It's not like surface cables, where the real cost is digging a trench and buying rights of way. It costs next to nothing to drop undersea cable off the ship; but the cable itself, with all its amplifiers, costs a fortune.
How much of it is being used?
Practically all of it. BTW, it is generally priced in DS-0 chunks; i.e. when you get a DS-1, you pay the same as for 24 DS-0s.
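By way of arithmetic (the channel sizes are the standard DS hierarchy; the per-DS-0 price is an invented placeholder):

    # DS hierarchy arithmetic: one DS-1 carries 24 DS-0 channels of 64 kbit/s each.
    DS0_KBPS = 64
    DS0S_PER_DS1 = 24

    ds1_payload_kbps = DS0_KBPS * DS0S_PER_DS1        # 1536 kbit/s of payload
    price_per_ds0 = 100.0                             # hypothetical monthly price, $
    price_per_ds1 = price_per_ds0 * DS0S_PER_DS1      # billed as 24 DS-0s, no bulk discount

    print(ds1_payload_kbps, price_per_ds1)            # 1536 kbit/s, $2400/month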
I'd bet that there are many terabits of fiber that are not in use.
Not at all. The worst part of it is that you can't just install WDM equipment at the ends and get more capacity. Replacing the amplifiers in an existing undersea cable is about as costly as putting in a new cable.
Yes, of course, the closer you get to what's technologically feasible (the closer you move to the edge of the future), the more expensive things become. I guess that's the main argument used to defend higher costs for faster connections.
Never underestimate the bandwidth of 10 sq feet of fiber strands.
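A rough back-of-the-envelope behind that quip; the strand diameter and per-strand rate are assumptions, not anything measured:

    import math

    # How much raw capacity fits through a 10 sq ft cross-section of fiber strands?
    cross_section_m2 = 10 * 0.0929               # 10 sq ft in square metres
    strand_diameter_m = 250e-6                   # coated single-mode strand, ~250 microns
    strand_area_m2 = math.pi * (strand_diameter_m / 2) ** 2
    strands = cross_section_m2 / strand_area_m2  # ignores cabling and jacketing overhead
    per_strand_bps = 2.5e9                       # one OC-48 signal per strand
    print(f"{strands:.1e} strands, {strands * per_strand_bps:.1e} bit/s aggregate")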
But we don't even need to go to that edge. Building gigabit, if not terabit, routers is amazingly simple and can be done with off-the-shelf technology.
Tell me about it.
Hey, a few Linux boxes interconnected with a few 100 Mb/s Ethernet switches in the right fashion would allow the creation of a super NAP that should easily outperform gigaswitches.
Not _that_ simple. I won't go into details on that, but so far the only known way to do it is pretty much covered by pending patents.
Summary: I would not be surprised if the packet-carrying capacity of the Internet could be increased by two or three orders of magnitude with surprisingly little investment. The real challenge is how to get people to do that.
Four orders. Then something different is needed. --vadim