RE: concern over public peering points [WAS: Peering point speed publicly available?]
-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Mikael Abrahamsson
Sent: Saturday, July 03, 2004 10:22 AM
To: nanog@merit.edu
Subject: Re: concern over public peering points [WAS: Peering point speed publicly available?]
On Sat, 3 Jul 2004, Laurence F. Sheldon, Jr. wrote:
Does the person that sweeps the floor do so for free? And supply the broom?
The marginal cost of half a rack being occupied by an IX switch in a multi-hundred-rack facility is negligible. Yes, it should carry a cost of a few hundred dollars per month in "rent", and the depreciation of the equipment is also a factor, but all in all these costs are not high, and if an IX point rakes in $200k a year that should well compensate for these costs.
-- Mikael Abrahamsson email: swmike@swm.pp.se
At the Seattle Internet Exchange (granted, a smaller peering exchange), you have to account for the following costs (and, mind you, this list is not exhaustive):

1) 1 Rack
2) Space for the rack in a secure facility
3) AC for the equipment
4) Power for the equipment (including line and UPS)
5) Fiber and Copper runs to the facility for cross-connects
6) Terminations of (5)
7) O&M of space and gear
8) Layer 8 and 9 negotiation of (1) through (7) to keep costs down.

That's not a trivial set of expenses, particularly when there are limitations in place on recovering costs via non-cash methods, such as advertising the hosting of the exchange. Thankfully, there is some altruism on the part of several parties that allows the exchange to continue providing "zero cost" connections to participants. I hardly think the cost of their time and effort is "marginal".

Mike
NoaNet
On Sat, 3 Jul 2004, Michael Smith wrote:
1) 1 Rack
2) Space for the rack in a secure facility
3) AC for the equipment
4) Power for the equipment (including line and UPS)
This can be had for approx $300-1000 a month in my market.
5) Fiber and Copper runs to the facility for cross-connects
6) Terminations of (5)
This is carried on a per connection basis in my market.
7) O&M of space and gear
$50-100k over three years isn't that much.
8) Layer 8 and 9 negotiation of (1) through (7) to keep costs down.
I'd say that the time spent in negotiations is wasted; manpower is too expensive compared to the costs involved.
Thankfully, there is some altruism on the part of several parties that allows the exchange to continue providing "zero cost" connections to participants. I hardly think the cost of their time and effort is "marginal".
In the big picture it's marginal. Asking someone to patch a cable is a 10-minute job and the patch cable costs perhaps 30-50 dollars. Handling an invoice for this job is a major cost in the equation, so yes, altruism is great.

We gathered players that already had engineers, already had billing departments, already had all of the above you were referring to, and got everybody to agree on a way to cooperate. The marginal cost for everybody to establish 5 PoPs and interconnect them was quite low, and since there is no billing being done between participants, that cuts down on paperwork as well.

It's like a car pool. If everybody is going to bill everybody, it's going to be a big operation. If you just agree to drive every fourth day and carry your own costs, everybody is better off.

I realise from everybody who answered that we live in different markets and do things differently. I just think you're making it too advanced, and that increases cost until public IXes stop making sense. Keep it simple.

-- Mikael Abrahamsson email: swmike@swm.pp.se
On Sat, Jul 03, 2004 at 10:57:20AM -0700, Michael Smith wrote:
At the Seattle Internet Exchange (granted, a smaller peering exchange), you have to account for the following costs (and, mind you, this list is not exhaustive).
1) 1 Rack
2) Space for the rack in a secure facility
3) AC for the equipment
4) Power for the equipment (including line and UPS)
5) Fiber and Copper runs to the facility for cross-connects
6) Terminations of (5)
7) O&M of space and gear
8) Layer 8 and 9 negotiation of (1) through (7) to keep costs down.
That's not a trivial set of expenses, particularly when there are limitations in place on recovering costs via non-cash methods, such as advertising the hosting of the exchange.
Thankfully, there is some altruism on the part of several parties that allows the exchange to continue providing "zero cost" connections to participants. I hardly think the cost of their time and effort is "marginal".
Which means that SIX's costs would be completely covered by charging each member with a GigE port $1k/mo. I would rather pay them the $1k/mo with the expectation that they will be able to obtain quality hardware (which, btw, doesn't necessarily mean running to their favorite vendor and asking for the most expensive product available), provide reliable service, handle growth, etc. I would not, however, pay them $14k/mo for the same service.

I count 68 active participants on the SIX website. I won't venture a guess as to how many have GigE ports, and a few are connected from PAIX, etc., but I would bet that there is more than enough business available to cover the costs of intelligent spending. You could probably still give away FastE ports for free, and pretty much assume that any major ISP who can afford the GigE port and sees value in connecting with the smaller guys will go ahead and pay for it.

-- Richard A Steenbergen <ras@e-gerbil.net> http://www.e-gerbil.net/ras GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
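To put rough numbers on that claim (the rack and O&M figures come from earlier in the thread; the count of 20 GigE members is purely an assumption, since the message doesn't say how many of the 68 participants have GigE ports), a quick back-of-the-envelope in Python:

rack_and_space_per_month = 1000   # high end of the $300-1000/mo estimate from the thread
om_three_years = 100000           # high end of the $50-100k O&M estimate from the thread
annual_cost = rack_and_space_per_month * 12 + om_three_years / 3

gige_members = 20                 # assumption: how many of the 68 participants would pay for GigE
annual_revenue = gige_members * 1000 * 12

print(f"estimated annual cost:     ${annual_cost:,.0f}")
print(f"revenue at $1k/mo x {gige_members}:    ${annual_revenue:,.0f}")
print(f"surplus:                   ${annual_revenue - annual_cost:,.0f}")

Even with the high-end cost assumptions, the revenue comfortably exceeds the stated costs, which is the point being made.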
beware. six is funny. it's in seattle's carrier hotel, the westin, 32 floors of racks, more colo providers than fleas on a dawg, and very very low inter-suite fiber rates from the building owners. so, though the six does have a core, it is also kinda splattered into switches all over the building; with ease of connection and low cost being achieved at the expense of reliability. and costs are distributed along with the six infrastructure. so colo provider A may have a switch and charge $a to access it, while colo provider B may charge $b, where $b != $a. for a small local exchange this is ok, even cool. i would not want to do similarly in virginmania or palo attitude, and i would not join the six if i was a major player (only a research rack is on the six). my internal indirect costs would not be worth the traffic shed. randy
On Sat, Jul 03, 2004 at 01:39:03PM -0700, Randy Bush <randy@psg.com> wrote:
building owners. so, though the six does have a core, it is also kinda splattered into switches all over the building; with ease of connection and low cost being achieved at the expense of reliability.
Though that's true, the SIX has been extremely reliable: one unscheduled core outage in the last 3 years (about 30 minutes due to power loss). In one other case, an extension switch (7 peers) was disconnected for about 30 minutes to troubleshoot a potential problem. Peer-operated extension switches have also been very reliable. Most are above 99.9% availability including scheduled maintenance and 99.99% for unscheduled problems. The SIX's staffed 24x7 NOC lets peers treat it like any other carrier relationship, with one phone number to report a problem. Often the ops staff at national networks never know the SIX is non-profit or donation-supported. Peers of all sizes seem happy with the reliability. Everyone has open-posting mailing lists and an annual opportunity to elect the Board of Directors, so there is recourse if circumstances change. Cheers, Troy (SIX janitor)
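For scale, those availability figures translate into downtime as follows (plain arithmetic, not measurements taken from the SIX):

MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.2%} available -> about {downtime:.0f} minutes of downtime per year")

That is roughly 8.8 hours per year at 99.9% and under an hour per year at 99.99%.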
let's just say that my experience is not all that reliable. i suspect it varies greatly between colo/sub-switch providers. but considering the cost, i ain't got no complaints. qed. randy
On Sat, 3 Jul 2004, Richard A Steenbergen wrote:

> SIX's costs would be completely covered by charging each
> member with a GigE port $1k/mo.

Yes, but the SIX's costs are also completely covered _without_ charging anybody $1K/month. Go back and think about the purpose of an exchange: it's an economic optimization over transit. It's the value-add that lets someone who buys transit sell a service that's of greater value yet lesser cost than what they buy. Now, what's an exchange that costs more money? Less effective. So charging more money is just a way to make an exchange less effective, not more effective.

Some people who don't bother to run the numbers think that reliability is a justification for charging money at an exchange. I'd encourage them to run the numbers. An exchange which costs twice as much money needs to be twice as reliable to be equally effective. Ask yourself how many exchanges there are that drop more than half the bits on the floor, before you think of charging twice as much money. In other words, if it's not broke, don't fix it.

> I would bet that there is more than enough business available to
> cover the costs of intelligent spending.

While I won't categorically dub that an oxymoron, I'd say that the possibility of there being "intelligent spending" at an exchange is, um, extremely rare. Military intelligence. Jumbo shrimp. Microsoft Works.

> You could probably still give away FastE ports for free

Any economist, and most practical thinkers generally, will tell you that creating artificial inequities isn't terribly wise. Likewise, incenting bad behaviors like using too-small ports isn't terribly wise. In an exchange, if you want to avoid congestion, and have to charge money for some reason, you probably want to charge the same amount per port, regardless of port speed. But you're a lot better off just solving whatever problem is costing money in the first place.

-Bill
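One crude way to read the "twice the cost needs twice the reliability" arithmetic, with invented figures (this is an illustration of the argument, not Bill's own model):

def effectiveness(monthly_cost, delivered_fraction, traffic_mbps=1000):
    """Delivered traffic per dollar of monthly cost (Mbps per $/mo)."""
    return traffic_mbps * delivered_fraction / monthly_cost

cheap = effectiveness(monthly_cost=500, delivered_fraction=0.999)
pricey = effectiveness(monthly_cost=1000, delivered_fraction=0.999)

print(f"cheap exchange:  {cheap:.3f} Mbps per dollar/month")
print(f"pricey exchange: {pricey:.3f} Mbps per dollar/month")
# For the pricey exchange to match, its delivered fraction would have to double
# to about 2.0, i.e. more than 100% -- impossible unless the cheap exchange is
# dropping over half the bits on the floor.

Since the delivered fraction can never exceed 100%, a competently run free or cheap exchange can't be beaten on this measure by one that merely charges more, which is the point of the "drop more than half the bits" question.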
On Sun, 4 Jul 2004, Bill Woodcock wrote:
Go back and think about the purpose of an exchange: it's an economic optimization over transit. It's the value-add that lets someone who buys transit sell a service that's of greater value yet lesser cost than what they buy. Now, what's an exchange that costs more money? Less effective.
So charging more money is just a way to make an exchange less effective, not more effective.
Some people who don't bother to run the numbers think that reliability is a justification for charging money at an exchange. I'd encourage them to run the numbers. An exchange which costs twice as much money needs to be twice as reliable to be equally effective. Ask yourself how many exchanges there are that drop more than half the bits on the floor, before you think of charging twice as much money.
In other words, if it's not broke, don't fix it.
I'm going to quibble with Bill a bit on this. I'm going to attempt to do so carefully, both because I work for him, and because this is an area he knows much better than I do. ;)

Peering is often said to help a network (and thus add value) in two ways: by increasing performance and by reducing costs. Performance is said to increase because other networks are brought closer. Costs are said to go down because a network starts getting connectivity for free that it would otherwise be paying for. In well connected urban areas of the US, for all but the biggest networks, it's no longer clear that either of these arguments is valid.

The performance arguments are probably more controversial. The arguments are that shortening the path between two networks increases performance, and that removing an extra network in the middle increases reliability. The first argument holds relatively little water, since it's in many cases only the AS Path (not really relevant for packet forwarding performance) that gets shortened, rather than the number of routers or even the number of fiber miles. If traffic goes from network A, to network A's router at an exchange point, to network C, that shouldn't be different performance-wise from the traffic going from network A, to Network B's router at the exchange point, to Network C. Assuming none of the three networks are underprovisioning, the ownership of the router in the middle shouldn't make much difference. The reliability argument is probably more valid -- one less network means one less set of engineers to screw something up, but the big transit networks tend to be pretty reliable these days, and buying transit from two of them should be quite safe.

The pricing issues are simpler. There's a cost to transit (which is, to some degree, paying some other network to do your peering for you), and there's a cost to peering. Without a clear qualitative difference between the two, peering needs to be cheaper to make much sense. The costs of transit involve not just what gets paid to the transit provider for the IP transit, but also the circuit to the transit provider, the router interface connecting to the transit provider, engineering time to maintain the connection and deal with the transit provider if they have issues, and so forth. Costs of peering include not just the cost of the exchange port, but also the circuit to get to the exchange switch, sometimes colo in the exchange facility, engineering time to deal with the connection and deal with the switch operator if there are issues, and time spent dealing with each individual peer, both in convincing them to turn the session up, and dealing with problems affecting the session. Even if the port on the exchange switch were free, there would be some scenarios in which peering would not be cheaper than transit.

The situation changes considerably in less developed areas. The transit costs tend to be a lot higher (largely due to increased long-haul circuit costs), and there's a significant performance cost to having your traffic go hundreds or thousands of miles to get across town.

The argument against free exchanges is, I think, more of an argument in favor of full-service facilities, and the savings they provide in terms of operational engineering time. If there's a problem at 3 am at PAIX, Equinix, or NOTA (to pick three well-known North American commercial exchange operators), it's easy to pick up the phone and get it resolved.
When dealing with volunteers, or with an organization that doesn't have the budget for a 24/7 paid staff, there's at least a perception that it may be hard to find somebody who will make fixing somebody else's problem their top priority. Again, it becomes a matter of plugging that cost into the cost comparison, and figuring out whether it costs more to peer with or without that level of service.
> I would bet that there is more than enough business available to > cover the costs of intelligent spending.
While I won't categorically dub that an oxymoron, I'd say that the possibility of there being "intelligent spending" at an exchange is, um, extremely rare. Military intelligence. Jumbo shrimp. Microsoft Works.
To use the Seattle Exchange as an example again, it's not really fair to say there's no spending there. The spending is just hidden a bit better. The Equinix/PAIX/NOTA model involves the exchange operator operating the switch, as well as a big datacenter and cable plant for the customers. Customers generally buy the exchange port, the colo space, and some private crossconnects, and pay the exchange operator money for all of that.

At the SIX, all the exchange organization runs is the stack of switches. The closet the exchange switches are in is provided by the landlord, who presumably more than makes up for it in rent charged to the exchange participants for their colo space. Several colo providers in the building provide remote hands service to exchange participants, which they presumably charge for. Then there's the donated switches, donated labor, and various other donations, all of which have a cost to their donors. The donors apparently feel that they get enough out of the exchange to justify those expenditures, even if it means they're paying considerably more for the exchange than some of the other participants are, so it all works out, but it's important to note that the exchange isn't free for everybody.
> You could probably still give away FastE ports for free
Any economist, and most practical thinkers generally, will tell you that creating artificial inequities isn't terribly wise. Likewise, incenting bad behaviors like using too-small ports isn't terribly wise. In an exchange, if you want to avoid congestion, and have to charge money for some reason, you probably want to charge the same amount per port, regardless of port speed. But you're a lot better off just solving whatever problem is costing money in the first place.
Assuming there really is a reason to charge for the exchange ports (and certainly not advocating charging just for the sake of charging), a business model based on saving the customers money needs to keep in mind that different customers may have different costs for alternatives. If small amounts of transit are cheaper than large amounts of transit, it may be important to keep the small exchange ports cheaper than the small transit connections. If the amount of money required is small enough that all participants can be charged less than the smallest participant's transit bill, charging a flat rate may work. If more money is required, it's probably better to get the extra from those who are saving more money by peering at the exchange.

-Steve
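A minimal sketch of the comparison Steve walks through, totalling assumed monthly cost components for each option; every figure below is invented for illustration and will differ by market:

transit = {
    "IP transit (200 Mbps at an assumed $40/Mbps)": 40 * 200,
    "circuit to the transit provider": 1500,
    "router interface (amortized)": 300,
    "engineering time": 500,
}
peering = {
    "exchange port": 1000,
    "circuit to the exchange switch": 1500,
    "colo at the exchange": 800,
    "router interface (amortized)": 300,
    "engineering time (switch operator + per-peer)": 1200,
}
peerable_mbps = 200   # assumed traffic that could actually be shifted onto peers

for name, costs in (("transit", transit), ("peering", peering)):
    total = sum(costs.values())
    print(f"{name}: ${total:,}/mo, ${total / peerable_mbps:.2f} per Mbps")

Shrinking the traffic that can actually be shifted onto peers, or raising the per-peer engineering time, can easily flip the result, which is why the port price alone doesn't decide the question.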
* scg@gibbard.org (Steve Gibbard) [Mon 05 Jul 2004, 10:19 CEST]: [..]
The performance arguments are probably more controversial. The arguments are that shortening the path between two networks increases performance, and that removing an extra network in the middle increases reliability. The first argument holds relatively little water, since it's in many cases only the AS Path (not really relevant for packet forwarding performance) that gets shortened, rather than the number of routers or even the number of fiber miles.
"Not really"? Not always, perhaps. But it's more the rule than the exception, I think.
If traffic goes from network A, to network A's router at an exchange point, to network C, that shouldn't be different performance-wise from the traffic going from network A, to Network B's router at the exchange point, to Network C.
Except that, due to "peering games" some companies tend to engage in, the exchange point where A and B exchange traffic may well be in a different country from where A, C and their nearest exchange point is.
Assuming none of the three networks are underprovisioning, the ownership of the router in the middle shouldn't make much difference. The reliability argument is probably more valid -- one less network means one less set of engineers to screw something up, but the big transit networks tend to be pretty reliable these days, and buying transit from two of them should be quite safe.
The correct phrasing is indeed "one less network" and not "one less router." It's rarely one device in my experience. -- Niels.
On Mon, 5 Jul 2004, Niels Bakker wrote:
The correct phrasing is indeed "one less network" and not "one less router." It's rarely one device in my experience.
I'm not sure the number of routers matters much anymore. With more and more MPLS deployment you can't be sure that the path from A to B goes directly from A to B; it might go through F, G, H, I before B :( The path RTT is probably more relevant, isn't it?
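Rough propagation arithmetic behind the "path RTT" point: light in fiber covers roughly 200 km per millisecond one way, so fiber miles dominate and extra router hops add comparatively little (the 50-microsecond per-hop figure below is an assumption):

KM_PER_MS = 200   # light in fiber travels roughly 200 km per millisecond, one way

def rtt_ms(fiber_km, router_hops, per_hop_us=50):
    """Very rough RTT: propagation both ways plus an assumed per-hop forwarding overhead."""
    return 2 * fiber_km / KM_PER_MS + 2 * router_hops * per_hop_us / 1000.0

print(f"50 km path, 4 hops:      {rtt_ms(50, 4):.2f} ms")
print(f"50 km path, 8 hops:      {rtt_ms(50, 8):.2f} ms")
print(f"1500 km detour, 4 hops:  {rtt_ms(1500, 4):.2f} ms")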
On 7/5/04 1:18 AM, "Steve Gibbard" <scg@gibbard.org> wrote:
The performance arguments are probably more controversial. The arguments are that shortening the path between two networks increases performance, and that removing an extra network in the middle increases reliability. The first argument holds relatively little water, since it's in many cases only the AS Path (not really relevant for packet forwarding performance) that gets shortened, rather than the number of routers or even the number of fiber miles. If traffic goes from network A, to network A's router at an exchange point, to network C, that shouldn't be different performance-wise from the traffic going from network A, to Network B's router at the exchange point, to Network C. Assuming none of the three networks are underprovisioning, the ownership of the router in the middle shouldn't make much difference. The reliability argument is probably more valid -- one less network means one less set of engineers to screw something up, but the big transit networks tend to be pretty reliable these days, and buying transit from two of them should be quite safe.
I believe that peering does lead to a more robust network and somewhat better performance. Being heavily peered means that when one of my transit providers suffers a network 'event', I am less affected. Also, just because I'm sitting at a network exchange point (and take my transit there) doesn't mean that's where my transit networks peer. Quite often, I see traffic going to Stockton or Sacramento through one of my transit connections to be delivered to a router just a few cages away at PAIX.
The pricing issues are simpler. There's a cost to transit (which is, to some degree, paying some other network to do your peering for you), and there's a cost to peering. Without a clear qualitative difference between the two, peering needs to be cheaper to make much sense. The costs of transit involve not just what gets paid to the transit provider for the IP transit, but also the circuit to the transit provider, the router interface connecting to the transit provider, engineering time to maintain the connection and deal with the transit provider if they have issues, and so forth. Costs of peering include not just the cost of the exchange port, but also the circuit to get to the exchange switch, sometimes colo in the exchange facility, engineering time to deal with the connection and deal with the switch operator if there are issues, and time spent dealing with each individual peer, both in convincing them to turn the session up, and dealing with problems affecting the session. Even if the port on the exchange switch were free, there would be some scenarios in which peering would not be cheaper than transit.
When we established our connection at PAIX, peering bandwidth was a factor of 20 cheaper than transit. Now they're at parity. Unfortunately, some *IX operators haven't seen fit to become more competitive on pricing to keep peering more economical than average transit pricing. $5000 for an ethernet switch port? It makes me long for the days of throwing ethernet cables over the ceiling to informally peer with other networks in a building. In the 'bad' old days of public exchanges (even the ad hoc ones), most of the problems were with the design and traffic capacity of the equipment itself (not a real problem now), not with actual 'operations'.
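A rough break-even for a $5000/mo port against transit at parity pricing (the $50/Mbps transit figure below is an assumption, not a quote from the message):

port_cost_per_month = 5000
transit_price_per_mbps = 50   # assumed $/Mbps/month for transit

breakeven_mbps = port_cost_per_month / transit_price_per_mbps
print(f"need to move at least {breakeven_mbps:.0f} Mbps onto peers before the port alone "
      f"beats transit (ignoring circuits, colo and engineering time)")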
On Mon, Jul 05, 2004 at 10:55:42AM -0700, joe mcguckin wrote:
$5000 for an ethernet switch port? It makes me long for the days of throwing ethernet cables over the ceiling to informally peer with other networks in a
Throwing ethernet cables over the ceiling does not scale. /vijay
On Jul 5, 2004, at 2:02 PM, vijay gill wrote:
On Mon, Jul 05, 2004 at 10:55:42AM -0700, joe mcguckin wrote:
$5000 for an ethernet switch port? It makes me long for the days of throwing ethernet cables over the ceiling to informally peer with other networks in a
Throwing ethernet cables over the ceiling does not scale.
Sure it does. The question is: "How far does it scale?" Nothing scales to infinity, and very, very few things do not scale past the degenerate case of 1. If you s/ethernet cables/optical fibers/, it scales even further. Especially since this is not being used for all his traffic. Not everyone needs a terabit of exit capacity. Guaranteeing everything you do is close to infinitely scalable is a Bad Idea for people who do not. And even if you do need a terabit of exit capacity, nothing wrong with the occasional OC768 routed through the ceiling. :) -- TTFN, patrick
On Jul 5, 2004, at 5:00 PM, Patrick W Gilmore wrote:
On Jul 5, 2004, at 2:02 PM, vijay gill wrote:
Throwing ethernet cables over the ceiling does not scale.
Sure it does. The question is: "How far does it scale?" Nothing scales to infinity, and very, very few things do not scale past the degenerate case of 1.
You need to take into account all of the aspects of the complexity that you introduce when you throw that fiber over the wall tho. While the fiber installation is simple enough, you have now created other problems: who will maintain it? Who knows it is there? Who knows that it is there in the other organization? Who needs to know about it within your own organization? How is it tracked? Who does the NOC call when it goes bad?

While it may be a single exception to your network architecture, if it is an exception that 100 people need to know about, then I'd argue that it doesn't scale. The fun and games that we had in Ye Olden Days o' the Internet simply are not workable when you are coordinating with hundreds of other employees.

Put another way, scalability can never overlook the human element.

Tony
On Jul 5, 2004, at 8:35 PM, Tony Li wrote:
On Jul 5, 2004, at 5:00 PM, Patrick W Gilmore wrote:
On Jul 5, 2004, at 2:02 PM, vijay gill wrote:
Throwing ethernet cables over the ceiling does not scale.
Sure it does. The question is: "How far does it scale?" Nothing scales to infinity, and very, very few things do not scale past the degenerate case of 1.
You need to take into account all of the aspects of the complexity that you introduce when you throw that fiber over the wall tho. While the fiber installation is simple enough, you have now created other problems: who will maintain it? Who knows it is there? Who knows that it is there in the other organization? Who needs to know about it within your own organization? How is it tracked? Who does the NOC call when it goes bad?
While it may be a single exception to your network architecture, if it is an exception that 100 people need to know about, then I'd argue that it doesn't scale. The fun and games that we had in Ye Olden Days o' the Internet simply are not workable when you are coordinating with hundreds of other employees.
Put another way, scalability can never overlook the human element.
I'm wondering why you think that the fiber over the ceiling tile is somehow less tracked, maintained, monitored, documented, etc., than any other fiber in the network? Put another way: Just because I am not paying someone 1000s of $$ a month to watch it for me does not mean the human element is ignored. In fact, I have seen many cases where people lost track of interconnects where they were paying lots of money for someone else to watch them. So maybe the strange ones are better.... =) If you do not want to throw cables over the ceiling in your network, then by all means do not. I have repeated many times here and elsewhere: Your network, your decision. And for those of us who can track & maintain zero-dollar interconnects, please do not begrudge us the cost savings. -- TTFN, patrick
I'm wondering why you think that the fiber over the ceiling tile is somehow less tracked, maintained, monitored, documented, etc., than any other fiber in the network?
If someone was really concerned about trackability, etc., then I suspect they would invent a number for that cable, put a record in their circuit database, and use their nifty label maker machine to put labels every meter along the cable's length stating the circuit number and NOC contact info. All of that work is still not much more than the effort of stringing the cable, but it makes the whole architecture a lot more scalable. So there is a middle ground between flinging cables around and paying $1000 per month for a cross-connect... Middle grounds are nice places to play in. Lots of variety, lots of possibilities. --Michael Dillon
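A minimal sketch of that middle ground: give the ad-hoc cable a circuit number and a record that the NOC can find later. All field names and values below are invented for illustration; use whatever your inventory system already has:

from dataclasses import dataclass

@dataclass
class CrossConnect:
    circuit_id: str    # number printed on the labels along the cable
    a_side: str        # our rack and port
    z_side: str        # the other party's rack and port
    media: str         # e.g. "cat5e" or "MMF LC-LC"
    installed: str     # date strung
    noc_contact: str   # who to call when it goes dark

ceiling_run = CrossConnect(
    circuit_id="XC-0042",
    a_side="suite 310, rack 7, port gi0/1",
    z_side="suite 312, rack 2, port gi0/3",
    media="cat5e over the ceiling tiles",
    installed="2004-07-05",
    noc_contact="noc@example.net",
)
print(ceiling_run)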
vgill@vijaygill.com (vijay gill) writes:
Throwing ethernet cables over the ceiling does not scale.
i think it's important to distinguish between "things aol and uunet don't think are good for aol and uunet" and "things that aren't good for anybody." what i found through my PAIX experience is that the second tier is really quite deep and broad, and that the first tier doesn't ignore them like their spokesmodels claim they do. what i found in helping to hone the ssni approach is that while public peering "ethernet style" is dead, vni/pni peering is alive and well. anyone who does not agree is free to behave that way. but it's not useful to try to dissuade cooperating adults from peering any way they want to.

the interesting evolutionary aspect to this is that vni/pni peering starting with atm and moving to pni doesn't work at all, because atm by and large has a high cost per bit at the interface, and a low top end, and usually doesn't mandate co-location. but vni/pni peering over 802.1Q usually does succeed, because of the low cost per bit at the interface, the obscenely high top end, and the greater likelihood that the vni parties are co-located and so can switch to pni when the traffic volume warrants it.

i've been told that if i ran a tier-1 i would lose my love for the vni/pni approach, which i think scales quite nicely even when it involves an ethernet cable through the occasional ceiling. perhaps i'll eat these words when and if that promotion comes through. meanwhile, disintermediation is still my favorite word in the internet dictionary. i like it when one's competitors are free to do business with each other, it leads to more and better innovation.

-- Paul Vixie
On Tue, Jul 06, 2004 at 01:43:14AM +0000, Paul Vixie wrote:
vgill@vijaygill.com (vijay gill) writes:
Throwing ethernet cables over the ceiling does not scale.
i think it's important to distinguish between "things aol and uunet don't think are good for aol and uunet" and "things that aren't good for anybody."
what i found through my PAIX experience is that the second tier is really quite deep and broad, and that the first tier doesn't ignore them like their spokesmodels claim they do.
Paul, I think you took a left at the pass and went down the wrong road here. I am not saying ethernet doesn't scale or even vni/pni doesn't scale, but the mentality embodied in the approach "throw it over the wall" doesn't bode well if you are to scale.
not agree is free to behave that way. but it's not useful to try to dissuade cooperating adults from peering any way they want to.
i've been told that if i ran a tier-1 i would lose my love for the vni/pni approach, which i think scales quite nicely even when it involves an ethernet cable through the occasional ceiling. perhaps i'll eat these words when and if that promotion comes through. meanwhile, disintermediation is still my favorite word in the internet dictionary. i like it when one's competitors
As we have seen before in previous lives, and I'm pretty sure stephen stuart will step in, normalizing the "throw the ethernet over the wall" school of design just leads to an incredible amount of pain when trying to operate, run and actually document what you've got.

I thought it was illustrative to take a look at some of the other messages in this thread. People rushed in to argue the scale comment when the actual heart of the matter was something else entirely, which apparently only Tony managed to get.

In any case, I am going to pull a randy here and strongly encourage my competitors to deploy this "ethernet over ceiling tile" engineering methodology.

/vijay
In any case, I am going to pull a randy here and strongly encourage my competitors to deploy this "ethernet over ceiling tile" engineering methodology.
Funny thing is, there are lots of competitors doing what randy "strongly encourages" them to do, and they stay in business. I think it's all a question of scale. If you are at the top end of the scale, or if you seriously intend to get to the top end of the scale, then it's good to be anal about these things and strive for the absolute best practice and most scalable engineering. On the other hand, there is limited room at the top, and most people are happy to run a business on a somewhat smaller scale. One size does not fit all.

--Michael Dillon
In a message written on Tue, Jul 06, 2004 at 04:32:14AM +0000, vijay gill wrote:
Paul, I think you took a left at the pass and went down the wrong road here. I am not saying ethernet doesn't scale or even vni/pni doesn't scale, but the mentality embodied in the approach "throw it over the wall" doesn't bode well if you are to scale.
"Throw it over the wall" can be interpreted many ways. Everyone running their cable wherever they want with no controls, and abandoning it all in place makes a huge mess, and is one way to think about it. However, there are lots of telco MMR's, with either rows of racks or cages where every party runs their own fiber. Typically trays are provided in the colo cost, and the parties run the fiber in the trays and use the fiber management, label their jumpers, and more often than not pull out unused cables. If cages are involved "dropping the cable over the cage" is a common practice. Walking into these facilities you find they are generally neat and organized. I believe the problem Vijay is referencing isn't "throw it over the wall", but rather where people have to hide the fact that they are throwing it over the wall. When some colo providers want to do things like charge a 0-mile local loop for a fiber across the room people think it's too much, and run their own "over the wall" fiber. However since it's technically not allowed it's hidden, unlabeled, abandoned when unused, and creates a huge mess. -- Leo Bicknell - bicknell@ufp.org - CCIE 3440 PGP keys at http://www.ufp.org/~bicknell/ Read TMBG List - tmbg-list-request@tmbg.org, www.tmbg.org
--On Tuesday, July 06, 2004 08:46 -0400 Leo Bicknell <bicknell@ufp.org> wrote:
Everyone running their cable wherever they want with no controls, and abandoning it all in place makes a huge mess, and is one way to think about it.
[snipped]
I believe the problem Vijay is referencing isn't "throw it over the wall", but rather where people have to hide the fact that they are throwing it over the wall. When some colo providers want to do things like charge a 0-mile local loop for a fiber across the room people think it's too much, and run their own "over the wall" fiber. However since it's technically not allowed it's hidden, unlabeled, abandoned when unused, and creates a huge mess.
Thanks. Precisely the issue. With humans involved in this, there is a tendency to sometimes hack around a problem and then leave it in place. I know I am susceptible to this and have to be on guard against this mentality at all times. And I've seen plenty of this in various orgs. The key here is to maintain an engineering discipline and be on constant guard against 'just this once' kind of thought. There should be no negotiations with yourself. Even the best of intentions lead to massive entropy when doing hacks around issues. Temporary fixes aren't. /vijay
Thanks. Precisely the issue. With humans involved in this, there is a tendency to sometimes hack around a problem and then leave it in place. I know I am susceptible to this and have to be on guard against this mentality at all times. And I've seen plenty of this in various orgs. The key here is to maintain an engineering discipline and be on constant guard against 'just this once' kind of thought. There should be no negotiations with yourself.
Even the best of intentions lead to massive entropy when doing hacks around issues.
Temporary fixes aren't.
/vijay
Setting aside the issue of abandoning media after you stop using it, a cable run based on a handshake between two tenants in a telco hotel CAN lead to nightmares when it goes down. On the other hand, if you figure out a way to document it, and have field support lined up, it may turn out to be more easily restored than an "official" interconnect. :-)
On Tue, 06 Jul 2004 08:46:49 EDT, Leo Bicknell <bicknell@ufp.org> said:
Everyone running their cable wherever they want with no controls, and abandoning it all in place makes a huge mess, and is one way to think about it.
While clearing out the space that eventually ended up being repurposed for a supercomputer, we encountered a small run of Ethernet Classic - the thickwire stuff. We never did figure out how or why it got there (I doubt that anybody stashed it down there just for storage stretched straight out, with 3 vampire taps still attached), as the location in question was still cow pasture when we decided that all new cable would be thinwire (and we certainly had plenty of THAT under the floor, buried under all the cat-5...) And we're a small enough shop with low enough personnel turnover that rounding up *all* the possible co-conspirators and getting somebody to admit "Ahh... now there's a story attached to that wire..." usually doesn't take more than 3 or 4 pitchers of Guinness... ;) Which almost begs the question - what's the oddest "WTF??" anybody's willing to admit finding under a raised floor, or up in a ceiling or cable chase or similar location? (Feel free to change names to protect the guilty if need be....:)
In message <200407070643.i676hGFr029494@turing-police.cc.vt.edu>, Valdis.Kletnieks@vt.edu writes:
Which almost begs the question - what's the oddest "WTF??" anybody's willing to admit finding under a raised floor, or up in a ceiling or cable chase or similar location? (Feel free to change names to protect the guilty if need be....:)
Water -- about 8" of it... We had a two-level area below the raised floor in the computer room. The deeper area was flooded; fortunately, there was only solid insulated cables in that section. If the water had reached the shallower area, where there were outlets, connectors, etc., it would have been a different story. --Steve Bellovin, http://www.research.att.com/~smb
Which almost begs the question - what's the oddest "WTF??" anybody's willing to admit finding under a raised floor, or up in a ceiling or cable chase or similar location? (Feel free to change names to protect the guilty if need be....:)
Water -- about 8" of it...
Air -- about 8 feet of it... In a comms room in a tunnel under London. Luckily for those working there, there was a ladder stored there too. The term 'raised floor' was never so apt. -- Ian Dickinson Development Engineer PIPEX ian.dickinson@pipex.net http://www.pipex.net
A minitel - in the United States!

Scott C. McGrath

On Thu, 8 Jul 2004, Ian Dickinson wrote:
Which almost begs the question - what's the oddest "WTF??" anybody's willing to admit finding under a raised floor, or up in a ceiling or cable chase or similar location? (Feel free to change names to protect the guilty if need be....:)
Water -- about 8" of it...
Air -- about 8 feet of it... In a comms room in a tunnel under London. Luckily for those working there, there was a ladder stored there too. The term 'raised floor' was never so apt. -- Ian Dickinson Development Engineer PIPEX ian.dickinson@pipex.net http://www.pipex.net
On Thu, 8 Jul 2004, Steven M. Bellovin wrote:
Water -- about 8" of it...
We had a two-level area below the raised floor in the computer room. The deeper area was flooded; fortunately, there were only solid
snakes in the water, which had swum (swam?) in through the entrance facility for the building electric... 'fun'!
On Wednesday 07 July 2004 02:43 am, Valdis.Kletnieks@vt.edu wrote:
Which almost begs the question - what's the oddest "WTF??" anybody's willing to admit finding under a raised floor, or up in a ceiling or cable chase or similar location? (Feel free to change names to protect the guilty if need be....:)
Not really a WTF. At my last job while working at an earthstation in Texas where I had some equipment, I looked up from the raised floor and found myself staring at a scorpion. Being that I am from the Northeast where we don't seem to have those things, it pretty much scared the heck out of me. Gave the techs at the station a good laugh. -Patrick -- Patrick Muldoon Network/Software Engineer INOC (http://www.inoc.net) PGPKEY (http://www.inoc.net/~doon) Key ID: 0x370D752C
Select * from users where clue > 0
0 Rows Returned
On Thu, 8 Jul 2004, Patrick Muldoon wrote:
At my last job while working at an earthstation in Texas where I had some equipment, I looked up from the raised floor and found myself staring at a scorpion. Being that I am from the Northeast where we don't seem to have those things, it pretty much scared the heck out of me. Gave the techs at the station a good laugh.
Sounds like they need to make cowboy boots standard attire down there :)
On Wednesday 07 July 2004 02:43 am, Valdis.Kletnieks@vt.edu wrote:
Which almost begs the question - what's the oddest "WTF??" anybody's willing to admit finding under a raised floor, or up in a ceiling or cable chase or similar location? (Feel free to change names to protect the guilty if need be....:)
Raccoons. Came in late one night and heard noises that I didn't really expect. Turns out the facility had diverse entrances and multiple conduits - and one of them had been exposed outside due to some erosion and had been damaged. We found little surprises for quite a while after that.

Undergarments and shoes. His and hers, but no other clothing.

A crutch. Just one.

On the "not under the floor" side: I was at a facility that had an enormous amount of open floor space and far too much air (wishful thinking). The ops staff moved all the grated tiles to a central area and used to play adult-sized air hockey complete with a rubber puck and sticks... but only late at night.

-John
On Fri, 9 Jul 2004, John Ferriby wrote:
The ops staff moved all the grated tiles to a central area and used to play adult-sized air hockey complete with a rubber puck and sticks... but only late at night.
'login;' ran a story about 4-5 years ago about some machine room in the UK (I think), something about playing cricket on Friday evenings... until someone hit one out of the 'park', tripping the emergency power off button for the machine room :(
Careful with those invocations, Vijay.
As we have seen before in previous lives, and I'm pretty sure stephen stuart will step in, normalizing the "throw the ethernet over the wall" school of design just leads to an incredible amount of pain when trying to operate, run and actually document what you've got.
The various replies have largely covered what I would say; that it's all about the OA&M.

Yes, in the previous life that you mention (known currently as "the good old days"), some pain was suffered. If there is a spectrum roughly described as:

- have no standards or documentation (and none of the pain of developing, following, maintaining them) and spend all your time doing discovery each time you have a problem to solve (so that all of your pain results from having no operational consistency)

- have standards that you don't follow and documentation that you don't maintain and constantly trip over exception cases (suffer pain on both ends of the spectrum)

- have standards that are followed and documentation that is maintained and achieve a high level of operational consistency (this is widely regarded as "better")

then the pain that you describe came from moving along that spectrum. The pain came from moving, not necessarily from the direction that we chose. We moved in the direction that we did because the goals that we set for ourselves demanded it. Hopefully the folks still there continue to reap the benefits of that work.

Each organization chooses (whether consciously or not) a point in the spectrum described above and operates there. They compete in the marketplace without that choice being a significant differentiator; an organization that lacks design skills might compensate by being able to debug problems quickly, for example, such that externally measurable metrics that drive purchasing decisions are roughly equivalent. There is no One Truth; you try to make your organizational strengths work for you to maximum benefit, while not getting tripped up by your weaknesses. What *is* a differentiator is how well you execute at the point in the spectrum that you've chosen.

That choice is made over and over again within the lifecycle of an organization. The wheel turns, and administrations come and go, moving one way or the other, using "the previous administration did X and we must do Y" as justification for desired resources to travel in either direction (or to stay in one place, with the appropriate label engineering to make it look as though motion will occur). All the while, the benefits and drawbacks of various aspects of various choices will be debated on the NANOG mailing list.

There's an analogy to samsara hiding in there, for those that like analogies. I'd elaborate, but it's time to take the dogma for a walk.

Stephen
participants (26)

- Bill Woodcock
- Christopher L. Morrow
- Ian Dickinson
- joe mcguckin
- John Ferriby
- Leo Bicknell
- Mark Borchers
- Michael Smith
- Michael.Dillon@radianz.com
- Mikael Abrahamsson
- Niels Bakker
- Patrick Muldoon
- Patrick W Gilmore
- Paul Vixie
- Randy Bush
- Richard A Steenbergen
- Scott McGrath
- Stephen J. Wilcox
- Stephen Stuart
- Steve Gibbard
- Steven M. Bellovin
- Tom (UnitedLayer)
- Tony Li
- Troy Davis
- Valdis.Kletnieks@vt.edu
- vijay gill