Michael Dillon is spot on when he states the following (quotation below), although he could have gone another step in suggesting how the distance insensitivity of fiber could be further leveraged:
The high speed fibre in Metro Area Networks will tie it all together with the result that for many applications, it won't matter where the servers are.
In fact, those same servers, and a host of other storage and network elements, can be returned to the LAN rooms and closets of most commercial buildings from whence they originally came prior to the large-scale data center consolidations of the current millennium, once organizations decide to free themselves of the 100-meter constraint imposed by UTP-based LAN hardware and replace those LANs with collapsed fiber backbone designs that attach to remote switches (which could be either in-building or remote), instead of the minimum two switches on every floor that has become customary today.

We often discuss the empowerment afforded by optical technology, but we've barely scratched the surface of its ability to effect meaningful architectural changes. The earlier prospects of creating consolidated data centers were once near-universally considered timely and efficient, and they still are in many respects. However, now that the problems associated with a/c and power have entered into the calculus, some data center design strategies are beginning to look more like anachronisms that have been caught in a whip-lash of rapidly shifting conditions, and in a league with the constraints that are imposed by the now-seemingly-obligatory 100-meter UTP design.

Frank A. Coluccio
DTI Consulting Inc.
212-587-8150 Office
347-526-6788 Mobile

On Sat Mar 29 13:57 , sent:
Can someone please, pretty please with sugar on top, explain the point behind high power density?
It allows you to market your operation as a "data center". If you spread it out to reduce power density, then the logical conclusion is to use multiple physical locations. At that point you are no longer centralized.
In any case, a lot of people are now questioning the traditional data center model from various angles. The time is ripe for a paradigm change. My theory is that the new paradigm will be centrally managed, because there is only so much expertise to go around. But the racks will be physically distributed, in virtually every office building, because some things need to be close to local users. The high speed fibre in Metro Area Networks will tie it all together with the result that for many applications, it won't matter where the servers are. Note that the Google MapReduce, Amazon EC2, Hadoop trend will make it much easier to place an application without worrying about the exact locations of the physical servers.
Back in the old days, small ISPs set up PoPs by finding a closet in the back room of a local store to set up modem banks. In the 21st century folks will be looking for corporate data centers with room for a rack or two of multicore CPUs running Xen, and OpenSolaris SANs running ZFS/raidz providing iSCSI targets to the Xen VMs.
--Michael Dillon
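To make the MapReduce/Hadoop point above concrete, here is a minimal Python sketch of the programming model: a toy, in-process word count, not any framework's actual API. The application only supplies map and reduce functions; which physical server runs each piece is the scheduler's problem, not the application's.

    from collections import defaultdict

    def map_phase(doc_id, text):
        # Emit (word, 1) pairs; a real framework decides where this runs.
        for word in text.split():
            yield word.lower(), 1

    def reduce_phase(word, counts):
        # Sum the partial counts for one key, wherever they were produced.
        return word, sum(counts)

    def run_job(documents):
        # Toy single-process "cluster": shuffle intermediate pairs by key, then reduce.
        shuffled = defaultdict(list)
        for doc_id, text in documents.items():
            for word, count in map_phase(doc_id, text):
                shuffled[word].append(count)
        return dict(reduce_phase(w, c) for w, c in shuffled.items())

    print(run_job({"a": "it won't matter where the servers are",
                   "b": "the servers are in many buildings"}))

In a real deployment the shuffle step crosses the network, which is exactly where the locality and bandwidth objections raised later in this thread come in.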
On Sat, 29 Mar 2008, Frank Coluccio wrote:
In fact, those same servers, and a host of other storage and network elements, can be returned to the LAN rooms and closets of most commercial buildings from whence they originally came prior to the
How does that work? So now we buy a whole bunch of tiny gensets, and a whole bunch of baby UPSen and smaller cooling units to support little datacenters? Not to mention diverse paths to each point.. Didn't we (the customers) try that already and realize that it's rather unmanageable? I suppose the maintenance industry would love the surge in extra contracts to keep all the gear running....

..david

---
david raistrick http://www.netmeister.org/news/learn2quote.html
drais@icantclick.org http://www.expita.com/nomime.html
On Sat, 29 Mar 2008, Frank Coluccio wrote:
We often discuss the empowerment afforded by optical technology, but we've barely scratched the surface of its ability to effect meaningful architectural changes.
If you talk to the server people, they have an issue with this: latency. I've talked to people who have collapsed layers in their LAN because they can see performance degradation for each additional switch that packets have to pass through on their NFS mount. Yes, higher speeds mean lower serialisation delay, but there is still a lookup time involved, and 10GE is substantially more expensive than GE.

--
Mikael Abrahamsson email: swmike@swm.pp.se
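A rough back-of-envelope illustration of that trade-off in Python; the per-switch lookup figure below is an assumed ballpark, not a measurement of any particular product:

    FRAME_BITS = 1500 * 8     # a full-size Ethernet payload frame
    SWITCH_LATENCY_S = 5e-6   # assumed ~5 microseconds of lookup/forwarding per hop

    def one_way_delay_us(num_switches, link_bps):
        # Store-and-forward model: each hop re-serialises the frame
        # and then adds its own lookup/forwarding latency.
        per_hop = FRAME_BITS / link_bps + SWITCH_LATENCY_S
        return num_switches * per_hop * 1e6

    for hops in (1, 3, 5):
        print(f"{hops} hop(s): {one_way_delay_us(hops, 1e9):.1f} us at GE, "
              f"{one_way_delay_us(hops, 10e9):.1f} us at 10GE")

Moving to 10GE shrinks the serialisation term by a factor of ten, but the per-hop lookup term stays, so collapsing switch layers still buys latency even at the higher speed.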
On Sat, Mar 29, 2008 at 3:04 PM, Frank Coluccio <frank@dticonsulting.com> wrote:
Michael Dillon is spot on when he states the following (quotation below), although he could have gone another step in suggesting how the distance insensitivity of fiber could be further leveraged:
Dillon is not only not spot on, Dillon is quite a bit away from being spot on. Read on.
The high speed fibre in Metro Area Networks will tie it all together with the result that for many applications, it won't matter where the servers are.
In fact, those same servers, and a host of other storage and network elements, can be returned to the LAN rooms and closets of most commercial buildings from whence they originally came prior to the large-scale data center consolidations of the current millennium, once organizations decide to free themselves of the 100-meter constraint imposed by UTP-based LAN hardware and replace those LANs with collapsed fiber backbone designs that attach to remote switches (which could be either in-building or remote), instead of the minimum two switches on every floor that has become customary today.
Here is a little hint - most distributed applications in traditional jobsets tend to work best when they are close together. Unless you can map those jobsets onto truly partitioned algorithms that work on local copy, this is a _non starter_.
We often discuss the empowerment afforded by optical technology, but we've barely scratched the surface of its ability to effect meaningful architectural changes.
No matter how much optical technology you have, it will tend to be more expensive to run, have higher failure rates, and use more power, than simply running fiber or copper inside your datacenter. There is a reason most people, who are backed up by sober accountants, tend to cluster stuff under one roof.
The earlier prospects of creating consolidated data centers were once near-universally considered timely and efficient, and they still are in many respects. However, now that the problems associated with a/c and power have entered into the calculus, some data center design strategies are beginning to look more like anachronisms that have been caught in a whip-lash of rapidly shifting conditions, and in a league with the constraints that are imposed by the now-seemingly-obligatory 100-meter UTP design.
Frank, let's assume we have abundant dark fiber, and an 800-strand ribbon fiber cable costs the same as a UTP run. Can you get me some quotes from a few folks about terminating and patching 800 strands x2?

/vijay
Frank A. Coluccio DTI Consulting Inc. 212-587-8150 Office 347-526-6788 Mobile
On Sat Mar 29 13:57 , sent:
Can someone please, pretty please with sugar on top, explain the point behind high power density?
It allows you to market your operation as a "data center". If you spread it out to reduce power density, then the logical conclusion is to use multiple physical locations. At that point you are no longer centralized.
In any case, a lot of people are now questioning the traditional data center model from various angles. The time is ripe for a paradigm change. My theory is that the new paradigm will be centrally managed, because there is only so much expertise to go around. But the racks will be physically distributed, in virtually every office building, because some things need to be close to local users. The high speed fibre in Metro Area Networks will tie it all together with the result that for many applications, it won't matter where the servers are. Note that the Google MapReduce, Amazon EC2, Hadoop trend will make it much easier to place an application without worrying about the exact locations of the physical servers.
Back in the old days, small ISPs set up PoPs by finding a closet in the back room of a local store to set up modem banks. In the 21st century folks will be looking for corporate data centers with room for a rack or two of multicore CPUs running Xen, and OpenSolaris SANs running ZFS/raidz providing iSCSI targets to the Xen VMs.
--Michael Dillon
Here is a little hint - most distributed applications in traditional jobsets, tend to work best when they are close together. Unless you can map those jobsets onto truly partitioned algorithms that work on local copy, this is a _non starter_.
Let's make it simple and say it in plain English. The users of services have made the decision that it is "good enough" to be a user of a service hosted in a data center that is remote from the client. Remote means in another building in the same city, or in another city.

Now, given that context, many of these "good enough" applications will run just fine if the "data center" is no longer in one physical location, but distributed across many. Of course, as you point out, one should not be stupid when designing such distributed data centers or when setting up the applications in them. I would assume that every data center has local storage available using some protocol like iSCSI and probably over a separate network from the external client access. That right there solves most of your problems of traditional jobsets.

And secondly, I am not suggesting that everybody should shut down big data centers or that every application should be hosted across several of these distributed data centers. There will always be some apps that need centralised scaling. But there are many others that can scale in a distributed manner, or at least use distributed mirrors in a failover scenario.
No matter how much optical technology you have, it will tend to be more expensive to run, have higher failure rates, and use more power, than simply running fiber or copper inside your datacenter. There is a reason most people, who are backed up by sober accountants, tend to cluster stuff under one roof.
Frankly I don't understand this kind of statement. It seems obvious to me that high-speed metro fibre exists and corporate IT people already have routers and switches and servers in the building, connected to the metro fiber. Also, the sober accountants do tend to agree with spending money on backup facilities to avoid the risk of single points of failure. Why should company A operate two data centers, and company B operate two data centers, when they could outsource it all to ISP X running one data center in each of the two locations (company A's and company B's)?

In addition, there is a trend to commoditize the whole data center. Amazon, with EC2 and S3, is not the only example of a company that does not offer any kind of colocation yet still lets you host your apps out of its data centers. I believe that this trend will pick up steam, and that as the corporate market begins to accept running virtual servers on top of a commodity infrastructure, there is an opportunity for network providers to branch out and be specialists not only in the big consolidated data centers, but also in running many smaller data centers that are linked by fast metro fiber.

--Michael Dillon
On Mon, Mar 31, 2008 at 11:24 AM, <michael.dillon@bt.com> wrote:
Let's make it simple and say it in plain English. The users of services have made the decision that it is "good enough" to be a user of a service hosted in a data center that is remote from the client. Remote means in another building in the same city, or in another city.
Now, given that context, many of these "good enough" applications will run just fine if the "data center" is no longer in one physical location, but distributed across many. Of course, as you point out, one should not be stupid when designing such distributed data centers or when setting up the applications in them.
I think many folks have gotten used to 'my application runs in a datacenter' (perhaps even a remote datacenter), but they still want performance from their application. If the DC is 20ms away from their desktop (say they are in NYC and the DC is in IAD, which is about 20ms distant on a good run), things are still 'snappy'. If the application uses bits/pieces from a farm of remote datacenters (also 20ms away or so, so anywhere from NYC to ATL to CHI away from IAD), latency inside that application is now important... something like a DB-heavy app will really suffer under this scenario if locality of the database's data isn't kept in mind as well. Making multiple +20ms hops around for information is really going to impact user experience of the application, I think.

The security model as well would be highly interesting in this sort of world.. both physical security (line/machine/cage) and information security (data over the links). This seems to require fairly quick encryption in a very distributed environment where physical security isn't very highly assured.
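As a hypothetical illustration of why the DB-heavy case suffers, a few lines of Python modelling a page whose queries run sequentially; the query count, service time and RTTs are assumptions, not measurements:

    def page_time_ms(queries, rtt_ms, query_service_ms=1.0):
        # Sequential queries: every one pays the full round trip
        # before the next can be issued.
        return queries * (rtt_ms + query_service_ms)

    for label, rtt in (("same rack", 0.2), ("NYC-IAD", 20.0), ("two metro hops", 40.0)):
        print(f"{label:>15}: 50 sequential queries -> {page_time_ms(50, rtt):.0f} ms")

Fifty chatty round trips that cost about 60 ms against a database in the same room cost over a second once each one pays a 20 ms metro RTT, which is the point above about keeping locality of the database's data in mind.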
I would assume that every data center has local storage available using some protocol like iSCSI and probably over a separate network from the external client access. That right there solves most of your problems of traditional jobsets. And secondly, I am not suggesting that everybody should shut down big data centers or that every application should be hosted across several of these distributed data centers. There will always be some apps that need centralised scaling. But there are many others that can scale in a distributed manner, or at least use distributed mirrors in a failover scenario.
Ah, like the distributed DR sites financials use? (I've heard of designs, perhaps from this list even, of distributed DCs 60+ miles apart with iSCSI on fiber between the sites... pushing backup copies of transaction data to the DR facility.) That doesn't help in scenarios with highly interactive data sets, or lower latency requirements for applications... I also remember an SAP (I think) installation that got horribly unhappy with the database/front-end parts a few cities apart from each other over an 'internal' network...
In addition, there is a trend to commoditize the whole data center. Amazon EC2 and S3 is not the only example of a company who does not offer any kind of colocation, but you can host your apps out of their data centers. I believe that this trend will pick up
ASPs were a trend in the late '90s; for some reason things didn't work out then (the reason isn't really important now). Today and going forward, some things make sense to outsource in this manner. I'm not sure that customer-critical data or data with high change rates are it, though, and certainly nothing that's critical to your business from an IP perspective, at least not without lots of security thought/controls.

When working at a large networking company we found it really hard to get people to move their applications out from under their desk (yes, literally) and into a production datacenter... even with offers of mostly free hardware and management of systems (so less internal budget used). Some of that was changing when I left, but certainly not quickly.

It's an interesting proposition, and the DCs in question were owned by the company in question. I'm not sure about moving off to another company's facilities though... scary security problems result.

-Chris
On Mon, Mar 31, 2008 at 8:24 AM, <michael.dillon@bt.com> wrote:
Here is a little hint - most distributed applications in traditional jobsets, tend to work best when they are close together. Unless you can map those jobsets onto truly partitioned algorithms that work on local copy, this is a _non starter_.
Let's make it simple and say it in plain English. The users of services have made the decision that it is "good enough" to be a user of a service hosted in a data center that is remote from the client. Remote means in another building in the same city, or in another city.
Try reading for comprehension. The users of services have made the decision that it is good enough to be a user of a service hosted in a datacenter, and thanks to the wonders of AJAX and pipelining, you can even get snappy performance. What the users haven't signed up for is the massive amounts of scatter gathers that happen _behind_ the front end. Eg, I click on a web page to log in. The login process then kicks off a few authentication sessions with servers located halfway around the world. Then you do the data gathering, 2 phase locks, distributed file systems with the masters and lock servers all over the place. Your hellish user experience, let me SHOW YOU IT.
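A hypothetical sketch in Python of why that behind-the-front-end scatter-gather hurts once its legs are far apart; the call counts and RTTs are invented assumptions chosen only to show the shape of the problem:

    def backend_latency_ms(legs_rtt_ms, dependent=False):
        # A parallel fan-out costs roughly its slowest leg; a dependent
        # chain (auth, then locks, then data) pays every leg in sequence.
        return sum(legs_rtt_ms) if dependent else max(legs_rtt_ms)

    same_room = [1, 1, 1, 1, 1, 1]         # six backend calls under one roof
    spread_out = [20, 20, 40, 40, 20, 80]  # the same calls scattered across a metro/WAN

    for name, legs in (("one roof", same_room), ("distributed", spread_out)):
        print(f"{name}: parallel fan-out ~{backend_latency_ms(legs)} ms, "
              f"dependent chain ~{backend_latency_ms(legs, dependent=True)} ms")

AJAX and pipelining can hide one slow round trip between the browser and the front end; they cannot hide a dependent chain of slow round trips behind the front end.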
Now, given that context, many of these "good enough" applications will run just fine if the "data center" is no longer in one physical location, but distributed across many. Of course, as you point out, one should not be stupid when designing such distributed data centers or when setting up the applications in them.
Other than that minor handwaving, we are all good. Turns out that designing such distributed datacenters and setting up applications that you just handwaved away is a bit harder than it looks. I eagerly await papers on distributed database transactions with cost estimates for a distributed datacenter model vs. a traditional model.
I would assume that every data center has local storage available using some protocol like iSCSI and probably over a separate network from the external client access. That right there solves most of your problems of traditional jobsets. And secondly, I am not suggesting that everybody should shut down big data centers or that every application should be hosted across several of these distributed data centers.
See above. That right there doesn't quite solve most of the problems of traditional jobsets, but it's kind of hard to hear with the wind in my ears.
There will always be some apps that need centralised scaling. But there are many others that can scale in a distributed manner, or at least use distributed mirrors in a failover scenario.
Many many others indeed.
No matter how much optical technology you have, it will tend to be more expensive to run, have higher failure rates, and use more power, than simply running fiber or copper inside your datacenter. There is a reason most people, who are backed up by sober accountants, tend to cluster stuff under one roof.
Frankly I don't understand this kind of statement. It seems obvious to me that high-speed metro fibre exists and corporate IT people already have routers and switches and servers in the building, connected to the metro fiber. Also, the sober accountants do tend to agree with spending money on backup facilities to avoid the risk of single points of failure. Why should company A operate two data centers, and company B operate two data centers, when they could outsource it all to ISP X running one data center in each of the two locations (Company A and Company B).
I guess I can try to make it clearer by example: look at the cross-sectional bandwidth availability of a datacenter, now compare and contrast what it would take to pull it apart by a few tens of miles and conduct the cost comparison. /vijay
In addition, there is a trend to commoditize the whole data center. Amazon EC2 and S3 is not the only example of a company who does not offer any kind of colocation, but you can host your apps out of their data centers. I believe that this trend will pick up steam and that as the corporate market begins to accept running virtual servers on top of a commodity infrastructure, there is an opportunity for network providers to branch out and not only be specialists in the big consolidated data centers, but also in running many smaller data centers that are linked by fast metro fiber.
--Michael Dillon
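A rough, hypothetical rendering of the cross-sectional bandwidth comparison vijay raises above, in Python; every figure is an assumed placeholder, chosen only to show how fast the link count grows when a fabric is pulled apart across a metro distance:

    RACKS = 200                 # assumed racks in the facility
    GBPS_PER_RACK = 20          # assumed uplink per rack into the fabric
    BISECTION_FRACTION = 0.5    # assume half of all traffic crosses the split
    WAVE_GBPS = 10              # 10GE waves between the two halves

    bisection_gbps = RACKS * GBPS_PER_RACK * BISECTION_FRACTION
    waves_needed = -(-int(bisection_gbps) // WAVE_GBPS)   # ceiling division
    print(f"~{bisection_gbps:.0f} Gbps across the split -> {waves_needed} x 10G metro waves,")
    print("each needing optics, cross-connects and possibly DWDM at both ends.")

Inside one room that bandwidth is patch cords and switch fabric; across a few tens of miles every one of those waves is an optic, a patch panel position and possibly a DWDM channel at each end, which is the cost comparison being asked for.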
Eg, I click on a web page to log in. The login process then kicks off a few authentication sessions with servers located halfway around the world. Then you do the data gathering, 2 phase locks, distributed file systems with the masters and lock servers all over the place. Your hellish user experience, let me SHOW YOU IT.
I know that this kind of hellish user experience exists today, but in those situations the users are not happy with it, and I don't think anybody would consider it to be a well-architected application.

But let's use two concrete examples. One is a city newspaper hosted in a local ISP data center which interacts with a database at the newspaper's premises. The other is an Internet retailer hosted in the same local ISP data center which basically runs standalone except for sending shipping orders to a fulfillment center in another city, and the nightly site updates uploaded by the store owner after returning home from the dayjob. Now, move the retailer half a mile closer to the city center in distributed pod 26 of a distributed ISP data center, and move the newspaper to pod 11, which just happens to be on their own premises because they have outsourced their data center to the ISP who runs these distributed data center pods. For the life of me, I can't see any technical issue with this.

Obviously, it doesn't suit the customers who can't fit into a single pod, because they risk creating a scenario like you describe. I'm not foolish enough to suggest that one size fits all. All I am suggesting is that there is room for a new type of data center that mitigates the power and cooling issues by spreading out into multiple pod locations instead of clustering everything into one big blob.
Other than that minor handwaving, we are all good. Turns out that designing such distributed datacenters and setting up applications that you just handwaved away is a bit harder than it looks. I eagerly await papers on distributed database transactions with cost estimates for a distributed datacenter model vs. a traditional model.
For the big applications which cannot fit inside a single pod, I expect the Amazon EC2/S3 model to influence the way in which they approach decentralized scaling. And in that case, when these people figure out the details, then the distributed pod data center architecture can support this model just as easily as the big blob architecture. I'm not holding my breath waiting for papers because in the real world, people are going with what works. The scientific world has bought into the grid architecture, or the so-called supercomputer cluster model. Everyone else is looking to Google MapReduce/BigTable, Amazon EC2/S3, Yahoo Hadoop, Xen virtualization and related technologies.
See above. That right there doesn't quite solve most of the problems of traditional jobsets, but it's kind of hard to hear with the wind in my ears.
This is NANOG. Traditional jobsets here are web servers, blogs/wikis, Internet stores, Content-management sites like CNN, on-demand video, etc. The kind of things that our customers run out of the racks that they rent. Tell me again how on-demand video works better in one big data center?
I guess I can try to make it clearer by example: look at the cross-sectional bandwidth availability of a datacenter, now compare and contrast what it would take to pull it apart by a few tens of miles and conduct the cost comparison.
It would be pretty dumb to try and pull apart a big blob architecture and convert it into a distributed pod architecture. But it would be very clever to build some distributed pods and put new customers in there. --Michael Dillon
--On March 29, 2008 5:04:01 PM -0500 Frank Coluccio <frank@dticonsulting.com> wrote:
Michael Dillon is spot on when he states the following (quotation below), although he could have gone another step in suggesting how the distance insensitivity of fiber could be further leveraged:
The high speed fibre in Metro Area Networks will tie it all together with the result that for many applications, it won't matter where the servers are.
In fact, those same servers, and a host of other storage and network elements, can be returned to the LAN rooms and closets of most commercial buildings from whence they originally came prior to the large-scale data center consolidations of the current millennium, once organizations decide to free themselves of the 100-meter constraint imposed by UTP-based LAN hardware and replace those LANs with collapsed fiber backbone designs that attach to remote switches (which could be either in-building or remote), instead of the minimum two switches on every floor that has become customary today.
Yeah except in a lot of areas there is no MAN, and the ILECs want to bend you over for any data access. I've no idea how well the MAN idea is coming along in various areas, but you still have to pay for access to it somehow, and that adds to overhead. Which leads to attempts to gain efficiency through centralization and increased density.
We often discuss the empowerment afforded by optical technology, but we've barely scratched the surface of its ability to effect meaningful architectural changes. The earlier prospects of creating consolidated data centers were once near-universally considered timely and efficient, and they still are in many respects. However, now that the problems associated with a/c and power have entered into the calculus, some data center design strategies are beginning to look more like anachronisms that have been caught in a whip-lash of rapidly shifting conditions, and in a league with the constraints that are imposed by the now-seemingly-obligatory 100-meter UTP design.
In order for the MAN scenarios to work, though, access has to be pretty cheap and fairly ubiquitous. Last I checked, though, making a trench was a very messy, very expensive process. So MANs are great once they're installed, but those installing/building them will want to recoup their large investments.
On Tue, 01 Apr 2008 16:48:47 MDT, Michael Loftis said:
Yeah except in a lot of areas there is no MAN, and the ILECs want to bend you over for any data access. I've no idea how well the MAN idea is coming along in various areas, but you still have to pay for access to it somehow, and that adds to overhead. Which leads to attempts to gain efficiency through centralization and increased density.
I doubt we'll ever see the day when running gigabit across town becomes cost effective when compared to running gigabit to the other end of your server room/cage/whatever.
Speaking of running gig long distances, does anyone on the list have suggestions on a >8 port L2 switch with fiber ports, based on personal experience? Lots of 48 port gig switches have 2-4 fiber uplink ports, but this means daisy-chains instead of hub/spoke. Looking for a central switch for a star topology to home fiber runs that is cost effective and works.

Considering: DLink DXS-3326GSR NetGear GSM7312 Foundry SX-FI12GM-4 Zyxel GS-4012F

I realize not all these switches are IEEE 802.3ae, Clause 49 or IEEE 802.3aq capable.

Andrew

-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Valdis.Kletnieks@vt.edu
Sent: Tuesday, April 01, 2008 5:06 PM
To: Michael Loftis
Cc: frank@dticonsulting.com; michael.dillon@bt.com; nanog@merit.edu
Subject: Re: cooling door

On Tue, 01 Apr 2008 16:48:47 MDT, Michael Loftis said:
Yeah except in a lot of areas there is no MAN, and the ILECs want to bend you over for any data access. I've no idea how well the MAN idea is coming along in various areas, but you still have to pay for access to it somehow, and that adds to overhead. Which leads to attempts to gain efficiency through centralization and increased density.
I doubt we'll ever see the day when running gigabit across town becomes cost effective when compared to running gigabit to the other end of your server room/cage/whatever.
I have one of these. http://www.netgear.com/Products/Switches/Layer3ManagedSwitches/GSM7328FS.asp...

Relatively inexpensive, and works happily with Cisco or OEM GBICs. I've always had good success working with their engineering folks for feature requests and troubleshooting. My main gripe is there is no good RPS option for it right now.

Are you running the fiber connections as GigE, or 100FX? Are they L2 or L3 routed connections?

Andrew Staples wrote:
Speaking of running gig long distances, does anyone on the list have suggestions on a >8 port L2 switch with fiber ports based on personal experience? Lots of 48 port gig switches have 2-4 fiber uplink ports, but this means daisy-chains instead of hub/spoke. Looking for a central switch for a star topology to home fiber runs that is cost effective and works.
Considering: DLink DXS-3326GSR NetGear GSM7312 Foundry SX-FI12GM-4 Zyxel GS-4012F
I realize not all these switches are IEEE 802.3ae, Clause 49 or IEEE 802.3aq capable.
Andrew
-----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Valdis.Kletnieks@vt.edu Sent: Tuesday, April 01, 2008 5:06 PM To: Michael Loftis Cc: frank@dticonsulting.com; michael.dillon@bt.com; nanog@merit.edu Subject: Re: cooling door
On Tue, 01 Apr 2008 16:48:47 MDT, Michael Loftis said:
Yeah except in a lot of areas there is no MAN, and the ILECs want to bend you over for any data access. I've no idea how well the MAN idea is coming along in various areas, but you still have to pay for access to it somehow, and that adds to overhead. Which leads to attempts to gain efficiency through centralization and increased density.
I doubt we'll ever see the day when running gigabit across town becomes cost effective when compared to running gigabit to the other end of your server room/cage/whatever.
Speaking of running gig long distances, does anyone on the list have suggestions on a >8 port L2 switch with fiber ports based on personal experience? Lots of 48 port gig switches have 2-4 fiber uplink ports, but this means daisy-chains instead of hub/spoke. Looking for a central switch for a star topology to home fiber runs that is cost effective and works.
Considering: DLink DXS-3326GSR NetGear GSM7312 Foundry SX-FI12GM-4 Zyxel GS-4012F
I realize not all these switches are IEEE 802.3ae, Clause 49 or IEEE 802.3aq capable.
Cost effective would probably be the Dell 6024F. We have some of these and they've worked well, but we're not making any use of their "advanced features." Can be had cheaply on eBay these days. Has basic L3 capabilities (small forwarding table, OSPF), built in redundant power supply, etc. If you're fine with a non-ae/aq switch, these are worth considering. 16 SFP plus 8 shared SFP/copper make it a fairly flexible device.

You did say cost effective, right? :-)

... JG
--
Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
"We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN)
With 24 million small businesses in the US alone, that's way too many apples.
On Tue, 1 Apr 2008, Andrew Staples wrote:
Speaking of running gig long distances, does anyone on the list have suggestions on a >8 port L2 switch with fiber ports based on personal experience? Lots of 48 port gig switches have 2-4 fiber uplink ports, but this means daisy-chains instead of hub/spoke. Looking for a central switch for a star topology to home fiber runs that is cost effective and works.
Nortel 5530 has 24x copper, 12x SFP and 2x10GE (XFP): http://www.nortel.com/ers5530 We don't have 5530's but we have good experience with 5510 as L2 device. In your case you may be looking for a larger number of SFP GE ports though. -- Pekka Savola "You each name yourselves king, yet the Netcore Oy kingdom bleeds." Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
We run the Nortel 5530. They are not exactly "cheap" by my standards for 24 GE (10k list), but they do have 2x10G. Also they don't play nice with RSTP to Cisco, and I still can't figure out how to get it to show me STP port status. Both vendors in the tree think they're root. CLI is tolerable, but if you get into LACP config or want to know what knobs are set for a given port, you'll want to light them on fire (many vendors suck this way anyway). L3 features are lacking: no ACLs, pay extra for OSPF.

For 24 ports of GE and 2x 10G XFP, they'll work nicely as a simple L2 switch; just don't stack 'em (does not work as advertised when making changes under load), and dump some of the vendor proprietary defaults such as nortel-mstp and MLT link aggs.

On 4/1/08, Pekka Savola <pekkas@netcore.fi> wrote:
On Tue, 1 Apr 2008, Andrew Staples wrote:
Speaking of running gig long distances, does anyone on the list have suggestions on a >8 port L2 switch with fiber ports based on personal experience? Lots of 48 port gig switches have 2-4 fiber uplink ports, but this means daisy-chains instead of hub/spoke. Looking for a central switch for a star topology to home fiber runs that is cost effective and works.
Nortel 5530 has 24x copper, 12x SFP and 2x10GE (XFP): http://www.nortel.com/ers5530
We don't have 5530's but we have good experience with 5510 as L2 device. In your case you may be looking for a larger number of SFP GE ports though.
-- Pekka Savola "You each name yourselves king, yet the Netcore Oy kingdom bleeds." Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
The HP ProCurve 3500/5400 range is working nicely for us. The 3500 is a fixed-format 24/48 port 10/100/1000 with 4 dual-personality SFP ports. You can slap a 10Gig expansion module in the back. Around $3k/$6k for the 24/48 port.

We prefer the 5406 chassis, as it is more flexible. Does consume more space though. We only run basic L2 stuff, although the unit supports L3.

Tim:>

On Thu, Apr 3, 2008 at 10:05 AM, Kevin Blackham <blackham@gmail.com> wrote:
We run nortel 5530. They are not exactly "cheap" by my standards for 24 GE (10k list), but they do have 2x10G. Also they don't play nice with rstp to cisco, and I still can't figure out how to get it to show me stp port status. Both vendors in the tree think they're root. CLI is tolerable, but if you get into lacp config or want to know what knobs are set for a given port, you'll want to light them on fire (many vendors suck this way anyway). L3 features are lacking. No ACLs, pay extra for OSPF.
For 24 ports of GE and 2x 10G XFP, they'll work nice as a simple L2 switch, just don't stack em (does not work as advertised when making changes under load), and dump some of the vendor proprietary defaults such as nortel-mstp and mlt link aggs.
On 4/1/08, Pekka Savola <pekkas@netcore.fi> wrote:
On Tue, 1 Apr 2008, Andrew Staples wrote:
Speaking of running gig long distances, does anyone on the list have suggestions on a >8 port L2 switch with fiber ports based on personal experience? Lots of 48 port gig switches have 2-4 fiber uplink ports, but this means daisy-chains instead of hub/spoke. Looking for a central switch for a star topology to home fiber runs that is cost effective and works.
Nortel 5530 has 24x copper, 12x SFP and 2x10GE (XFP): http://www.nortel.com/ers5530
We don't have 5530's but we have good experience with 5510 as L2 device. In your case you may be looking for a larger number of SFP GE ports though.
-- Pekka Savola "You each name yourselves king, yet the Netcore Oy kingdom bleeds." Systems. Networks. Security. -- George R.R. Martin: A Clash of Kings
On Thu, 3 Apr 2008, Tim Durack wrote:
The HP 3500 is a fixed format 24/48 port 10/100/1000 with 4 dual-personality SFP ports. You can slap a 10Gig expansion module in the back. Around $3k/$6k for the 24/48 port
We only run basic L2 stuff, although the unit supports L3.
Foundry FLS624/648 is half that cost (well, 2500/3500), and the 10G modules are just SFPs....and if you're not doing L3, why buy L3? :) 2x10G for the 24, 4x10G for the 48....

...david

---
david raistrick http://www.netmeister.org/news/learn2quote.html
drais@icantclick.org http://www.expita.com/nomime.html
I guess we've had good experience with ProCurve, so have stuck with them. I also like the fact that it is the same code train for the 5400 chassis and 3500 fixed-format. Less work for me when it comes to the test-upgrade cycle. We also make heavy use of sFlow monitoring (although I believe Foundry supports this too.)

Anyway, it's another option.

Tim:>

On Thu, Apr 3, 2008 at 12:00 PM, david raistrick <drais@icantclick.org> wrote:
On Thu, 3 Apr 2008, Tim Durack wrote:
The HP 3500 is a fixed format 24/48 port 10/100/1000 with 4
dual-personality SFP ports. You can slap a 10Gig expansion module in the back. Around $3k/$6k for the 24/48 port
We only run basic L2 stuff, although the unit supports L3.
Foundry FLS624/648 is half that cost (well, 2500/3500), and the 10G modules are just SFPs....and if you're not doing L3, why buy L3? :) 2x10G for the 24, 4x10G for the 48....
...david
--- david raistrick http://www.netmeister.org/news/learn2quote.html drais@icantclick.org http://www.expita.com/nomime.html
Tim Durack wrote:
I guess we've had good experience with ProCurve, so have stuck with them. I also like the fact that it is the same code train for the 5400 chassis and 3500 fixed-format. Less work for me when it comes to the test-upgrade cycle. We also make heavy use of sFlow monitoring (although I believe Foundry supports this too.)
Anyway, it's another option.
As another Procurve user (also 3500's)...I'd point out that Procurve has a lifetime warranty on their gear.

--
Jeff McAdams
"They that can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -- Benjamin Franklin
As another Procurve user (also 3500's)...I'd point out that Procurve has a lifetime warranty on their gear.
...and the stuff seems to last forever too. We still have quite a few 4000M-series 80-port switches in production, as well as newer models. They are bulletproof... just run and run and run. The couple of times we've seen components fail (or even just act funny!) HP has sent us replacement parts without hesitation. Great products, great service. --chuck
Just a question: what is the group's opinion on Dell L3 PowerConnect switches?

Best regards,
Fernando

Jeff McAdams wrote:
Tim Durack wrote:
I guess we've had good experience with ProCurve, so have stuck with them. I also like the fact that it is the same code train for the 5400 chassis and 3500 fixed-format. Less work for me when it comes to the test-upgrade cycle. We also make heavy use of sFlow monitoring (although I believe Foundry supports this too.)
Anyway, it's another option.
As another Procurve user (also 3500's)...I'd point out that Procurve has a lifetime warranty on their gear.
-- Fernando Ribeiro Departamento de Internet Tvtel Comunicações S.A. http://www.tvtel.pt/
Are you wanting hardened devices for an outside cabinet install (if it's going outside then you'd better want hardened devices), or is this for an internal, environmentally-sound install? What's your definition of "long distance"? 1800ft, 10km, 20km, 40km, 70, 80, 110? Assuming SMF, do you need simplex or duplex?

Have you considered talking to an FTTx vendor? We use Occam here and they make some cost-effective fiber products built with FTTx in mind. I'm not sure how they compare price-wise with the products you listed below, but it might be worth checking out.

Justin

Andrew Staples wrote:
Speaking of running gig long distances, does anyone on the list have suggestions on a >8 port L2 switch with fiber ports based on personal experience? Lots of 48 port gig switches have 2-4 fiber uplink ports, but this means daisy-chains instead of hub/spoke. Looking for a central switch for a star topology to home fiber runs that is cost effective and works.
Considering: DLink DXS-3326GSR NetGear GSM7312 Foundry SX-FI12GM-4 Zyxel GS-4012F
I realize not all these switches are IEEE 802.3ae, Clause 49 or IEEE 802.3aq capable.
Andrew
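For the reach question Justin asks above (10km, 20km, 40km and beyond over SMF), a hypothetical back-of-envelope link-budget check in Python; the loss figures and optic power budgets are generic ballpark assumptions, not datasheet numbers for any of the switches or optics named in this thread:

    def link_loss_db(km, db_per_km, connector_pairs=4, margin_db=3.0):
        # Fibre attenuation + mated connector pairs + a safety margin
        # for splices, ageing and dirty patches.
        return km * db_per_km + connector_pairs * 0.5 + margin_db

    # (distance km, assumed fibre loss dB/km, assumed optic power budget dB)
    cases = [
        (10, 0.35, 10.5),   # 1310nm "LX/LH"-class reach
        (40, 0.25, 18.0),   # 1550nm "EX"-class reach
        (80, 0.25, 23.0),   # 1550nm "ZX"-class reach
    ]
    for km, db_per_km, budget in cases:
        loss = link_loss_db(km, db_per_km)
        verdict = "ok" if budget >= loss else f"short by {loss - budget:.1f} dB"
        print(f"{km:>3} km: ~{loss:.1f} dB path loss vs {budget} dB budget -> {verdict}")

The pattern is the usual one: fibre attenuation plus connector and margin losses has to fit inside the optic's power budget, and the longer reaches push you toward 1550nm parts and cleaner patching regardless of whose box the optic plugs into.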
I doubt we'll ever see the day when running gigabit across town becomes cost effective when compared to running gigabit to the other end of your server room/cage/whatever.
You show me the ISP with the majority of their userbase located at the other end of their server room, and I'll concede the argument. Last time I looked the eyeballs were across town so I already have to deliver my gigabit feed across town. My theory is that you can achieve some scaling advantages by delivering it from multiple locations instead of concentrating one end of that gigabit feed in a big blob data center where the cooling systems will fail within an hour or two of a major power systems failure.

--Michael Dillon
On Wed, Apr 2, 2008 at 3:06 AM, <michael.dillon@bt.com> wrote:
I doubt we'll ever see the day when running gigabit across town becomes cost effective when compared to running gigabit to the other end of your server room/cage/whatever.
You show me the ISP with the majority of their userbase located at the other end of their server room, and I'll concede the argument.
Last time I looked the eyeballs were across town so I already have to deliver my gigabit feed across town. My theory is that you can achieve some scaling advantages by delivering it from multiple locations instead of concentrating one end of that gigabit feed in a big blob data center where the cooling systems will fail within an hour or two of a major power systems failure.
It might be worth the effort to actually operate a business with real datacenters and customers before going off with these homilies. Experience says that for every transaction sent to the user, there are a multiplicity of transactions on the backend that need to occur. This is why the bandwidth into a datacenter is often 100x smaller than the bandwidth inside the datacenter. Communication within a rack, communication within a cluster, communication within a colo and then communication within a campus are different than communication with a user. /vijay
--Michael Dillon
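A hypothetical way to picture the backend multiplication vijay describes above, in Python; the fan-out and byte figures below are invented assumptions chosen to reproduce a roughly 100x relationship, not measurements from any operation:

    USER_GBPS = 10              # assumed traffic to/from the eyeballs
    BACKEND_CALLS_PER_TX = 50   # assumed internal calls per user transaction
    BYTES_RATIO_PER_CALL = 2.0  # assumed internal bytes moved per call vs. user bytes

    internal_gbps = USER_GBPS * BACKEND_CALLS_PER_TX * BYTES_RATIO_PER_CALL
    print(f"{USER_GBPS} Gbps at the front door -> ~{internal_gbps:.0f} Gbps on the internal fabric "
          f"({internal_gbps / USER_GBPS:.0f}x)")

That internal multiple is cheap to carry across a rack row and very expensive to carry across a metro, which is why traffic within a rack, a cluster, a colo and a campus gets engineered so differently from traffic to the user.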
On Wed, Apr 2, 2008 at 6:06 AM, <michael.dillon@bt.com> wrote:
I doubt we'll ever see the day when running gigabit across town becomes cost effective when compared to running gigabit to the other end of your server room/cage/whatever.
You show me the ISP with the majority of their userbase located at the other end of their server room, and I'll concede the argument.
Last time I looked the eyeballs were across town so I already have to deliver my gigabit feed across town. My theory is that you can achieve some scaling advantages by delivering it from multiple locations instead of concentrating one end of that gigabit feed in a big blob data center where the cooling systems will fail within an hour or two of a major power systems failure.
That would be a choice for most of us. -M<
participants (19)

- Andrew Staples
- Christopher Morrow
- chuck goolsbee
- David Coulson
- david raistrick
- Fernando André
- Frank Coluccio
- Jeff McAdams
- Joe Greco
- Justin Shore
- Kevin Blackham
- Martin Hannigan
- Michael Loftis
- michael.dillon@bt.com
- Mikael Abrahamsson
- Pekka Savola
- Tim Durack
- Valdis.Kletnieks@vt.edu
- vijay gill