Server rental inside of One Wilshire in Los Angeles
Asking for a friend, please contact me off list.

The ask:
Multi-core server + 32G memory (or 64G)
more than 1T storage space.
At least 4 10GE optical ports.
Linux OS
1 year term

Thanks
Walt
I can't help you, but I'm just awfully curious and must ask, why specifically optical ports? Seems very strange and a limiting requirement for upside that my imagination struggles to find.

On Tue, 6 Aug 2024 at 21:51, Walt <walt@wollny.org> wrote:
Asking for a friend, please contact me off list.
The ask:
Multi-core server + 32G memory (or 64G)
more than 1T storage space.
At least 4 10GE optical ports.
Linux OS
1 year term
Thanks
Walt
-- ++ytti
On 8/7/24 08:01, Saku Ytti wrote:
I can't help you, but I'm just awfully curious and must ask, why specifically optical ports? Seems very strange and a limiting requirement for upside that my imagination struggles to find.
Many of the reasons I've heard for folk going optical for servers at 10G vs. copper come down to power, and the ensuing thermal management that follows 10G copper installations vs. SFP+.

Of course, there is always the distance issue, but that will vary from user to user.

Power seems to be the biggest concern. Some folk even prefer DAC over RJ-45 for the same reason.

Others consider space and bulkiness, which is where fibre beats DAC and UTP.

Mark.
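To put rough numbers on the power angle, here is a quick sketch; the per-port wattages are commonly quoted ballpark figures and should be treated as assumptions, since actual draw varies a lot by PHY generation and optic:

# Back-of-envelope power comparison for a rack of server-facing 10G ports.
# Per-port wattages below are assumed ballpark figures, not measurements.
PORT_WATTS = {
    "10GBASE-T (RJ-45)": 3.0,  # older PHYs drew more, newer ones less
    "SFP+ SR optic":     1.0,
    "SFP+ DAC":          0.2,
}

PORTS = 40 * 4  # e.g. 40 servers with 4 x 10G ports each (one end of each link)

for media, watts in PORT_WATTS.items():
    print(f"{media:<18} ~{watts * PORTS:5.0f} W across {PORTS} ports")

Multiply the 10GBASE-T figure across both ends of every link and the thermal argument in favour of SFP+ or DAC becomes clearer.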
On 8/7/24 08:17, Mark Tinka wrote:
On 8/7/24 08:01, Saku Ytti wrote:
I can't help you, but I'm just awfully curious and must ask, why specifically optical ports? Seems very strange and a limiting requirement for upside that my imagination struggles to find.
Many of the reasons I've heard for folk going optical for servers at 10G vs. copper come down to power, and the ensuing thermal management that follows 10G copper installations vs. SFP+.
Of course, there is always the distance issue, but that will vary from user to user.
Power seems to be the biggest concern. Some folk even prefer DAC over RJ-45 for the same reason.
Others consider space and bulkiness, which is where fibre beats DAC and UTP.
Mark.
Many of the big DCs don't do copper xconns anymore, so if you have a server with optical NICs, you don't need a switch or media-converter.
On 8/7/24 16:14, Bryan Holloway wrote:
Many of the big DCs don't do copper xconns anymore, so if you have a server with optical NICs, you don't need a switch or media-converter.
If it's in-rack or in-cage (or even across a contiguous row of racks), most data centres may permit your own copper cross-connects.

Mark.
bryan@shout.net (Bryan Holloway) wrote:
Many of the big DCs don't do copper xconns anymore, so if you have a server with optical NICs, you don't need a switch or media-converter.
Which is really detrimental if you need to OOB connect a server. IPMI ports are generally copper; I suppose that will change, but it hasn't yet.

El "pissed off by some of those folks, really" mar
On 8/7/24 17:38, Elmar K. Bins wrote:
Which is really detrimental if you need to OOB connect a server. IPMI ports are generally copper; I suppose that will change, but it hasn't yet.
Unless others have done it differently, what I used to do was run fibre to whatever the local terminal server's gateway router was, and use copper within or between (nearby) racks between the terminal server and the end device.

Mark.
On 8/7/24 02:01, Saku Ytti wrote:
I can't help you, but I'm just awfully curious and must ask, why specifically optical ports? Seems very strange and a limiting requirement for upside that my imagination struggles to find.
Among the other reasons folks have given, the 10GBASE-T PHY has added latency beyond the basic packetization/serialization delay inherent to Ethernet due to the use of a relatively long line code plus LDPC. It's not much (2-4us which is still less than 1000BASE-T serialization+packetization latency with larger packets), but it's more than 10GBASE-R PHYs. The HFT guys may care, but most other folks probably don't give a hoot.
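For context, the serialization arithmetic behind that comparison works out roughly as follows (a quick sketch using the usual line-rate figures; the 2-4 us PHY penalty is the range cited above, not something computed here):

# Serialization delay of one full-size frame vs. the extra 10GBASE-T PHY latency.
FRAME_BYTES = 1500  # MTU-size frame; preamble and inter-frame gap ignored

def serialization_us(frame_bytes, gbps):
    """Time to clock one frame onto the wire, in microseconds."""
    return frame_bytes * 8 / (gbps * 1e9) * 1e6

print(f"1000BASE-T, 1500 B frame: {serialization_us(FRAME_BYTES, 1):.1f} us")   # ~12.0 us
print(f"10GBASE-R,  1500 B frame: {serialization_us(FRAME_BYTES, 10):.2f} us")  # ~1.20 us
print("10GBASE-T PHY penalty:     ~2-4 us on top of the 10G figure")

So the extra 2-4 us is indeed smaller than the ~12 us you already pay just serializing a large frame at 1 Gbps, which is the point being made.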
On Wed, 7 Aug 2024 at 17:41, Brandon Martin <lists.nanog@monmotha.net> wrote:
Among the other reasons folks have given, the 10GBASE-T PHY has added latency beyond the basic packetization/serialization delay inherent to Ethernet due to the use of a relatively long line code plus LDPC. It's not much (2-4us which is still less than 1000BASE-T serialization+packetization latency with larger packets), but it's more than 10GBASE-R PHYs. The HFT guys may care, but most other folks probably don't give a hoot.
I think this is the least bad explanation. Some explanations are that copper may not be available, but that doesn't explain preference. Nor do I think wattage/heat explains preference, as it's hosted, so customers probably shouldn't care. Latency could very well explain preference, but it seems doubtful: when hardware is so underspecified, surely if you are talking in a single-microsecond or nanosecond budget, the actual hardware becomes very important, so I think the lack of specificity there implies it's not about latency.

-- ++ytti
On Wed, Aug 7, 2024 at 12:52 PM Saku Ytti <saku@ytti.fi> wrote:
budget, the actual hardware becomes very important, so I think the lack of specificity there implies it's not about latency.
I'd bet the real answer is that someone wants to connect a commodity server to an IX and pretend to be some network/asn and then do some not terrific things with that setup :( seen this in AMSIX and DECIX ... don't know that I've not seen it also at 1-wilshire ;(
On Wed, 7 Aug 2024 at 20:05, Christopher Morrow <morrowc.lists@gmail.com> wrote:
I'd bet the real answer is that someone wants to connect a commodity server to an IX and pretend to be some network/asn and then do some not terrific things with that setup :(
seen this in AMSIX and DECIX ... don't know that I've not seen it also at 1-wilshire ;(
This seems very plausible, considering the chosen demo. Thanks. -- ++ytti
CoreSite now charges a disconnect fee for all cross-connects in addition to the MRC and connection fee. If you don't plan to cross-connect at CoreSite LA1 (One Wilshire), you may consider other nearby facilities. Most facilities are backhauled there anyway.

On Thu, Aug 8, 2024 at 1:15 PM Saku Ytti <saku@ytti.fi> wrote:
On Wed, 7 Aug 2024 at 20:05, Christopher Morrow <morrowc.lists@gmail.com> wrote:
I'd bet the real answer is that someone wants to connect a commodity server to an IX and pretend to be some network/asn and then do some not terrific things with that setup :(
seen this in AMSIX and DECIX ... don't know that I've not seen it also at 1-wilshire ;(
This seems very plausible, considering the chosen demo. Thanks.
-- ++ytti
nanog@nanog.org (Siyuan Miao via NANOG) wrote:
CoreSite now charges a disconnect fee for all cross-connects in addition to the MRC and connection fee.
As have so many others. It could be justified *if* they actually removed the physical cross-connect. My last visit to the site was >10 years ago, and by then they had *not* removed any of the cabling, some of which looked like it was put in in the Seventies. Has that changed in 1Wil?

Elmar.
On 8/7/24 22:18, Siyuan Miao via NANOG wrote:
CoreSite now charges a disconnect fee for all cross-connects in addition to the MRC and connection fee.
Wait, really? How new is this fee? As a current CoreSite customer we have not noticed any disconnect fee, only an NRC for the setup and then an MRC for the fiber cross-connect.

-- Adam Brenner
https://aeb.io/
While it may be a plausible scenario, IMO it is highly unlikely (< 0.000000000001%) that this is the case here, given the person who is asking...

Regards,
Christopher Hawker

On Thursday, August 8, 2024 at 3:13 PM, Saku Ytti <saku@ytti.fi> wrote:

On Wed, 7 Aug 2024 at 20:05, Christopher Morrow <morrowc.lists@gmail.com> wrote:
I'd bet the real answer is that someone wants to connect a commodity server to an IX and pretend to be some network/asn and then do some not terrific things with that setup :(
seen this in AMSIX and DECIX ... don't know that I've not seen it also at 1-wilshire ;(
This seems very plausible, considering the chosen demo. Thanks. -- ++ytti
On 8/7/24 18:52, Saku Ytti wrote:
I think this is the least bad explanation. Some explanations are that copper may not be available, but that doesn't explain preference. Nor do I think wattage/heat explains preference, as it's hosted, so customers probably shouldn't care. Latency could very well explain preference, but it seems doubtful: when hardware is so underspecified, surely if you are talking in a single-microsecond or nanosecond budget, the actual hardware becomes very important, so I think the lack of specificity there implies it's not about latency.
I don't think you are going to get a single answer that has an overwhelming majority of support for fibre. Use-cases are very different, and what you will see in the field is some statistically insignificant representation of each of the reasons given.

By and large, I can say the bulk of servers are cabled with copper, which would make fibre niche if you took a global view. On that basis, squabbling over the reason is inconsequential.

Mark.
From a strictly physical cabling point of view, while 10GBASE-T is likely to work on ordinary Cat 5e or Cat 6 cabling at very short distances, such as from a server to a top-of-rack aggregation switch, more successful results will be seen with Cat 6a.

Your typical Cat 6a cable is significantly fatter in diameter, less flexible, and takes up much more space inside the vertical cabling management up and down the inside of a dense cabinet, compared to an ordinary figure-8-shaped duplex singlemode fiber patch cable. And even more space savings are possible with single-tube/uniboot, 1.6 mm diameter patch cables.

On Wed, Aug 7, 2024, 3:48 PM Saku Ytti <saku@ytti.fi> wrote:
On Wed, 7 Aug 2024 at 17:41, Brandon Martin <lists.nanog@monmotha.net> wrote:
Among the other reasons folks have given, the 10GBASE-T PHY has added latency beyond the basic packetization/serialization delay inherent to Ethernet due to the use of a relatively long line code plus LDPC. It's not much (2-4us which is still less than 1000BASE-T serialization+packetization latency with larger packets), but it's more than 10GBASE-R PHYs. The HFT guys may care, but most other folks probably don't give a hoot.
I think this is the least bad explanation. Some explanations are that copper may not be available, but that doesn't explain preference. Nor do I think wattage/heat explains preference, as it's hosted, so customers probably shouldn't care. Latency could very well explain preference, but it seems doubtful: when hardware is so underspecified, surely if you are talking in a single-microsecond or nanosecond budget, the actual hardware becomes very important, so I think the lack of specificity there implies it's not about latency.
-- ++ytti
On Wed, Aug 7, 2024 at 4:07 PM Eric Kuhnke <eric.kuhnke@gmail.com> wrote:
Your typical cat 6A cable is significantly fatter in diameter, less flexible and [...]
Hi Eric,

All of these are excellent reasons why the DC -operator- should want to use fiber in 10GE links.

The question was: why does a DC -customer- want 40 gigs of specifically fiber optic connections in what is otherwise a minimum server configuration, the sort that easily fits in 1U? The Linux network stack would struggle to even drive 40 gigs; you'd be into very custom network software built with something like DPDK. But the guy hasn't placed any conditions on the available network infrastructure and connectivity except that it offer 4x 10gig fiber optic ethernet. That's weird.

Regards,
Bill Herrin

--
William Herrin
bill@herrin.us
https://bill.herrin.us/
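As a rough illustration of why driving the full 40 Gb/s is hard without kernel bypass, the packet-rate arithmetic looks like this (standard Ethernet on-wire overhead assumed; the frame sizes are just examples, not anything specified in the original ask):

# Packets per second required to saturate 4 x 10GbE at various frame sizes.
# On-wire cost per frame = frame + preamble/SFD (8 B) + inter-frame gap (12 B).
LINK_BPS = 40e9
OVERHEAD_BYTES = 8 + 12

def pps(frame_bytes, link_bps=LINK_BPS):
    return link_bps / ((frame_bytes + OVERHEAD_BYTES) * 8)

for size in (64, 512, 1500):
    print(f"{size:>4} B frames: {pps(size) / 1e6:6.1f} Mpps")
# ~59.5 Mpps at 64 B and ~3.3 Mpps at 1500 B; the small-packet end is well
# beyond what a stock kernel stack moves per core, hence DPDK and friends.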
On 8/8/24 03:37, William Herrin wrote:
Hi Eric,
All of these are excellent reasons why the DC -operator- should want to use fiber in 10GE links.
The question was: why does a DC -customer- want 40 gigs of specifically fiber optic connections in what is otherwise a minimum server configuration, the sort that easily fits in 1U. The Linux network stack would struggle to even drive 40 gigs; you'd be into very custom network software built with something like DPDK but the guy hasn't placed any conditions on the available network infrastructure and connectivity except that it offer 4x 10gig fiber optic ethernet. That's weird.
"Weird" is not what I would use to describe it. Unusual, perhaps. I mean, vendors are producing optical NIC's for servers. I'm aware of some deployments that struggled with availability of copper-based switches and NIC's 2020/2021, but SFP28 was available, so they moved to that. Again, a special case. Copper will continue to dominate the server market for a while yet. Mark.
I completely agree; the original "rfq" is super suspicious. There's no need to be specifically at One Wilshire for a single 1U server (particularly with only 10GbE interfaces, not 100), since the most effective use of being at a major interconnect point like that comes only if you're prepared to incur the recurring monthly expense of many intra-building cross-connects. Realistic use by a small ISP that needs a presence there would be more like a minimum of 1/4 of a cabinet in its own compartment.

On Wed, Aug 7, 2024, 6:38 PM William Herrin <bill@herrin.us> wrote:
On Wed, Aug 7, 2024 at 4:07 PM Eric Kuhnke <eric.kuhnke@gmail.com> wrote:
Your typical cat 6A cable is significantly fatter in diameter, less flexible and [...]
Hi Eric,
All of these are excellent reasons why the DC -operator- should want to use fiber in 10GE links.
The question was: why does a DC -customer- want 40 gigs of specifically fiber optic connections in what is otherwise a minimum server configuration, the sort that easily fits in 1U. The Linux network stack would struggle to even drive 40 gigs; you'd be into very custom network software built with something like DPDK but the guy hasn't placed any conditions on the available network infrastructure and connectivity except that it offer 4x 10gig fiber optic ethernet. That's weird.
Regards, Bill Herrin
-- William Herrin bill@herrin.us https://bill.herrin.us/
On 08/08/2024 00:07:44, "Eric Kuhnke" <eric.kuhnke@gmail.com> wrote:
From a strictly physical cabling point of view, while 10GBaseT is likely to work on ordinary cat5e or cat 6 cabling at very short distances such as from a server to a top of rack aggregation switch, more successful results will be seen with cat6a.
Your typical cat 6A cable is significantly fatter in diameter, less flexible and takes up much more space inside vertical cabling management
While some Cat 6a cables are thicker than regular Cat 5e, thinner cables are available for in-rack use, such as 4.2 mm 28AWG Cat 6a: https://www.fs.com/uk/products/151381.html
And even more space savings are possible with single tube/uniboot, 1.6 mm diameter patch cables.
It is hard to compete with that, but relaxing to Cat 6 (and perhaps dubious compliance) there are even thinner 32AWG cables; while they are no use for PoE, they are OK for the 2 or 3 m needed in a rack, e.g. 2.8 mm: https://www.amazon.co.uk/gp/product/B07Y3T6BYQ/

They are also good for all the console/management ports, which aren't fibre.

Systimax Cat 6a is a general problem; I've done two building rewires with Nexans Cat 7a S/FTP where there was insufficient containment space for Systimax 6A.

brandon
participants (13)
- Adam Brenner
- Brandon Butterworth
- Brandon Martin
- Bryan Holloway
- Christopher Hawker
- Christopher Morrow
- Elmar K. Bins
- Eric Kuhnke
- Mark Tinka
- Saku Ytti
- Siyuan Miao
- Walt
- William Herrin