I'm curious what people here have found as a good standard for providing solid speedtest results to customers. All our techs have Dell laptops of various models, but we always hit 100% CPU when doing an Ookla speedtest against a server we have on site. So if you have a customer paying for 600M or 1000M symmetric, they get mad and demand you prove it's full speed. At that point we have to roll out different people with JDSUs to test and prove it's functional, where an Ookla result would substitute fine if we didn't have such crummy laptops. And that's even though, from what I can see in some Google results, we exceed the standards several providers call for.

Most of these complaints come from the typical "power" internet user, of course, who never actually uses more than 50M sustained on a residential connection, so running a circuit test on each turn-up is uncalled for.

Anyone have any suggestions on the requirements (CPU/RAM/etc.) for a laptop that can actually do symmetric gig, a rugged, small, inexpensive device we can roll with instead to prove it, or any other weird solution involving ritual sacrifice that isn't too offensive to the eyes?
We've found that running Windows in safe mode produces better results with Ookla. And Macs usually do better as well. We've gotten >900 Mb/s with those two approaches.
On Mon, Jul 16, 2018 at 01:02:28PM -0500, Dan White wrote:
We've found that running Windows in safe mode produces better results with Ookla. And Macs usually do better as well. We've gotten >900 Mb/s with those two approaches.
I've seen engineers even forget to account for differing behaviors of vendors, e.g., Juniper doesn't include the layer-2 header in its counters. This means a 920 Mb/s reading may actually be a 100% full link once you add back in the Ethernet framing. Remind folks that they are seeing TCP/UDP throughput and that there are Ethernet + IP headers involved. - Jared
+1 to Jared. I’ve seen people not account for this when sizing CoS as well on Juniper. -Eddie
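As a rough sanity check of the framing overhead Jared describes, here is a quick back-of-the-envelope calculation (a sketch assuming a 1500-byte IP MTU, 20-byte IP and TCP headers, and 38 bytes of Ethernet preamble, header, FCS and inter-frame gap per frame):

  # theoretical maximum TCP goodput on gigabit Ethernet, in Mb/s
  echo "scale=1; 1000 * 1460 / (1460 + 20 + 20 + 38)" | bc
  # prints 949.2; with 12 bytes of TCP timestamp options it drops to roughly 941

So a speedtest client reporting somewhere in the 940s is already running a 1G port at line rate.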
On 16/Jul/18 20:08, Jared Mauch wrote:
This means a 920 Mb/s reading may actually be a 100% full link once you add back in the Ethernet framing. Remind folks that they are seeing TCP/UDP throughput and that there are Ethernet + IP headers involved.
But you sold me 1Gbps. Stop stiffing me my 80Mbps :-)... Mark.
Which is why we over provision by 10%.
Get a better middle mile. That’s why we use Comcast for much of our middle mile.
On Jul 17, 2018, at 08:12, Mark Tinka <mark.tinka@seacom.mu> wrote:
On 17/Jul/18 14:07, Matt Hoppes wrote:
Which is why we over provision by 10%.
After a bunch of customers, it starts to add ($$) up.
And what do you do if you don't own the last mile, and your Layer 2 service provider sticks to the letter?
Mark.
Build your own last mile or order that 10% more? ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com
Most ISPs I know build their own last mile. ----- Mike Hammett
----- Original Message ----- From: "Mark Tinka" <mark.tinka@seacom.mu> Sent: Tuesday, July 17, 2018 10:45:48 AM Subject: Re: Proving Gig Speed
On 17/Jul/18 16:42, Mike Hammett wrote:
Build your own last mile...
Hmmh, don't know why everyone doesn't just do that.
or order that 10% more?
As Trump said, "All I can do is ask the question..."
Mark.
On Tue, Jul 17, 2018, at 16:42, Mike Hammett wrote:
Build your own last mile or order that 10% more?
Do you realize what you are saying? Let me offer a few translations:

1. "Don't spend N00 Currency/month for X Mbps from your customer to your aggregation DC on an existing NNI, but pay something like N0 KCurrency one shot (sometimes significantly more) + whatever is needed to extend your backbone to the customer area (long-haul capacity, equipment, housing, ...)." The CFO hates this unless you have enough customers in a single decently-sized area.

2. The "10% more" does not work this way. In this part of the world, the next step after 100 Mbps is 200 Mbps, and the next step after 1 Gbps (on a 1G port) is 2 Gbps (on a 10G port). You can't buy 110 Mbps or 1100 Mbps. You just can't over-provision L2 transport for those speeds. Even if you are in a situation where you really can over-provision, your customer stays yours only as long as the price is right. A competitor that does not over-provision but instead explains how things work ends up winning "your" customers.

3. There are zones where you are just not allowed to run your own local loop. The most common examples are airport and harbor areas. Then there are country-specific zones where you may not be allowed, and finally there are zones where a few select people do a lot of things so that only their favorite provider (usually the incumbent) deploys. A derivative of this is when the "select people" are the telecom regulator, which grants an almost-monopoly in certain areas to the first operator that comes in with a deployment plan for the whole area (fiber to anyone, individual and business, in an 80K-people town, with 10/15/20 years to do it).

You may argue that some of those issues do not apply in North America (the NA from NANOG), but NANOG became pretty much global :)
On 22/Jul/18 09:46, Radu-Adrian Feurdean wrote:
You may argue that some of those issues do not apply in North America (the NA from NANOG), but NANOG became pretty much global :)
I am certain that there are places in (North) America where you cannot "build your own" or "order 10% more"... Mark.
As someone that has built his own last-mile ISP and knows first hand literally hundreds of others and coaches thousands more through social media and a podcast, yes, I realize what I'm saying when I say to build your own last mile. ----- Mike Hammett
On 22/Jul/18 15:25, Mike Hammett wrote:
As someone that has built his own last-mile ISP and knows first hand literally hundreds of others and coaches thousands more through social media and a podcast, yes, I realize what I'm saying when I say to build your own last mile.
Hell, what are you doing all the way out there man? Wanna come to Africa and make some real bucks :-)... Mark.
I'm on a Mac and launch 40 speedtests at the same time and monitor interface bandwidth:

#!/bin/bash
for i in `./speedtest-cli --list | cut -f1 -d')' | head -n 40`; do
    ./speedtest-cli --server $i &
done

I've been able to saturate 10G links with this method.

-Matt

--
Matthew Crocker
Crocker Communications, Inc.
President
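If you want to watch the interface counters while a batch like that runs, something along these lines works on macOS (en0 is an assumption; substitute whatever interface the laptop is actually using):

  # print per-second packet and byte counters for en0
  netstat -w 1 -I en0

On Linux, reading /proc/net/dev or a tool like ifstat gives roughly the same view.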
We use iperf3 for customers that complain about throughput; it's relatively low overhead compared to the Ookla HTML5 client. Same scenario as you: we have the tech hook up their laptop to the customer's drop and perform testing. I suspect your antivirus may be attempting to perform real-time inspection on the http(s) traffic, which would crush the little laptop CPU for sure. Message me off-list and I'll send you a private iperf3 server IP to test with. -Matt
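For reference, a minimal iperf3 exchange of the sort described above might look like this (203.0.113.10 is a placeholder for whatever test server you point techs at):

  # on the test server
  iperf3 -s

  # on the tech's laptop: 8 parallel TCP streams for 30 seconds, then the same in reverse (download)
  iperf3 -c 203.0.113.10 -P 8 -t 30
  iperf3 -c 203.0.113.10 -P 8 -t 30 -R

Multiple parallel streams help a single laptop fill a gig where one TCP session may not.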
I have this deployed for our customers, it works well. I have yet to hear any complaints of not being able to max out a connection. https://github.com/adolfintel/speedtest
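For anyone who wants to kick the tires on that self-hosted speedtest, a quick sketch, assuming a box with git and PHP available (the project's README covers proper deployment behind a real web server):

  git clone https://github.com/adolfintel/speedtest.git
  cd speedtest
  php -S 0.0.0.0:8080    # quick test only; use Apache/nginx + PHP for anything real

Then point a LAN client at one of the example HTML pages shipped in the repo.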
Winner winner chicken dinner. I forgot to pull the "Antivirus is at fault" card from my deck. 250/675 with it installed, 920/920 when removed, so now I get to pass the issue onwards.

Thanks everyone for your replies, and for the pointers to the adolfintel/speedtest GitHub project; I'll definitely look at it as a replacement later.
It's a complete rabbit hole: different hardware with different browsers gives different readings, and even not having your laptop plugged into power can change results because the CPU drops into power save. The only reliable solution we found for field techs was the EXFO EX1. It still talks to the Ookla speedtest server, etc. Obviously this is a well-known issue and EXFO has a solution.

https://www.exfo.com/en/products/field-network-testing/network-protocol-test...

Carlos Alcantar
Race Communications / Race Team Member
Phone: +1 415 376 3314 / carlos@race.com / http://www.race.com
On Jul 16, 2018, at 4:31 PM, Carlos Alcantar <carlos@race.com> wrote:
It's a complete rabbit hole: different hardware with different browsers gives different readings, and even not having your laptop plugged into power can change results because the CPU drops into power save. The only reliable solution we found for field techs was the EXFO EX1. It still talks to the Ookla speedtest server, etc. Obviously this is a well-known issue and EXFO has a solution.
https://www.exfo.com/en/products/field-network-testing/network-protocol-test...
This is an interesting device. But the manufacturer's pages promote it like "Speedtest for Dummies". Why don't the User Manual or Spec Sheet mention IPv6 (or even IPv4)? I should think technicians would want technical answers. Cutler
That's absolutely true, but I don't see any real alternatives in some cases. I've actually built automated testing into some of the CPE we've deployed, and that works pretty well for some models, but other devices don't seem to be able to fill a ~500 Mbps link. On Tue, Jul 17, 2018 at 8:03 AM Mark Tinka <mark.tinka@seacom.mu> wrote:
On 16/Jul/18 22:31, Carlos Alcantar wrote:
It's a complete rabbit hole...
Couldn't have said it better myself!
Mark.
On 17/Jul/18 14:07, K. Scott Helms wrote:
That's absolutely true, but I don't see any real alternatives in some cases. I've actually built automated testing into some of the CPE we've deployed and that works pretty well for some models but other devices don't seem to be able to fill a ~500 mbps link.
So what are you going to do when 10Gbps FTTH into the home becomes the norm? Perhaps laptops and servers of the time won't even see this as a rounding error :-\... Mark.
10G to the home will be pointless as more and more people move away from Ethernet to WiFi where the noise floor for most installs prevents anyone from reaching 802.11n speeds, much less whatever alphabet soup comes later. ----- Mike Hammett
"There is no reason for any individual to have a computer in his home." "640K ought to be enough for anybody." On 7/17/18 10:41 AM, Mike Hammett wrote:
10G to the home will be pointless as more and more people move away from Ethernet to WiFi where the noise floor for most installs prevents anyone from reaching 802.11n speeds, much less whatever alphabet soup comes later.
Unrelated. ----- Mike Hammett
On Jul 17, 2018, at 9:41 AM, Mike Hammett <nanog@ics-il.net> wrote:
10G to the home will be pointless as more and more people move away from Ethernet to WiFi where the noise floor for most installs prevents anyone from reaching 802.11n speeds, much less whatever alphabet soup comes later.
Well, in a few years when we’re all watching 4D 32K Netflix on our 16-foot screens with 5 million DPI, it’ll make all the difference in the world, right? Tongue-in-cheek obviously. ---- Andy Ringsmuth andy@newslink.com News Link – Manager Technology, Travel & Facilities 2201 Winthrop Rd., Lincoln, NE 68502-4158 (402) 475-6397 (402) 304-0083 cellular
On Tue, 17 Jul 2018 at 17:45, Mike Hammett <nanog@ics-il.net> wrote:
10G to the home will be pointless as more and more people move away from Ethernet to WiFi where the noise floor for most installs prevents anyone from reaching 802.11n speeds, much less whatever alphabet soup comes later.
I admire your confidence, when historically we've had poor success with these types of predictions. I seriously doubt we're now living in a special time in history where we've found the limit of consumer bandwidth demand. While I have no idea what would drive it or how it would be implemented, I'm betting against myself having all the information about the future. -- ++ytti
On 17/Jul/18 17:38, Saku Ytti wrote:
I admire your confidence, when historically we've had poor success with these types of predictions. I seriously doubt we're now living in a special time in history where we've found the limit of consumer bandwidth demand. While I have no idea what would drive it or how it would be implemented, I'm betting against myself having all the information about the future.
+1. Mark.
We already supply far, far greater than the actual consumer usage (versus want or demand). Consumers are moving away from wired connections in the home for wireless connections (for obvious mobility and ease of setup where there isn't existing wired infrastructure). Consumers are moving away from power desktops and laptops to phones, tablets, and purpose-built appliances.

My in-laws have a Comcast service that's >100 megabit/s. The 2.4 and 5 GHz noise floors are so high (-50 to -75 dB, depending on channel and location within the house) that unless you're in the same room, you're not getting more than 10 megabit/s on wireless. On a wire, Comcast delivers full data rate. Speed tests from wire to wireless mirror the wireless to Internet performance.

If it can't be delivered within the home, delivering it to the home is pointless.

----- Mike Hammett
SoIP will surely require trigabits. Mike
hi,

Another prediction would be that your internet connection (and most devices in the house) will be connected by 5G - maybe with some local WiFi - 802.11ax - if there's still spectrum left after the LTE groups have taken it all for the aforementioned 5G purposes... legacy devices, still around for another decade or more, can have some 2.4GHz connectivity - that ISM band is troublesome to repurpose thanks to all the medical and video senders etc. Big old wild west there... alan
On Wed, 18 Jul 2018 at 00:47, Alan Buxey <alan.buxey@gmail.com> wrote:
Another prediction would be that your internet connection (and most devices in the house) will be connected by 5G - maybe with some local WiFi - 802.11ax - if there's still spectrum left after the LTE groups have taken it all for the aforementioned 5G purposes...
Already fairly common in Finland to have just an LTE dongle for Internet, especially for younger people. DNA quotes average consumption of 8GB per subscriber per month. You can get unlimited for 20 EUR/month, and it's much faster than DSL with lower latency. And if your home DSL is down, it may affect just you, so MTTR can be days, whereas on mobile the MTTR, even without calling anyone, is minutes or an hour. Even more strange, the providers, particularly one of them, are printing money. It's not immediately obvious to me why the same fundamentals do not seem to work elsewhere. In Cyprus I can't buy more than a 6GB contract and connectivity is spotty even in urban centres, which echoes my experience in the US and central EU. -- ++ytti
On 18/Jul/18 00:01, Saku Ytti wrote:
Already fairly common in Finland to have just an LTE dongle for Internet, especially for younger people. DNA quotes average consumption of 8GB per subscriber per month. You can get unlimited for 20 EUR/month, and it's much faster than DSL with lower latency. And if your home DSL is down, it may affect just you, so MTTR can be days, whereas on mobile the MTTR, even without calling anyone, is minutes or an hour. Even more strange, the providers, particularly one of them, are printing money. It's not immediately obvious to me why the same fundamentals do not seem to work elsewhere. In Cyprus I can't buy more than a 6GB contract and connectivity is spotty even in urban centres, which echoes my experience in the US and central EU.
Fairly common in Africa, where there is plenty of GSM infrastructure, and not so much fibre. Mark.
On 17/Jul/18 16:41, Mike Hammett wrote:
10G to the home will be pointless as more and more people move away from Ethernet to WiFi where the noise floor for most installs prevents anyone from reaching 802.11n speeds, much less whatever alphabet soup comes later.
Doesn't stop customers from buying it if it's cheap and available, and it won't stop them from wanting proof that they are getting 10Gbps as advertised. Mark.
I suppose in reality it’s no different than any other utility. My home has 200 amp electrical service. Will I ever use 200 amps at one time? Highly highly unlikely. But if my electrical utility wanted to advertise “200 amp service in all homes we supply!” they sure could. Would an electrician be able to test it? I’m sure there is a way somehow.

If me and everyone on my street tried to use 200 amps all at the same time, could the infrastructure handle it? Doubtful. But do I on occasion saturate my home fiber 300 mbit synchronous connection? Every now and then yes, but rarely. Although if I’m paying for 300 and not getting it, my ISP will be hearing from me.

If my electrical utility told me “hey, you can upgrade to 500 amp service for no additional charge” would I do it? Sure, what the heck. If my water utility said “guess what? You can upgrade to a 2-inch water line at no additional charge!” would I do it? Probably yeah, why not?

Would I ever use all that capacity on $random_utility at one time? Of course not. But nice to know it’s there if I ever need it.

---- Andy Ringsmuth
The difference, of course, between electricity and the Internet is that there is a lot more information and tools freely available online that Average Jane can arm herself with to run amok figuring out whether she is getting 300Mbps of her 300Mbps from her ISP. Average Jane couldn't care less about measuring whether she's getting 200 amps of her 200 amps from the power company; likely because there is a lot more structure to how power is produced and delivered, or, more to the point, a lot fewer freely available tools and information with which she can arm herself to run amok. To her, the power company sucks if the lights go out. In the worst case, if her power starts a fire, she's calling the fire department. Mark.
At least in the US, Jane also doesn’t really have a choice of her electricity provider, so she’s not getting bombarded with advertising from vendors selling “Faster WiFi” than the next guy. I don’t get to choose my method of power generation and therefore cost per kWh. I’d love to buy $.04 from the Pacific NW when I’m in the Southern US.

I’m not a betting guy, but my money says when self power generation hits some point and multiple vendors are trying to get people to buy their system, we’ll get “More amps per X hours of sunlight with our system” and she will care.

--- Keith Stokes Neill Technologies
On 18/Jul/18 23:56, Keith Stokes wrote:
At least in the US, Jane also doesn’t really have a choice of her electricity provider, so she’s not getting bombarded with advertising from vendors selling “Faster WiFi” than the next guy. I don’t get to choose my method of power generation and therefore cost per kWh. I’d love to buy $.04 from the Pacific NW when I’m in the Southern US.
And that's why I suspect that 10Gbps to the home will become a reality not out of necessity, but out of a race over who can out-market the other.

The problem for us as operators - which is what I was trying to explain - is that even though the home will likely not saturate that 10Gbps link, never mind even use 1% of it in any sustained fashion, we shall be left with the burden of proving the "I want to see my 10Gbps that I bought, or I'm moving to your competitor" case over and over again.

When are we going to stop feeding the monster we've created (or more accurately, that has been created for us)?

Mark.
There is a point beyond which the network ceases to be a serious imposition on what you are trying to do. When it gets there, it fades into the background as a utility function. The fact that multiple streaming audio/video applications in a household don't routinely cheese people off points to that threshold having been reached for those applications, at least in fixed networks.

For others it will still be a while. When that 5GB software update or a newly purchased 25GB game takes 20 minutes to deliver, that's a delay between intent and action that the user or service operator might seek to minimize. Likewise, latency or jitter associated with network resource contention impacts real-time applications. When the network is sufficiently scaled / well behaved that these applications can coexist without imposition, that stops being a point of contention.
On 19/Jul/18 14:57, joel jaeggli wrote:
There is a point beyond which the network ceases to be a serious imposition on what you are trying to do.
When it gets there, it fades into the background as a utility function.
I've seen this to be the case when customers are used to buying large capacity, i.e., 10Gbps, 20Gbps, 50Gbps, 120Gbps, e.t.c. Admittedly, these tend to be service providers or super large enterprises, and there is no way they are going to practically ask you to test their 50Gbps delivery - mostly because it's physically onerous, and also because they have some clue about speed tests not being any form of scientific measure. The problem is with the customers that buy orders of magnitude less than, say, 1Gbps. They will be interested in speed tests as a matter of course. We notice that as the purchased capacity goes up, customers tend to be less interested in speed tests. If anything, concern shifts to more important metrics such as packet loss and latency.
The fact that multiple streaming audio/video applications in a household don't routinely cheese people off points to that threshold having been reached for those applications, at least in fixed networks.
One angle of attack is to educate less savvy customers about bandwidth being more about supporting multiple users on the network at the same time all with smiles on their faces, than about it making things go faster. I had to tell a customer, recently, that more bandwidth will help increase application speed up to a certain point. After that, it's about being able to add users without each individual user being disadvantaged. Y'know, a case of 2 highway lanes running at 80km/hr vs. 25 highway lanes running at 80km/hr.
For others it will still be a while. When that 5GB software update or a newly purchased 25GB game takes 20 minutes to deliver, that's a delay between intent and action that the user or service operator might seek to minimize.
That's where the CDNs and content operators need to chip in and do their part. The physics is the physics, and while I can (could) install DownThemAll on my Firefox install to accelerate downloads, I don't have those options when waiting for my PS4 to download that 25GB purchase, or my iPhone to download that 5GB update.
Likewise, latency or jitter associated with network resource contention impacts real-time applications. When the network is sufficiently scaled / well behaved that these applications can coexist without imposition, that stops being a point of contention.
All agreed there. In our market, it's not a lack of backbone resources. It's that the majority of online resources customers are trying to reach are physically too far away. Mark.
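For the packet-loss and latency checks mentioned above, a quick sketch of the usual tools (the hostname is a placeholder; the flags are standard mtr and ping options):

  # per-hop loss and latency over 100 probes, in report form
  mtr --report --report-cycles 100 example.net

  # end-to-end loss and RTT over 100 pings, 5 per second
  ping -c 100 -i 0.2 example.net

Neither proves contractual bandwidth, but they surface the loss and jitter issues that matter more at these capacities.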
On Tue, Jul 17, 2018, at 18:12, Andy Ringsmuth wrote:
I suppose in reality it’s no different than any other utility. My home has 200 amp electrical service. Will I ever use 200 amps at one time?
No, because at 201 Amps instantaneous the breaker will cut everything.
Highly highly unlikely. But if my electrical utility wanted to advertise “200 amp service in all homes we supply!” they sure could. Would an electrician be able to test it? I’m sure there is a way somehow.
Will they deal with customers calling to complain that their (unknown to the utility) "megatron equipment" says it cannot draw 199 Amps from a single outlet? I don't think so. They just ensure the global breaker will not trigger when oven+microwave+home-wide air-con+water heating+BT rig in the basement all draw all they can (i.e. up to ~25 Amps each) for something like 5 min.
saturate my home fiber 300 mbit synchronous connection? Every now and then yes, but rarely. Although if I’m paying for 300 and not getting it, my ISP will be hearing from me.
Will you waste your time if some random site says "you have 200 Mbps"? On residential, we only accept complaints for tests run under pre-determined conditions (wired, no intermediate device, select set of test servers and tools, customer hardware check), and only for results lower than 60-70% of the "advertised speed". If wireless is involved, the test is dismissed as "dear customer, please fix your network, regards".
If my electrical utility told me “hey, you can upgrade to 500 amp
Are the 200 Amps written somewhere in the contract, or is that just what reads on the usually installed breaker? Around here, the maximum power is determined in the contract (and enforced by the "connected" electrical meter/breaker, which has a generous functioning margin).
Typical electrical breakers are not instantaneous devices and likely will not trip at .5% over rated load until they've been run near limit for extended periods of time. ----- Keith Stokes
Re: 10gb TTH Just a thought: Do they need 10gb? Or do they need multiple 1gb (e.g.) channels which might be cheaper and easier to provision? -- -Barry Shein Software Tool & Die | bzs@TheWorld.com | http://www.TheWorld.com Purveyors to the Trade | Voice: +1 617-STD-WRLD | 800-THE-WRLD The World: Since 1989 | A Public Information Utility | *oo*
In my house, for example, I only have a single fibre core coming into my house (single fibre pair for my neighbors who are on Active-E - I'm on GPON). If you're thinking of classic LAGs, not sure how we could do that on one physical link. Mark.
On 17 July 2018 at 15:41, Mike Hammett <nanog@ics-il.net> wrote:
10G to the home will be pointless as more and more people move away from Ethernet to WiFi where the noise floor for most installs prevents anyone from reaching 802.11n speeds, much less whatever alphabet soup comes later.
That's unless 802.11ad/.11ay gain in popularity. 60GHz offers lots of bandwidth and isn't particularly good at getting through brick walls, which might offer some relief to the noise floor problem. Dan
On 17/Jul/18 17:46, Daniel Ankers wrote:
That's unless 802.11ad/.11ay gain in popularity. 60GHz offers lots of bandwidth and isn't particularly good at getting through brick walls, which might offer some relief to the noise floor problem.
So if 60GHz can't get through brick walls, how does the home connect to the ISP PoP? Mark.
The problem cited is the last 100', not the last mile. For ISPs using 60 GHz for the last mile, a wire is run from the outdoor antenna to the indoor router. ----- Mike Hammett
On 17/Jul/18 18:07, Mike Hammett wrote:
The problem cited is the last 100', not the last mile.
For ISPs using 60 GHz for the last mile, a wire is run from the outdoor antenna to the indoor router.
Yeah, the question was rhetorical. I personally don't see ISPs using 60GHz to deliver to the home at scale. But I'll side with Saku and bet against myself for reliably predicting the future. Mark.
https://www.ignitenet.com/wireless-backhaul/
https://www.siklu.com/product/multihaul-series/
https://mikrotik.com/product/wireless_wire_dish
https://mikrotik.com/product/wap_60g_ap
----- Mike Hammett
On 18/Jul/18 14:11, Mike Hammett wrote:
https://www.ignitenet.com/wireless-backhaul/ https://www.siklu.com/product/multihaul-series/
https://mikrotik.com/product/wireless_wire_dish https://mikrotik.com/product/wap_60g_ap
There is a product for everything; doesn't mean it'll make a commercially viable business for whoever chooses to implement it. Mark.
I encourage my competitors to not implement those products in their networks. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com
60 GHz isn't particularly good at getting through a wet dream. I use it outdoors with a highly directional antenna (42 dBi of gain), and it's only going to be useful in the home for same-room communication. The only chance it has of being more than that is high-count beamforming antennas that take advantage of very precise reflections - dozens if not hundreds of elements for that directivity. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com
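A rough back-of-envelope sketch of why 60 GHz behaves this way (my own illustrative numbers, not anything cited in the thread): free-space path loss alone is about 21.6 dB worse at 60 GHz than at 5 GHz for the same distance, before counting wall attenuation or the oxygen absorption peak around 60 GHz.

    import math

    def fspl_db(distance_m: float, freq_hz: float) -> float:
        """Free-space path loss in dB (no walls, no oxygen absorption)."""
        return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

    for f_ghz in (5, 60):
        print(f"{f_ghz} GHz at 10 m: {fspl_db(10, f_ghz * 1e9):.1f} dB")
    # 5 GHz at 10 m: ~66.4 dB; 60 GHz at 10 m: ~88.0 dB.
    # The gap is 20*log10(60/5) ~= 21.6 dB at any distance.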
What's WiFi? Is that the "noise" that escapes from the copper cables? Switch to optical fibre, it does not emit RF noise ... --- The fact that there's a Highway to Hell but only a Stairway to Heaven says a lot about anticipated traffic volume.
-----Original Message----- From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Mike Hammett Sent: Tuesday, 17 July, 2018 08:42 To: Mark Tinka Cc: NANOG list Subject: Re: Proving Gig Speed
10G to the home will be pointless as more and more people move away from Ethernet to WiFi where the noise floor for most installs prevents anyone from reaching 802.11n speeds, much less whatever alphabet soup comes later.
I don't think iPhones have SFP cages. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com
On 7/18/18 7:26 PM, Mike Hammett wrote:
I don't think iPhones have SFP cages.
It's comeback time for IrDA ports!
That's absolutely a concern Mark, but most of the CPE vendors that support doing this are providing enough juice to keep up with their max forwarding/routing data rates. I don't see 10 Gbps residential Internet service being normal for quite a long time yet, even if the port itself is capable of 10Gbps. We have this issue today with commercial customers, but it's generally not as much of a problem because the commercial CPE get their usage graphed and have more capabilities for testing. Scott Helms On Tue, Jul 17, 2018 at 8:11 AM Mark Tinka <mark.tinka@seacom.mu> wrote:
On 17/Jul/18 14:07, K. Scott Helms wrote:
That's absolutely true, but I don't see any real alternatives in some cases. I've actually built automated testing into some of the CPE we've deployed and that works pretty well for some models but other devices don't seem to be able to fill a ~500 mbps link.
So what are you going to do when 10Gbps FTTH into the home becomes the norm?
Perhaps laptops and servers of the time won't even see this as a rounding error :-\...
Mark.
On 18/Jul/18 14:00, K. Scott Helms wrote:
That's absolutely a concern Mark, but most of the CPE vendors that support doing this are providing enough juice to keep up with their max forwarding/routing data rates. I don't see 10 Gbps in residential Internet service being normal for quite a long time off even if the port itself is capable of 10Gbps. We have this issue today with commercial customers, but it's generally not as a much of a problem because the commercial CPE get their usage graphed and the commercial CPE have more capabilities for testing.
I suppose the point I was trying to make is: when does it stop being feasible to test each and every piece of bandwidth you deliver to a customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or 3.2Gbps, or 5.1Gbps... basically, the rabbit hole.

Like Saku, I am more interested in other fundamental metrics that could impact throughput, such as latency, packet loss and jitter. Bandwidth, itself, is easy to measure with your choice of SNMP poller + 5 minutes. But when you're trying to explain to a simple customer buying 100Mbps that a break in your Skype video cannot be diagnosed with a throughput speed test, they don't/won't get it.

In Africa, for example, customers in only one of our markets are so obsessed with speed tests. But not to speed test servers that are in-country... they want to test servers that sit in Europe, North America, South America and Asia-Pac. With the latency averaging between 140ms - 400ms across all of those regions from source, the amount of energy spent explaining to customers that there is no way you can saturate your delivered capacity beyond a couple of Mbps using Ookla and friends is energy I could spend drinking wine and having a medium-rare steak, instead.

For us, at least, aside from going on a mass education drive in this particular market, the ultimate solution is just getting all that content localized in-country or in-region. Once that latency comes down and the resources are available locally, the whole speed test debacle will easily fall away, because the root of these speed test complaints is simply how physically far away the content is. Is this an easy task - hell no; but slamming your head against a wall over and over is no fun either.

Mark.
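To put rough numbers behind those two points - a minimal sketch (my own illustrative figures, not from the thread): the classic single-flow ceiling of TCP window / RTT explains the "couple of Mbps" at inter-continental latencies, and a pair of SNMP ifHCInOctets samples is all the 5-minute bandwidth measurement needs. The 64 KiB window and the counter values below are assumptions, not measured data.

    def tcp_ceiling_mbps(window_bytes: int, rtt_seconds: float) -> float:
        """Upper bound on a single TCP flow: window / RTT, in Mbps."""
        return window_bytes * 8 / rtt_seconds / 1e6

    def snmp_avg_mbps(octets_t0: int, octets_t1: int, interval_seconds: float) -> float:
        """Average rate between two ifHCInOctets samples (ignores counter wrap)."""
        return (octets_t1 - octets_t0) * 8 / interval_seconds / 1e6

    # A conservative 64 KiB effective window at the latencies mentioned above:
    for rtt_ms in (140, 250, 400):
        print(f"{rtt_ms} ms RTT -> {tcp_ceiling_mbps(64 * 1024, rtt_ms / 1000):.1f} Mbps per flow")
    # 140 ms -> ~3.7 Mbps, 400 ms -> ~1.3 Mbps per flow.

    # Two counter samples taken 5 minutes (300 s) apart:
    print(f"{snmp_avg_mbps(1_000_000_000, 8_500_000_000, 300):.0f} Mbps average")  # ~200 Mbps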
Agreed, and it's one of the fundamental problems that a speed test is (and can only) measure the speeds from point A to point B (often both inside the service provider's network) when the customer is concerned with traffic to and from point C off in someone else's network altogether. It's one of the reasons that I think we have to get more comfortable and more collaborative with the CDN providers as well as the large sources of traffic. Netflix, Youtube, and I'm sure others have their own consumer facing performance testing that is _much_ more applicable to most consumers as compared to the "normal" technician test and measurement approach or even the service assurance that you get from normal performance monitoring. What I'd really like to see is a way to measure network performance from the CO/head end/PoP and also get consumer level reporting from these kinds of services. If Google/Netflix/Amazon Video/$others would get on board with this idea it would make all our lives simpler. Providing individual users stats is nice, but if these guys really want to improve service it would be great to get aggregate reporting by ASN. You can get a rough idea by looking at your overall graph from Google, but it's lacking a lot of detail and there's no simple way to compare that to a head end/CO test versus specific end users. https://www.google.com/get/videoqualityreport/ https://fast.com/# On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka <mark.tinka@seacom.mu> wrote:
On 18/Jul/18 14:00, K. Scott Helms wrote:
That's absolutely a concern Mark, but most of the CPE vendors that support doing this are providing enough juice to keep up with their max forwarding/routing data rates. I don't see 10 Gbps in residential Internet service being normal for quite a long time off even if the port itself is capable of 10Gbps. We have this issue today with commercial customers, but it's generally not as a much of a problem because the commercial CPE get their usage graphed and the commercial CPE have more capabilities for testing.
I suppose the point I was trying to make is when does it stop being feasible to test each and every piece of bandwidth you deliver to a customer? It may very well not be 10Gbps... perhaps it's 2Gbps, or 3.2Gbps, or 5.1Gbps... basically, the rabbit hole.
Like Saku, I am more interested in other fundamental metrics that could impact throughput such as latency, packet loss and jitter. Bandwidth, itself, is easy to measure with your choice of SNMP poller + 5 minutes. But when you're trying to explain to a simple customer buying 100Mbps that a break in your Skype video cannot be diagnosed with a throughput speed test, they don't/won't get it.
In Africa, for example, customers in only one of our markets are so obsessed with speed tests. But not to speed test servers that are in-country... they want to test servers that sit in Europe, North America, South America and Asia-Pac. With the latency averaging between 140ms - 400ms across all of those regions from source, the amount of energy spent explaining to customers that there is no way you can saturate your delivered capacity beyond a couple of Mbps using Ookla and friends is energy I could spend drinking wine and having a medium-rare steak, instead.
For us, at least, aside from going on a mass education drive in this particular market, the ultimate solution is just getting all that content localized in-country or in-region. Once that latency comes down and the resources are available locally, the whole speed test debacle will easily fall away, because the source of these speed tests is simply how physically far the content is. Is this an easy task - hell no; but slamming your head against a wall over and over is no fun either.
Mark.
On 18/Jul/18 14:40, K. Scott Helms wrote:
Agreed, and it's one of the fundamental problems that a speed test is (and can only) measure the speeds from point A to point B (often both inside the service provider's network) when the customer is concerned with traffic to and from point C off in someone else's network altogether.
In our market, most customers that put all their faith and slippers in Ookla have no qualms about choosing a random speed test server on the Ookla network, with no regard as to whether that server is on-net or off-net for their ISP, how that server is maintained, how much bandwidth capacity it has, how it was deployed, its hardware sources, how busy it is, how much of its bandwidth it can actually exhaust, how traffic routes to/from it, etc. Whatever the result, the speed test server or the Ookla network is NEVER at fault. So now, an ISP in the African market has to explain why a speed test server on some unknown network in Feira de Santana is claiming that the customer is not getting what they paid for? Then again, we all need reasons to wake up in the morning :-)...
It's one of the reasons that I think we have to get more comfortable and more collaborative with the CDN providers as well as the large sources of traffic. Netflix, Youtube, and I'm sure others have their own consumer facing performance testing that is _much_ more applicable to most consumers as compared to the "normal" technician test and measurement approach or even the service assurance that you get from normal performance monitoring. What I'd really like to see is a way to measure network performance from the CO/head end/PoP and also get consumer level reporting from these kinds of services. If Google/Netflix/Amazon Video/$others would get on board with this idea it would make all our lives simpler.
Providing individual users stats is nice, but if these guys really want to improve service it would be great to get aggregate reporting by ASN. You can get a rough idea by looking at your overall graph from Google, but it's lacking a lot of detail and there's no simple way to compare that to a head end/CO test versus specific end users.
https://www.google.com/get/videoqualityreport/ https://fast.com/#
Personally, I don't think the content networks and CDN's should focus on developing yet-another-speed-test-server, because then they are just pushing the problem back to the ISP. I believe they should better spend their time:

- Delivering as-near-to 100% of all of their services to all regions, cities, data centres as they possibly can.

- Providing tools for network operators as well as their consumers that are biased toward the expected quality of experience, rather than how fast their bandwidth is. A 5Gbps link full of packet loss does not a service make - but what does that translate into for the type of service the content network or CDN is delivering?

Mark.
On Wed, Jul 18, 2018 at 9:01 AM Mark Tinka <mark.tinka@seacom.mu> wrote:
Personally, I don't think the content networks and CDN's should focus on developing yet-another-speed-test-server, because then they are just pushing the problem back to the ISP. I believe they should better spend their time:
- Delivering as-near-to 100% of all of their services to all regions, cities, data centres as they possibly can.
- Providing tools for network operators as well as their consumers that are biased toward the expected quality of experience, rather than how fast their bandwidth is. A 5Gbps link full of packet loss does not a service make - but what does that translate into for the type of service the content network or CDN is delivering?
Mark.
That's why I vastly prefer stats from the actual CDNs and content providers that aren't generated by speed tests. They're generated by measuring the actual performance of the service they deliver. Now, that won't prevent burden shifting, but it does get rid of a lot of the problems you bring up. Youtube, for example, wouldn't rate a video stream as good if the packet loss were high, because it's actually looking at the bit rate of successfully delivered encapsulated video frames. I _think_ the same is true of Netflix, though they also offer a real-time test as well, which frankly isn't as helpful for monitoring, but getting a quick test to the Netflix node you'd normally use can be nice in some cases.
Fast.com will pull from multiple nodes at the same time. I think there were four streams on the one I looked at, two to the on-net OCA and two that went off-net elsewhere. One of those off-net was in the same country, but nowhere near. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com
On Wed, Jul 18, 2018, at 15:45, Mike Hammett wrote:
Fast.com will pull from multiple nodes at the same time. I think there
Here in Europe, fast.com has consistently proven to be 100% UNreliable, especially on high-speed FTTH. Ookla and nPerf gave better results for high-speed connections 100% of the time.
* nanog@radu-adrian.feurdean.net (Radu-Adrian Feurdean) [Sun 22 Jul 2018, 13:27 CEST]:
On Wed, Jul 18, 2018, at 15:45, Mike Hammett wrote:
Fast.com will pull from multiple nodes at the same time. I think there
Here in Europe, fast.com consistently proven to be 100% UNreliable, especially on high-speed FTTH. OOKla and nPerf gave better results for high-speed connections 100% of the time.
It has a tendency to download over IPv6 from New York, which, while still fast enough for HD movie streaming, doesn't give a good indication of line speed. (I have native IPv6 here, no tunnels.) -- Niels.
On 18/Jul/18 15:41, K. Scott Helms wrote:
That's why I vastly prefer stats from the actual CDNs and content providers that aren't generated by speed tests. They're generated by measuring the actual performance of the service they deliver. Now, that won't prevent burden shifting, but it does get rid of a lot of the problems you bring up. Youtube for example wouldn't rate a video stream as good if the packet loss were high because it's actually looking at the bit rate of successfully delivered encapsulated video frames I _think_ the same is true of Netflix though they also offer a real time test as well which frankly isn't as helpful for monitoring but getting a quick test to the Netflix node you'd normally use can be nice in some cases.
Agreed. In our market, we've generally not struggled with users and their experience for services hosted locally in-country. So in addition to providing good tools for operators and eyeballs to measure experience, the biggest win will come from the content folk and CDN's getting their services inside our market. Mark.
Mark, I am glad I don't have your challenges :) What's the Netflix (or other substantial OTT video provider) situation for direct peers? It's pretty easy and cheap for North American operators to get settlement free peering to Netflix, Amazon, Youtube and others but I don't know what that looks like in Africa. Scott Helms On Wed, Jul 18, 2018 at 10:00 AM Mark Tinka <mark.tinka@seacom.mu> wrote:
On 18/Jul/18 15:41, K. Scott Helms wrote:
That's why I vastly prefer stats from the actual CDNs and content providers that aren't generated by speed tests. They're generated by measuring the actual performance of the service they deliver. Now, that won't prevent burden shifting, but it does get rid of a lot of the problems you bring up. Youtube for example wouldn't rate a video stream as good if the packet loss were high because it's actually looking at the bit rate of successfully delivered encapsulated video frames I _think_ the same is true of Netflix though they also offer a real time test as well which frankly isn't as helpful for monitoring but getting a quick test to the Netflix node you'd normally use can be nice in some cases.
Agreed.
In our market, we've generally not struggled with users and their experience for services hosted locally in-country.
So in addition to providing good tools for operators and eyeballs to measure experience, the biggest win will come from the content folk and CDN's getting their services inside our market.
Mark.
On 18/Jul/18 16:22, K. Scott Helms wrote:
Mark,
I am glad I don't have your challenges :)
What's the Netflix (or other substantial OTT video provider) situation for direct peers? It's pretty easy and cheap for North American operators to get settlement free peering to Netflix, Amazon, Youtube and others but I don't know what that looks like in Africa.
Peering isn't the problem. Proximity to content is. Netflix, Google, Akamai and a few others have a presence in Africa already. So those aren't the problem (although for those currently in Africa, not all of the services they offer globally are available here - just a few). A lot of user traffic is not video streaming, so that's where a lot of work is required. In particular, cloud and gaming operators are the ones causing real pain. All the peering in the world doesn't help if the latency is well over 100ms. That's what we need to fix. Mark.
Peering isn't the problem. Proximity to content is.
Netflix, Google, Akamai and a few others have presence in Africa already. So those aren't the problem (although for those currently in Africa, not all of the services they offer globally are available here - just a few).
A lot of user traffic is not video streaming, so that's where a lot of work is required. In particular, cloud and gaming operators are the ones causing real pain.
All the peering in the world doesn't help if the latency is well over 100ms+. That's what we need to fix.
Mark.
Mark, I agree completely. I'm working on a paper right now for a conference (waiting on Wireshark to finish with my complex filter at the moment) that shows what's happening with gaming traffic. What's really interesting is how gaming is changing, and within the next few years I expect a lot of games to move into the remote rendering world. I've tested several and the numbers are pretty substantial. You need to have <=30 ms of latency to sustain 1080p gaming, and obviously jitter and packet loss are also problematic. The traffic is also pretty impressive, with spikes of over 50 Mbps down and sustained averages over 21 Mbps. Upstream traffic isn't any more of an issue than "normal" online gaming. Nvidia, Google, and a host of startups are all in the mix, with a lot of people predicting Sony and Microsoft will be (or are already) working on pure cloud consoles. Scott Helms
The game companies (and render farms) also need to work on as extensive peering as the top CDNs have been doing. They're getting better, but not quite there yet. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com
On 18/Jul/18 17:00, Mike Hammett wrote:
The game companies (and render farms) also need to work on as extensive peering as the top CDNs have been doing. They're getting better, but not quite there yet.
I'm not sure about North America, Asia-Pac or South America, but in Europe, the gaming folk actually peer very well. The problem for us is that Europe is anywhere from 112ms - 170ms away, depending on which side of the continent you are on. And while we peer extensively with them via our network in Europe, that latency will simply not go away. They need to come into the market and satisfy their demand. Mark.
* mark.tinka@seacom.mu (Mark Tinka) [Thu 19 Jul 2018, 07:08 CEST]:
I'm not sure about North America, Asia-Pac or South America, but in Europe, the gaming folk actually peer very well.
The problem for us is that Europe is anywhere from 112ms - 170ms away, depending on which side of the continent you are on.
And while we peer extensively with them via our network in Europe, that latency will simply not go away. They need to come into market and satisfy their demand.
That will happen as soon as it's affordable for them to do so - which requires an ecosystem of affordable and reliable independent IP transit/transport and colocation to exist. -- Niels.
On 19/Jul/18 17:06, Niels Bakker wrote:
That will happen as soon as it's affordable for them to do so - which requires an ecosystem of affordable and reliable independent IP transit/transport and colocation to exist.
Agreed, but as experience has shown, those aren't the only considerations they have to make. You'd be amazed how often delays in deployment are about a lack of resources on their side to deploy at all the sites on their agenda. Mark.
On 18/Jul/18 16:58, K. Scott Helms wrote:
Mark,
I agree completely, I'm working on a paper right now for a conference (waiting on Wireshark to finish with my complex filter at the moment) that shows what's happening with gaming traffic. What's really interesting is how gaming is changing and within the next few years I do expect a lot of games to move into the remote rendering world. I've tested several and the numbers are pretty substantial. You need to have <=30 ms of latency to sustain 1080p gaming and obviously jitter and packet loss are also problematic. The traffic is also pretty impressive with spikes of over 50 mbps down and sustained averages over 21 mbps. Upstream traffic isn't any more of an issue than "normal" online gaming. Nvidia, Google, and a host of start ups are all in the mix with a lot of people predicting Sony and Microsoft will be (or are already) working on pure cloud consoles.
And what we need is for them to distribute all these remote rendering farms evenly around the world, to ensure that 30ms peak latency is achievable. Mark.
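For a rough sense of what that 30ms budget implies - a back-of-envelope sketch (my own numbers, not from the thread), using the usual ~2/3 c propagation speed in fibre and ignoring serialization, queuing and access latency:

    C = 299_792_458       # speed of light in vacuum, m/s
    FIBRE_FACTOR = 2 / 3  # typical velocity factor of optical fibre

    def max_fibre_km(rtt_budget_ms: float) -> float:
        """Longest fibre path whose round-trip propagation alone fits the RTT budget."""
        one_way_s = (rtt_budget_ms / 1000) / 2
        return one_way_s * C * FIBRE_FACTOR / 1000

    print(f"30 ms RTT budget  -> <= {max_fibre_km(30):.0f} km of fibre path")   # ~3000 km
    print(f"100 ms RTT budget -> <= {max_fibre_km(100):.0f} km of fibre path")  # ~10000 km

In other words, a render farm has to sit within roughly 3000 km of fibre path of the player before any other latency source is even counted, which is why even geographic distribution matters.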
On 19/07/18 00:27, Mark Tinka wrote:
All the peering in the world doesn't help if the latency is well over 100ms+. That's what we need to fix.
Living in Australia this is an everyday experience, especially for content served out of Europe (or, for that matter, Africa). TCP & below are rarely the biggest problem these days (at least with TCP BBR & friends); far too often, applications, web services etc. are simply never tested in an environment with any significant latency. While some issues may exist for static content loading, for which a CDN can be helpful, that's not helpful for application traffic.
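One cheap way to do the kind of latency testing Julien is describing - a minimal sketch, assuming a Linux test host with iproute2 and root; the interface name and the 300ms / 30ms / 0.5% impairment figures are placeholders, not numbers from the thread:

    import subprocess

    DEV = "eth0"  # assumed test interface

    def add_impairment(delay_ms=300, jitter_ms=30, loss_pct=0.5):
        """Add netem delay/jitter/loss on DEV so the app is exercised at long RTTs."""
        subprocess.run(
            ["tc", "qdisc", "add", "dev", DEV, "root", "netem",
             "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
             "loss", f"{loss_pct}%"],
            check=True)

    def clear_impairment():
        subprocess.run(["tc", "qdisc", "del", "dev", DEV, "root"], check=True)

    if __name__ == "__main__":
        add_impairment()
        try:
            pass  # run the application or its test suite against the impaired link here
        finally:
            clear_impairment()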
On 18/Jul/18 17:20, Julien Goodwin wrote:
Living in Australia this is an every day experience, especially for content served out of Europe (or for that matter, Africa).
TCP & below are rarely the biggest problem these days (at least with TCP-BBR & friends), far too often applications, web services etc. are simply never tested in an environment with any significant latency.
While some issues may exist for static content loading for which a CDN can be helpful, that's not helpful for application traffic.
Yip. Mark.
On 19 July 2018 at 07:06, Mark Tinka <mark.tinka@seacom.mu> wrote:
On 18/Jul/18 17:20, Julien Goodwin wrote:
Living in Australia this is an every day experience, especially for content served out of Europe (or for that matter, Africa).
TCP & below are rarely the biggest problem these days (at least with TCP-BBR & friends), far too often applications, web services etc. are simply never tested in an environment with any significant latency.
While some issues may exist for static content loading for which a CDN can be helpful, that's not helpful for application traffic.
Yip.
Mark.
Sorry about that. I feel bad as a webmaster. Most of us on the web are creating websites that are not documents to be downloaded and viewed, but applications that require many small parts to work together.

Most VRML examples from 1997 are unavailable because hosts moved, directories changed names, whole websites were redone with new technologies. Only 1% of that exists in a readable format. But the current web is much more delicate, and will break more and sooner than that.

Perhaps something can be done about it. Chrome already includes an option to test websites emulating "Slow 3G" that webmasters may want to use. I suggest a header or HTML meta tag with which a document disables external js scripts, or limits them to a whitelist of hosts: <meta http-equiv="script-whitelist" content="None">. So if you are a Vodafone customer and you are reading a political document, Vodafone can inject a javascript script into the page, but it will not run because of the presence of <meta http-equiv="script-whitelist" content="None">. Vodafone could still go further, alter the HTML of the page to remove this meta tag and inject their script.

Get webmasters into the idea of making websites that are documents, that require no execution of scripts, so they will still work in 2048, and will work in poor network conditions where a website that loads 47 different js files may break.

tl;dr: the web is evolving into a network of applications, instead of documents. Documents can't "break" easily. Programs may break completely even with tiny changes. Maybe getting webmasters on board with a bias in favor of documents could do us all a favour. -- -- ℱin del ℳensaje.
On 19/Jul/18 17:08, Tei wrote:
tl:dr: the web is evolving into a network of applications, instead of documents. Documents can't "break" easily. Programs may break completelly even to tiny changes. Maybe getting webmasters on board of biasing in favor of documents could do us all a favour.
Yes, that would be great, but I recall there was someone asking about deploying a WISP in rural America several weeks ago and how to deal with these "busy" web sites of 2018. And your suggestion was one of the proposals NANOG suggested to the OP. While it'd be good for webmasters to optimize the content they create, I appreciate that as the Internet continues to develop, web sites and content will only get a lot more dynamic and complicated, to appeal to the human desire for "pretty nice shiny things". Certainly, optimizing for mobile devices is better than for laptops/desktops for obvious reasons, but as those mobile devices keep adding power, that will slowly release the constraints webmasters have when developing for said platforms. So in terms of effort expended, I'd be losing if I tried to get content creators to simplify their content. Rather, I'll hunt down opportunities that encourage content to move closer to our markets. Mark.
Mark already knows this, but for the benefit of the North American network operators on the list, **where** in Africa makes a huge difference. Certain submarine cables reach certain coastal cities at very different transport prices, depending on location, what sort of organizational structure of cable it is, age of cable, etc. For example Sierra Leone and Liberia are logically network stubs, suburbs of London, UK. To the best of my knowledge the ISPs and mobile network operators there greatly prefer buying transport capacity to reach London rather than the other direction to Accra and Lagos. I do not know of any SL or LR ISPs which have small POPs with IP edge routers in Accra or Lagos, and definitely not in Cape Town. Whatever circuits exist for voice traffic that go to Lagos are much smaller. On Wed, Jul 18, 2018 at 7:27 AM, Mark Tinka <mark.tinka@seacom.mu> wrote:
On 18/Jul/18 16:22, K. Scott Helms wrote:
Mark,
I am glad I don't have your challenges :)
What's the Netflix (or other substantial OTT video provider) situation for direct peers? It's pretty easy and cheap for North American operators to get settlement free peering to Netflix, Amazon, Youtube and others but I don't know what that looks like in Africa.
Peering isn't the problem. Proximity to content is.
Netflix, Google, Akamai and a few others have presence in Africa already. So those aren't the problem (although for those currently in Africa, not all of the services they offer globally are available here - just a few).
A lot of user traffic is not video streaming, so that's where a lot of work is required. In particular, cloud and gaming operators are the ones causing real pain.
All the peering in the world doesn't help if the latency is well over 100ms+. That's what we need to fix.
Mark.
On 19/Jul/18 17:29, Eric Kuhnke wrote:
Mark already knows this, but for the benefit of the North American network operators on the list, **where** in Africa makes a huge difference. Certain submarine cables reach certain coastal cities at very different transport prices, depending on location, what sort of organizational structure of cable it is, age of cable, etc.
For example Sierra Leone and Liberia are logically network stubs, suburbs of London, UK. To the best of my knowledge the ISPs and mobile network operators there greatly prefer buying transport capacity to reach London rather than the other direction to Accra and Lagos. I do not know of any SL or LR ISPs which have small POPs with IP edge routers in Accra or Lagos, and definitely not in Cape Town. Whatever circuits exist for voice traffic that go to Lagos are much smaller.

You're right, Eric.

Unfortunately, West Africa, at the moment, even with the number of submarine cables available in the region, is lagging behind Eastern & Southern Africa, particularly since 2009. There are a number of reasons for this, but fundamentally, the East & South have been more progressive around openness and diversification of infrastructure deployment, operation and regional regulatory synergies than the West. Which is not to say that there isn't work going on in West Africa... there certainly is. It's just taking a little longer than it ought to.

In my region, for example (which covers East & South), we are working extra hard to de-legitimise Europe as a clearing house for our traffic, and also as a source for off-continent traffic. We are fortunate to have a number of global actors participating in this transition, and with it, a whole ecosystem is coming alive.

West Africa needs to consider its own methods of attaining this goal, so it does not look like a suburb of London from a connectivity standpoint, sooner rather than later. To their benefit, there are a couple of things going on that may very well accelerate this.

Mark.
Check your Google portal for more information as to what Google can do with BGP Communities related to reporting. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com
Mike, What portal would that be? Do you have a URL? On Wed, Jul 18, 2018 at 9:25 AM Mike Hammett <nanog@ics-il.net> wrote:
Check your Google portal for more information as to what Google can do with BGP Communities related to reporting.
https://isp.google.com ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com
That seems only to be for direct peers, Mike.
https://isp.google.com Though I think this is only for when you have peering, someone can correct me if that's incorrect. ns
Check your Google portal for more information as to what Google can do with BGP Communities related to reporting.
Correct. I figured most eyeballs had Google peering or were looking to get it. I was talking with CVF at ChiNOG about some of the shortcomings of the Google ISP Portal. He saw value in making the portal available to all ISPs. I don't know when (if) that will be available. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com ----- Original Message ----- From: "Luke Guillory" <lguillory@reservetele.com> To: "K. Scott Helms" <kscott.helms@gmail.com>, "Mike Hammett" <nanog@ics-il.net> Cc: "NANOG list" <nanog@nanog.org> Sent: Wednesday, July 18, 2018 8:48:32 AM Subject: RE: Proving Gig Speed https://isp.google.com Thought I think this is only for when you have peering, someone can correct me if that's incorrect. ns -----Original Message----- From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of K. Scott Helms Sent: Wednesday, July 18, 2018 8:45 AM To: Mike Hammett Cc: NANOG list Subject: Re: Proving Gig Speed Mike, What portal would that be? Do you have a URL? On Wed, Jul 18, 2018 at 9:25 AM Mike Hammett <nanog@ics-il.net> wrote:
On 18/Jul/18 15:48, Luke Guillory wrote:
Thought I think this is only for when you have peering, someone can correct me if that's incorrect.
And also if you operate a GGC (which is very likely if you're peering). Mark.
On Wed, 18 Jul 2018 08:24:15 -0500, Mike Hammett said:
Check your Google portal for more information as to what Google can do with BGP Communities related to reporting.
For a horrifying moment, I misread this as Google surfacing performance stats via a BGP stream by encoding stat_name:value as community:value /me goes searching for mass quantities of caffeine....
For a horrifying moment, I misread this as Google surfacing performance stats via a BGP stream by encoding stat_name:value as community:value
/me goes searching for mass quantities of caffeine....
Because you'll be spending the night writing up that Internet-Draft? :-) -- Simon.
More speedtest and quality reporting sites\services (including internal to big content) seem more about blaming the ISP than providing the ISP usable information to fix it. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com ----- Original Message ----- From: "K. Scott Helms" <kscott.helms@gmail.com> To: "mark tinka" <mark.tinka@seacom.mu> Cc: "NANOG list" <nanog@nanog.org> Sent: Wednesday, July 18, 2018 7:40:31 AM Subject: Re: Proving Gig Speed Agreed, and it's one of the fundamental problems that a speed test is (and can only) measure the speeds from point A to point B (often both inside the service provider's network) when the customer is concerned with traffic to and from point C off in someone else's network altogether. It's one of the reasons that I think we have to get more comfortable and more collaborative with the CDN providers as well as the large sources of traffic. Netflix, Youtube, and I'm sure others have their own consumer facing performance testing that is _much_ more applicable to most consumers as compared to the "normal" technician test and measurement approach or even the service assurance that you get from normal performance monitoring. What I'd really like to see is a way to measure network performance from the CO/head end/PoP and also get consumer level reporting from these kinds of services. If Google/Netflix/Amazon Video/$others would get on board with this idea it would make all our lives simpler. Providing individual users stats is nice, but if these guys really want to improve service it would be great to get aggregate reporting by ASN. You can get a rough idea by looking at your overall graph from Google, but it's lacking a lot of detail and there's no simple way to compare that to a head end/CO test versus specific end users. https://www.google.com/get/videoqualityreport/ https://fast.com/# On Wed, Jul 18, 2018 at 8:27 AM Mark Tinka <mark.tinka@seacom.mu> wrote:
On 18/Jul/18 15:24, Mike Hammett wrote:
More speedtest and quality reporting sites\services (including internal to big content) seem more about blaming the ISP than providing the ISP usable information to fix it.
Agreed. IIRC, this all began with http://www.dslreports.com/speedtest (I can't think of another speed test resource at the time) back in the late '90's... of course, it assumed all ISP's and their eyeballs were in North America. It's been downhill ever since. Mark.
On 16/Jul/18 20:17, Matt Erculiani wrote:
We use Iperf3 for customers that complain about throughput, it's relatively low overhead compared to the Ookla HTML5 client. Same scenario as you, we have the tech hook up their laptop to the customer's drop and perform testing. I suspect your antivirus may be attempting to perform real-time inspection on the http(s) traffic, which would crush the little laptop CPU for sure.
But iPerf doesn't paint pretty pictures at the end of the test that I can brag to my friends with :-)... Mark.
Hi! Here I have http://www.speedtest.net/result/7475546550 from my notebook right now. It is an i5-2540M CPU. First of all, the NIC is much more important than the CPU. An Intel NIC can give 1Gbps easily, while Realtek or Broadcom will probably never give you more than ~300mbps. Linux is several times faster than Windows on the same hardware config. Speedtest is very dependent on the browser, so try different ones and find which works best with your configuration as well. Sometimes you will need to tune TCP stack options to get >100mbps in one TCP session. Speedtest usually shows good results on download, but for some reason shows a slow upload speed. Nowadays it is better, but several years ago I couldn't get more than 100mbps upload in the same configuration of notebook and network I have now. Real uploads were at gig speeds. But the best is to use IPERF to do measurements. It is really accurate. 16.07.18 20:58, Chris Gross wrote:
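On the point above about tuning TCP stack options for a single session: one stream can only carry about one window of data per round trip, so to hold gig speed the socket buffers (and the kernel's global limits) have to cover the path's bandwidth-delay product. A minimal Python sketch of the per-socket side, with made-up figures (1 Gb/s at 20 ms RTT) - an illustration, not a tuning guide:

    import socket

    # Bandwidth-delay product for 1 Gb/s at 20 ms RTT: ~2.5 MB (example figures only).
    bdp = int(1e9 / 8 * 0.020)

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request send/receive buffers at least one BDP deep. Note that setting SO_RCVBUF
    # explicitly disables Linux receive-buffer autotuning for this socket, and
    # net.core.rmem_max / net.core.wmem_max still cap what the kernel will grant.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bdp)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, bdp)
    print("requested", bdp, "bytes, kernel granted",
          s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))

If the system-wide limits are small the request above just gets clamped, which is one common reason a single stream tops out well below the line rate on a long fat path.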
Ookla does have a client that you can install in various OSes to remove browser issues. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com ----- Original Message ----- From: "Chris Gross" <CGross@ninestarconnect.com> To: "North American Network Operators' Group" <nanog@nanog.org> Sent: Monday, July 16, 2018 12:58:20 PM Subject: Proving Gig Speed
Second the recommendation for the downloadable ookla speedtest desktop app. -Ben
On Jul 16, 2018, at 11:30 AM, Mike Hammett <nanog@ics-il.net> wrote:
Ookla does have a client that you can install in various OSes to remove browser issues.
----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com
Midwest-IX http://www.midwest-ix.com
I use my Lenovo ThinkPad or any "decent" client machine and run iperf to prove the connectivity. Of course, client switch quality or a firewall can be an issue. On 07/16/2018 01:58 PM, Chris Gross wrote:
-- Morgan A. Miskell CaroNet Data Centers 704-643-8330 x206
I recently talked at the IRTF on this subject and followed up with a blog post at https://blog.apnic.net/2018/06/21/measurement-challenges-in-the-gigabit-era/. There's also an open source speed test project you may want to consider at https://github.com/Comcast/Speed-testJS. Jason On 7/16/18, 2:00 PM, "NANOG on behalf of Chris Gross" <nanog-bounces@nanog.org on behalf of CGross@ninestarconnect.com> wrote:
Thanks, Jason. While I might have idle curiosity of how well my link performs when I first get it, beyond that the only time I care is when I or somebody else in the house starts screaming "THE INTERTOOBZ R SLOWZ!@!". I just had this happen to me the other night as I was trying to watch possibly the worst movie ever on Amazon Prime. Maybe because I've been around a long time, my first suspicion was that something at home was slagging the ISP hop. This is often the case, but it is *maddeningly* slow to diagnose -- my popcorn was getting stale. Naively, I may have thought to hook up to one of the speed testers, but it would only show what was obvious from ping times, etc: lots of drops. So where was the problem? I had to manually go around and start disconnecting things. Even after I thought I had found the perp several times, my popcorn's freshness suffered. And in the end, I never did triangulate out where the problem was... and by morning it had magically fixed itself. My popcorn, on the other hand, gave its all. As we get more and more stuff on our home networks, the probability for crappy software, infected devices, piggy updates, etc multiplies. I'm sure this isn't news to $CORPRO network managers, but at home we quite literally have nothing to help us figure out and remedy these kinds of problems. Or if it turns out to *not* be our problem, that we have some reassurance when we decide to call the support desk and be put through IVR maze hell only to find out it was a local problem after all. cheers, Mike On 7/16/18 1:27 PM, Livingood, Jason wrote:
I recently talked at the IRTF on this subject and followed up with a blog post at https://blog.apnic.net/2018/06/21/measurement-challenges-in-the-gigabit-era/. There's also an open source speed test project you may want to consider at https://github.com/Comcast/Speed-testJS.
Jason
On 16/Jul/18 23:19, Michael Thomas wrote:
As we get more and more stuff on our home networks, the probability for crappy software, infected devices, piggy updates, etc multiplies. I'm sure this isn't news to $CORPRO network managers, but at home we quite literally have nothing to help us figure out and remedy these kinds of problems. Or if it turns out to *not* be our problem, that we have some reassurance when we decide to call the support desk and be put through IVR maze hell only to find out it was a local problem after all.
Well, at least you had the foresight not to "speed test" your way you into keeping your popcorn fresh :-). Mark.
On Mon, Jul 16, 2018 at 2:00 PM Chris Gross <CGross@ninestarconnect.com> wrote:
My practice is to use iperf with packet capture on both sides. The packet capture can then be analyzed for accurate per-second, or less, throughput, re-transmit rates, etc. This was implemented in a corporate network in several ways including dedicated servers (that also did other monitoring), and bootable CDs or USB sticks that a user in a small office could run on a standard desktop. Many interesting issues were discovered with this technique, and a fair number of perceived issues were debunked. Here is a wrapper to run iperf + tcpdump on each side of a connection (it could use some automation): https://github.com/meekj/perl-packet-tools/blob/master/run_iperf I originally did the analysis in Perl, but that can be fairly slow when processing 30 seconds of packets on a saturated GigE link. If anyone is interested there is now a C++ version along with analysis code in R at: https://github.com/meekj/iperfsum That version currently has only one second resolution. I have an R interface to libpcap files that could be used for analysis at any time resolution: https://github.com/meekj/libpcapR I have a plan to implement the complete test environment in a Docker container at some point. I also have a collection of small, mostly low-cost, computers that I plan to benchmark for network throughput and data analysis time. Some of the tiny computers can saturate a GigE link but are very slow processing the data. Jon
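For anyone who wants the same per-second view without the Perl/C++/R tooling above, a rough Python sketch of the idea, assuming the dpkt library is available (the capture file name is hypothetical, it expects a classic pcap taken with a full snap length, and it does no per-flow filtering, so it counts every frame in the capture):

    from collections import defaultdict
    import dpkt

    bytes_per_second = defaultdict(int)

    # Bin captured frame lengths into one-second buckets keyed by the pcap timestamp.
    with open("iperf-run.pcap", "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            bytes_per_second[int(ts)] += len(buf)

    for second in sorted(bytes_per_second):
        print(second, round(bytes_per_second[second] * 8 / 1e6, 1), "Mb/s")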
On 16/Jul/18 22:34, Jon Meek wrote:
My practice is to use iperf with packet capture on both sides. The packet capture can then be analyzed for accurate per-second, or less, throughput, re-transmit rates, etc. This was implemented in a corporate network in several ways including dedicated servers (that also did other monitoring), and bootable CDs or USB sticks that a user in a small office could run on a standard desktop. Many interesting issues were discovered with this technique, and a fair number of perceived issues were debunked.
If you are doing this for yourself, as a network engineer, this is great. But simple Joe at home doesn't care about all this fancy analysis. Does he get a pretty picture saying "Yay", or "Nay", that's all? Mark.
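That "Yay or Nay" picture is easy enough to bolt onto iperf3's JSON output. A minimal sketch, assuming iperf3 is installed on both ends; the server name and threshold are made up, and the field names are the ones recent iperf3 TCP tests emit, so treat those as assumptions:

    import json
    import subprocess

    SERVER = "speedtest.example.net"   # hypothetical on-net iperf3 server
    TARGET_MBPS = 900                  # hypothetical pass/fail line for a 1G service

    # Run a 10-second TCP test and ask iperf3 for machine-readable output (-J).
    result = subprocess.run(["iperf3", "-c", SERVER, "-t", "10", "-J"],
                            capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)

    # For a plain TCP test the totals live under end.sum_sent / end.sum_received.
    mbps = report["end"]["sum_received"]["bits_per_second"] / 1e6
    print("%.0f Mb/s - %s" % (mbps, "Yay" if mbps >= TARGET_MBPS else "Nay"))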
On 16 July 2018 at 18:58, Chris Gross <CGross@ninestarconnect.com> wrote: Hi Chris,
I would say, don't use a browser based speed test - how fast is your browser? Answer: It can vary wildly! Also there are SO many variables when testing TCP you MUST test using UDP if you want to just test the network path. Every OS will behave differently with TCP, also with UDP but the variance is a lot lower. Also I recommend you test to a server on your network near to your peering & transit edge. This way users can test up to the point where you hand traffic over to "The Internet" and have no further control. Testing to a server off-net (like off-net Ookla) tells me nothing in my opinion. Virtually any modern day laptop with a 1G NIC will saturate a 1G link using UDP traffic in iPerf with ease. A crummy i3 netbook with a 1G NIC can do it on one core/thread. We have several iPerf servers dotted around the network and our engineers can test to those at any time and it works well for us. Cheers, James.
On Tue, 17 Jul 2018 at 10:53, James Bensley <jwbensley@gmail.com> wrote:
Virtually any modern day laptop with a 1G NIC will saturate a 1G link using UDP traffic in iPerf with ease. I crummy i3 netbook with 1G NIC can do it on one core/thread.
I guess if you use large packets this might be true. But personally, if I'm testing the network, I'm interested in latency, jitter, packet loss, bps _AND_ pps goals as well, not just a bps goal. And I've never seen clean 1Gbps on iperf with small packets. It just cannot be done; even if iPerf was written half decently and it used recvmmsg, it still wouldn't be anywhere near. Clean 1Gbps with small packets in user space is actually very much doable today, you just can't use a UDP socket; you must use AF_PACKET on Linux or BPF on OSX and you can write a portable 1Gbps UDP sender/receiver. I'm very surprised we don't have an iperf-like program for netengs which does this and reports latency, jitter and packet loss, with a binary search for the highest lossless pps/bps rates. I started to write one with Anton Aksola in Rust (using libpnet[0]), and implemented quite a flexible protocol (server/client, client can ask the server exactly what kind of packet to construct/expect, what rate to send/receive, over a JSON based protocol), so you could also use it to ask it to DDoS your router's control-plane in a lab etc. And we actually got it working, OSX+Linux at ~wire-rate (it still needs a higher end laptop to do 1.5Mpps on a single core and we didn't implement multicore support). But as both of us are trash in Rust (and every other applicable language in this domain), we kind of dropped the project once we had a sufficient POC running on our laptops. Someone who actually can code could easily implement such a program in a weekend. I'm happy to share the trash we've done if someone intends to check this box in the open source world. Use it for inspiration, or just straight up add polish and enough CLI to make it usable as-is. I think a very important quality is being multiplatform with static binaries, because an important use case is that you can ask a modestly informed customer to copy-paste one line to download the server and copy-paste another line to have it running. If the use case is that both ends have arbitrarily clued people, then there are plenty of good solutions, like Cisco's trex[1]. But what I need is an iPerf-like program which actually a) performs and b) reports the correct things. [0] https://github.com/libpnet/libpnet [1] https://trex-tgn.cisco.com/ -- ++ytti
On 17 July 2018 at 09:54, Saku Ytti <saku@ytti.fi> wrote:
On Tue, 17 Jul 2018 at 10:53, James Bensley <jwbensley@gmail.com> wrote:
Virtually any modern day laptop with a 1G NIC will saturate a 1G link using UDP traffic in iPerf with ease. I crummy i3 netbook with 1G NIC can do it on one core/thread.
I guess if you use large packets this might be true. But personally, if I'm testing network, I'm interested in latency, jitter, packet, bps _AND_ pps goals as well, not just bps goal.
Hi Saku, Yeah I fully agree with what you are saying; however, the OP's question sounds like he "only" needed to prove bandwidth. With 1500 byte frames I've run it up to nearly 10Gbps before (it was between VMs in two different DCs that were having slow transfers and the hypervisors had 10G NICs, so I dare say, on bare metal with large frames it will do 10Gbps).
And I've never seen clean 1Gbps on iperf with small packets. It just cannot be done, even if iPerf was written half decently and it used recvmmsg, it still wouldn't be anywhere near. Clean 1Gbps with small packets in user space is actually very much doable today, just you can't use UDP socket, you must use AF_PACKET on Linux or BPF on OSX and you can write portable 1Gbps UDP sender/receiver. I'm very surprised we don't have iperf like program for netengs which does this and reports latency, jitter, packet loss with binary search for highest lossless pps/bps rates.
I absolutely agree there is a gap in the open source market for this exact application. A tool that sends traffic between Tx and Rx (or bidirectionally) at a specified frame size and frame rate, which can max out 10Gbps at 64 byte frames if required (I say 10Gbps instead of 1Gbps because 10Gbps as an access circuit speed is becoming increasingly common), and throughout the test it should report RTT and one way latency, jitter and packet loss etc. and then output the results in a format that is easy to parse. It should also have a JSON API and be able to run in a "daemon" mode like an iPerf server that is always on, ready for people to test to/from.
I started to write one with Anton Aksola in Rust (using libpnet[0]), and implemented quite flexible protocol (server/client, client can ask server exactly what kind of packet to construct/expect, what rate to send/receive over JSON based protocol), so you could also use it to ask it to DDoS your routers control-plane in lab etc. And actually got it working, OSX+Linux ~wirarate (still needs higher end laptop to do 1.5Mpps on single core and we didn't implement multicore support). But as both of us are trash in Rust (and every other applicable language in this domain), we kind of dropped the project once we had sufficient POC running on our laptops. Someone who actually can code, could easily implement such program in a weekend. I'm happy to share the trash we've done if someone intends to check this box in open source world. May use it for inspiration, or just straight up add polish and enough CLI to make it usable as-is.
I went through a similar process. AF_PACKET is definitely what you need to use if you want to use user-space in Linux (don't know about MAC, only use Linux). I wrote a basic multi-threaded load generator and load sinker (Tx and Rx) in C using various Kernel methods (send(), sendmsg(), sendmmsg(), and PACKET_MMAP) with AF_PACKET to compare them all: https://github.com/jwbensley/EtherateMT The problem is that C is a great language to write high performance stuff, but it's a shit language to create a JSON API in. I have two back to back lab servers at work with 10G links between them, low end 2.1GHz Xeons, I get 1Mpps per core, 8 cores-1 for OS means I max out at 7Mpps :( I know that XDP is coming to Linux user space so we'll see where that goes, as it promises the magic performance levels we want. Also TPACKETv4 is coming for AF_PACKET in Linux which should also get us to that magic level of performance in user land (it is effectively Kernel bypass). I'll add this to EtherateMT when I get some time to check its performance: https://lwn.net/Articles/737947/ So EtherateMT works OK as a proof of concept, but nothing more. It requires 100% CPU utilisation to send/receive at such high pps rates, there is no CPU time for stats collection or fancy rtt/latency/jitter etc. That can only be done (right now) with something like DPDK, because then we only need one or two cores for Tx/Rx and then we have free cores for stats collections/generations etc. I looked into MoonGen, it creates Lua bindings for DPDK which means you can rapidly develop DPDK based tools without knowing much about DPDK. It had some RFC2544 Lua scripts for DPDK and I started to re-write them as they were old and didn't work with the latest version of MoonGen: https://github.com/jwbensley/MoonGen-Scripts The throughput script works OK-ish (10Gbps on one core no problems): https://github.com/jwbensley/MoonGen-Scripts/blob/master/throughput.lua Lua would allow one to easily provide parseable output and more easily implement a JSON API however, since MoonGen uses DPDK, we can only use the NICs that DPDK supports and not "any Ethernet NIC supported by Linux", which is what I really want by using AF_PACKET + TPACKETv4.
I think very important quality is multiplatform with static binaries. Because important use case is, that you can ask modestly informed customer to copy paste one line to donwload server and copy paste another line to have it running. If use case is that both ends have arbitrary clued people, then there are plenty of good solutions, like Cisco's trex[1]. But what I need is iPerf-like program, which actually a) performs and b) reports the correct things.
Yeah agreed so DPDK is out the window for me for this specific requirement, it's Linux only (ignoring the minor level of BSD support) and NIC specific too. Python *yuk* is multi-OS, it has JSON libraries, and it has some support for AF_PACKET: https://stackoverflow.com/questions/1117958/how-do-i-use-raw-socket-in-pytho... I don't know enough about it, but it might be that TPACKETv4 could be leveraged through Python but that still only covers Linux as Windows and MAC have very different network stacks (but then again I do only care about link sooooo...). I'm keen to have another go at this problem now that I've got a better understanding of it having written EtherateMT and played with DPDK etc. Not sure where to go though - so just waiting on TPACKETv4 right now. Cheers, James.
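Not the missing netengs' tool discussed above, and plain CPython will not get anywhere near the 1.488 Mpps that 64-byte GigE wire rate needs, but as an illustration of the AF_PACKET mechanics mentioned here, a minimal Linux-only sketch that blasts minimum-size UDP frames and reports the pps a single core manages (interface, MACs and addresses are all made up; needs root; send-only, so no loss/latency/jitter reporting):

    import socket
    import struct
    import time

    IFACE = "eth0"                                  # hypothetical test interface
    SRC_MAC = bytes.fromhex("020000000001")         # made-up locally administered MACs
    DST_MAC = bytes.fromhex("020000000002")
    SRC_IP, DST_IP = "192.0.2.1", "192.0.2.2"       # documentation addresses

    def ip_checksum(header):
        total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
        total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    # 18 bytes of padding gives a 60-byte frame, i.e. 64 bytes on the wire with FCS.
    payload = b"\x00" * 18
    udp = struct.pack("!HHHH", 12345, 5001, 8 + len(payload), 0) + payload
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(udp), 0, 0, 64, 17, 0,
                     socket.inet_aton(SRC_IP), socket.inet_aton(DST_IP))
    ip = ip[:10] + struct.pack("!H", ip_checksum(ip)) + ip[12:]
    frame = DST_MAC + SRC_MAC + struct.pack("!H", 0x0800) + ip + udp

    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    sock.bind((IFACE, 0))

    sent, start = 0, time.time()
    while time.time() - start < 10:                 # hammer the wire for ~10 seconds
        sock.send(frame)
        sent += 1
    print("%.0f pps from one core" % (sent / (time.time() - start)))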
On 17/07/2018 5:49 pm, James Bensley wrote:
Also there are SO many variables when testing TCP you MUST test using UDP if you want to just test the network path. Every OS will behave differently with TCP, also with UDP but the variance is a lot lower.
One of the issues I have repeatedly run into is an incorrect burst size set on shaped carriage circuits. In the specific case I have in mind now, I don't recall what the exact figures were - but the carrier side configuration was set by the carrier to be something like a 64k burst on a 20M L3 MPLS circuit. End to end speed testing results both with browsers and with iperf depended enormously on if the test was done with TCP or UDP. Consequently the end customer was unable to get more than 2-3MBit/sec of single stream TCP traffic through the link. The carrier insisted that because we could still get 19+ MBit/sec of UDP then there was no issue. This was the same for all operating systems. The end customer certainly didn't feel that they were getting the 20Mbit circuit they were sold. The carrier view was that as we were able to get 20 TCP streams running concurrently and max out the link that they were providing the service as ordered. After many months of testing and negotiating we were able to get the shaper burst increased temporarily, and the issue completely went away. The customer was able to get over 18MBit/sec of continuous TCP throughput on a single stream. I was told that despite this finding and admission that the burst was indeed way too small, the carrier was going to continue to provision circuits with almost no burst, because this was their "standard configuration". The common belief seemed to be that a burst was a free upgrade for the customer. I was of the alternate view that this parameter was required to be set correctly for TCP to function properly to get their quoted CIR. I'd be very interested in other's thoughts with regards to testing of this. It seems to me that measuring performance with UDP only means that this very critical real-world aspect of a circuit (burst size on a shaper) is not tested, and this seems to be a very common misconfiguration. In my case...seen across multiple carriers over many years and many dozens of hours spent on "faults" related to it. [NB: I've always used the rule of thumb that the L3 burst size should be about 1/8th the Contracted Line Rate, but there seems to be no consensus whatsoever about that...certainly no agreement whatsoever within the carrier world] Reuben
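To put rough numbers on that burst discussion - reading the 1/8th-of-CIR rule of thumb above as "a bucket worth roughly one second of traffic at the contracted rate" (my interpretation, not a standard), a quick sketch comparing it with the 64k burst described above:

    def burst_report(cir_bps, burst_bytes, access_bps):
        rule_of_thumb_bytes = cir_bps / 8                 # ~1 second of CIR, per the rule above
        drain_ms = burst_bytes * 8 / access_bps * 1000    # how long the bucket lasts at access line rate
        print("CIR %d Mb/s: burst %.0f KB lasts ~%.1f ms at line rate; rule of thumb ~%.1f MB"
              % (cir_bps / 1e6, burst_bytes / 1e3, drain_ms, rule_of_thumb_bytes / 1e6))

    burst_report(20e6, 64e3, 1e9)     # roughly the circuit described above, delivered over GigE
    burst_report(20e6, 2.5e6, 1e9)    # the same circuit with the 1/8th rule applied

A bucket that empties in roughly half a millisecond will clip even a modest TCP burst, which fits the single-stream behaviour described above, while UDP paced just under the CIR sails through.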
On 17/Jul/18 09:49, James Bensley wrote:
Also I recommend you test to a server on you network near to your peering & transit edge. This way users can test up to the point where you would have over the "The Internet" and have no further control. Testing to a server off-net (like off-net Ookla tells me nothing in my opinion).
In our market, users don't care about the server that is 1ms away. They want to test the one which is 170ms, because that is where the gaming network sits. And heaven forbid that 170ms server does not return their full 1Gbps subscription report, never mind the state of its affairs, or who's running (if anyone even remembered it's there). Mark.
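The arithmetic behind that, for anyone who would rather show the customer than argue: a single TCP stream moves at most about one window per RTT, so a far-away speed test server cannot reflect the port speed no matter how clean the circuit is. A small sketch with example figures:

    def max_tcp_mbps(window_bytes, rtt_ms):
        # Rough single-stream ceiling: one window per round trip.
        return window_bytes * 8 / (rtt_ms / 1000.0) / 1e6

    for rtt in (1, 140, 170, 400):
        print("%3d ms RTT, 64 KiB window: %8.1f Mb/s" % (rtt, max_tcp_mbps(64 * 1024, rtt)))

    # Window needed to fill 1 Gb/s at 170 ms: the bandwidth-delay product, ~21 MB.
    print("BDP for 1 Gb/s at 170 ms: %.1f MB" % (1e9 / 8 * 0.170 / 1e6))

Window scaling and the multiple parallel streams that Ookla-style tests open push that ceiling up, but on a 140ms - 400ms path with any loss it still rarely reaches the subscribed rate, hence the couple-of-Mbps results against intercontinental servers.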
It tells you how good your peering is. ;-) ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com ----- Original Message ----- From: "James Bensley" <jwbensley@gmail.com> To: "North American Network Operators' Group" <nanog@nanog.org> Sent: Tuesday, July 17, 2018 2:49:58 AM Subject: Re: Proving Gig Speed On 16 July 2018 at 18:58, Chris Gross <CGross@ninestarconnect.com> wrote: Hi Chris,
I would say, don't use a browser based speed test - how fast is your browser? Answer: It can vary wildly! Also there are SO many variables when testing TCP you MUST test using UDP if you want to just test the network path. Every OS will behave differently with TCP, also with UDP but the variance is a lot lower. Also I recommend you test to a server on you network near to your peering & transit edge. This way users can test up to the point where you would have over the "The Internet" and have no further control. Testing to a server off-net (like off-net Ookla tells me nothing in my opinion). Virtually any modern day laptop with a 1G NIC will saturate a 1G link using UDP traffic in iPerf with ease. I crummy i3 netbook with 1G NIC can do it on one core/thread. We have several iPerf servers dotted around the network and our engineers can test to those at any time and it works well for us. Cheers, James.
From: "James Bensley" <jwbensley@gmail.com> Also I recommend you test to a server on you network near to your peering & transit edge. This way users can test up to the point where you would have over the "The Internet" and have no further control. Testing to a server off-net (like off-net Ookla tells me nothing in my opinion).
... On 17 July 2018 at 15:36, Mike Hammett <nanog@ics-il.net> wrote:
It tells you how good your peering is. ;-)
Or transit :) Cheers, James.
On 16/Jul/18 19:58, Chris Gross wrote:
(Ookla) speed tests, in general, are a fundamental problem we grapple with in only one of our markets, where previous ISP trust was eroded by the incumbent. So we are all paying the price for that 20 years on, i.e., the solution to every network problem is a speed test. If a Skype call drops, that's a speed test. If an e-mail attachment doesn't work properly, that's a speed test. Wi-fi signal is bad, that's a speed test. If Office 365 self-destructs, that's a speed test. If 8.8.8.8 goes coo-koo, that's a speed test. You get the idea... I always ask speed test proponents - "How do you speed test a 100Gbps Internet service delivery :-\?" Server-side limitations notwithstanding, as you rightly point out, the client machine is also a potential point of contention (all pun intended). We recently dealt with a customer that had our NOC running around for 2 months about speed test results, only to realize that the customer was using a USB-to-Ethernet converter for his laptop to run the tests the whole time, which topped out at ±7Mbps, on their 100Mbps service. With a proper laptop that had an on-board Ethernet port being truck-rolled with an engineer to the site showing the difference, the case is now closed. But can you imagine the amount of noise that had been generated over the past 8 weeks? Find me a hole where the floor is pasted with "speed tests go here" so deep that even I can't see it, and I will throw the whole concept so far down it won't stand the chance of ever being converted to oil. But to answer your questions - for some customers, we insist on JDSU testing for large capacities, but only if it's worth the effort. Mark.
On 17 July 2018 at 12:50, Mark Tinka <mark.tinka@seacom.mu> wrote:
But to answer your questions - for some customers, we insist on JDSU testing for large capacities, but only if it's worth the effort.
Mark.
Hi Mark, Our field engineers have 1G testers, but even at 1G they are costly (in 2018!), so none have 10Gbps or higher testers and we also only do this for those that demand it (i.e. no 20Mbps EFM customer ever asks for a JDSU/EXO test, because iPerf can easily max out such a link, only those that pay for say 1G over 1G get it). Hardware testers are the best in my opinion right now but it annoys me that this is the current state of affairs, in 2018, even for 1Gbps! Cheers, James.
We use Netrounds for this. We make a speedtest site available to the customer for their "click and test" needs, which is the first step. If the customer doesn't achieve their allocated speed we will send out a probe (usually some form of Intel NUC or similar machine) that can do more advanced testing automatically, also at different times. The customer then gets to send us back the device when the case is solved. It's not without its faults but it's been a great tool so far. //Gustav -----Original Message----- From: NANOG <nanog-bounces@nanog.org> On Behalf Of James Bensley Sent: 17 July 2018 19:46 To: Mark Tinka <mark.tinka@seacom.mu>; North American Network Operators' Group <nanog@nanog.org> Subject: Re: Proving Gig Speed On 17 July 2018 at 12:50, Mark Tinka <mark.tinka@seacom.mu> wrote:
On 17/Jul/18 19:45, James Bensley wrote:
Hi Mark,
Our field engineers have 1G testers, but even at 1G they are costly (in 2018!), so none have 10Gbps or higher testers and we also only do this for those that demand it (i.e. no 20Mbps EFM customer ever asks for a JSDU/EXO test, because iPerf can easily max out such a link, only those that pay for say 1G over 1G get it). Hardware testers are the best in my opinion right now but it annoys me that this is the current state of affairs, in 2018, even for 1Gbps!
Truth. Mark.
participants (40):
- Alan Buxey
- Andy Ringsmuth
- Ben Cannon
- Brant Ian Stevens
- bzs@theworld.com
- Carlos Alcantar
- Chris Gross
- Dan White
- Daniel Ankers
- Eddie Parra
- Eric Kuhnke
- Gustav Ulander
- James Bensley
- James R Cutler
- Jared Mauch
- joel jaeggli
- Jon Meek
- Julien Goodwin
- K. Scott Helms
- Keith Medcalf
- Keith Stokes
- Livingood, Jason
- Luke Guillory
- Mark Tinka
- Matt Erculiani
- Matt Hoppes
- Matthew Crocker
- Max Tulyev
- Michael Thomas
- Mike Hammett
- Morgan A. Miskell
- Niels Bakker
- Radu-Adrian Feurdean
- Reuben Farrelly
- Saku Ytti
- Seth Mattinen
- Simon Leinen
- Tei
- Tyler Applebaum
- valdis.kletnieks@vt.edu