modeling residential subscriber bandwidth demand
How do people model and try to project residential subscriber bandwidth demands into the future? Do you base it primarily on historical data? Are there more sophisticated approaches that you use to figure out how much backbone bandwidth you need to build to keep your eyeballs happy?

Netflow for historical data is great, but I guess what I am really asking is - how do you anticipate the load that your eyeballs are going to bring to your network, especially in the face of transport tweaks such as QUIC and TCP BBR?

Tom

--
Tom Ammon
M: (801) 784-2628
thomasammon@gmail.com
Residential whatnow? Sorry, to be honest, there really isn't any. I suppose if one is buying lit services, this is important to model. But an *incredibly* huge residential network can be served by a single basic 10/40G backbone connection or two, and if you own the glass it's trivial to spin up very many of those. Aggregate in metro cores, put the Netflix Open Connect cache there, done.

Then again, we don't even do DNS anymore, we're <1ms from Cloudflare, so in 2019 why bother? I don't miss the days of ISDN backhaul, but those days are long gone. And I won't go back.

-Ben Cannon
CEO 6x7 Networks & 6x7 Telecom, LLC
ben@6by7.net
An article was published recently that discusses the possible impact of cloud-based gaming on last-mile capacity requirements, as well as on external connections. The author suggests that decentralized video services won't be the only big consumer of last-mile capacity.

https://medium.com/@rudolfvanderberg/what-google-stadia-will-mean-for-broadb...
We use the trendline/95th-percentile forecasting that's built into a lot of graphing tools… SolarWinds, and I think even the CDN cache portals have trendlines, forecasts, etc. My boss might use other growth percentages gleaned from previous years… but yeah, like another person mentioned, the more history you have the better it seems… unless there is some major shift for some strange big reason… but have we ever seen that with internet usage growth? …yet? I mean, has internet bandwidth usage ever gone down nationally/globally, similar to a graph of the housing market in 2007/2008?

-Aaron
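For illustration, a minimal Python sketch of that kind of trendline projection, assuming one peak-hour Mbps sample per month; the history values and the 12-month horizon are invented:

    # Least-squares trendline over historical peak-hour utilisation, projected forward.
    # Pure illustration: one peak-hour Mbps sample per month, all numbers invented.
    def project(samples, months_ahead):
        n = len(samples)
        xs = list(range(n))
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
                 / sum((x - mean_x) ** 2 for x in xs))
        intercept = mean_y - slope * mean_x
        return intercept + slope * (n - 1 + months_ahead)

    history = [410, 435, 470, 500, 540, 585]   # Mbps at peak hour, last six months
    print(round(project(history, 12)), "Mbps projected 12 months out")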
“…especially in the face of transport tweaks such as QUIC and TCP BBR?”

Do these (QUIC and TCP BBR) change bandwidth utilization as we've known it for years?

-Aaron
We have GB/mo figures for our customers for every month for the last ~10 years. Is there some simple figure you're looking for? I can tell you off hand that I remember we had accounts doing ~15 GB/mo and now we've got 1500 GB/mo at similar rates per month. Josh Luthman Office: 937-552-2340 Direct: 937-552-2343 1100 Wayne St Suite 1337 Troy, OH 45373 On Tue, Apr 2, 2019 at 2:16 PM Aaron Gould <aaron1@gvtc.com> wrote:
“…especially in the face of transport tweaks such as QUIC and TCP BBR? “
Do these “quic and tcp bbr” change bandwidth utilization as we’ve know it for years ?
-Aaron
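For what it's worth, the growth Josh describes (roughly 15 GB/mo to 1500 GB/mo over about 10 years) works out to close to 60% per year compounded; a quick check, treating the figures as round numbers:

    # Implied compound annual growth from ~15 GB/mo to ~1500 GB/mo over ~10 years.
    start_gb, end_gb, years = 15.0, 1500.0, 10
    cagr = (end_gb / start_gb) ** (1 / years) - 1
    print(f"{cagr:.1%} per year")   # roughly 58.5%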
On Tue, Apr 2, 2019 at 2:20 PM Josh Luthman <josh@imaginenetworksllc.com> wrote:
We have GB/mo figures for our customers for every month for the last ~10 years. Is there some simple figure you're looking for? I can tell you off hand that I remember we had accounts doing ~15 GB/mo and now we've got 1500 GB/mo at similar rates per month.
I'm mostly just wondering what others do for this kind of planning - trying to look outside of my own experience, so I don't miss something obvious. That growth in total transfer that you mention is interesting.

I always wonder what the value of trying to predict utilization is anyway, especially since bandwidth is so cheap. But I figure it can't hurt to ask a group of people where I am highly likely to find somebody smarter than I am :-)

--
Tom Ammon
M: (801) 784-2628
thomasammon@gmail.com
On Tue, 2 Apr 2019, Tom Ammon wrote:
Netflow for historical data is great, but I guess what I am really asking is - how do you anticipate the load that your eyeballs are going to bring to your network, especially in the face of transport tweaks such as QUIC and TCP BBR?
I don't see how QUIC and BBR are going to change how much bandwidth is flowing.

If you want to make your eyeballs happy, then make sure you're not congesting your upstream links. Aim for a maximum of 50-75% utilization in the 5-minute average at peak hour (graph by polling interface counters every 5 minutes). Depending on your growth curve, you might need to initiate upgrades to make sure they're complete before utilization hits 75%.

If you have thousands of users then typically just look at the statistics per user and extrapolate. I don't believe this has fundamentally changed in the past 20 years; this is still best common practice.

If you go into the game of running your links full for parts of the day, then you're into the game of trying to figure out QoE values, which might mean you spend more time doing that than the upgrade would cost.

--
Mikael Abrahamsson    email: swmike@swm.pp.se
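A minimal sketch of the 5-minute counter math Mikael describes, assuming ifHCInOctets-style octet counters polled every 300 seconds; the counter values and the 10G link size are invented:

    # 5-minute average utilisation from two interface octet-counter samples, checked
    # against the 50-75% thresholds. Counter values and link size are invented.
    INTERVAL_S = 300          # poll every 5 minutes
    LINK_MBPS = 10_000        # a 10G upstream link

    def five_min_util(prev_octets, curr_octets):
        bits = (curr_octets - prev_octets) * 8    # ignores 64-bit counter wrap for brevity
        mbps = bits / INTERVAL_S / 1_000_000
        return mbps, mbps / LINK_MBPS

    mbps, frac = five_min_util(1_912_345_678_901, 2_112_345_678_901)
    if frac >= 0.75:
        print(f"{mbps:.0f} Mbps ({frac:.0%}): upgrade should already be complete")
    elif frac >= 0.50:
        print(f"{mbps:.0f} Mbps ({frac:.0%}): start the upgrade clock")
    else:
        print(f"{mbps:.0f} Mbps ({frac:.0%}): fine for now")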
+1 on this. It's been more than 10 years since I've been responsible for a broadband network, but I have friends that still play in that world and do some very good work on making sure their models are very well managed, with more math than I ever bothered with. That being said, if I had used the methods I used back in the '90s, they would have fully predicted per-sub growth, including all the FB/YouTube/Netflix traffic we have today. The "rapid" growth we saw in the '90s and the 2000s and even this decade is all magically the same curve; we're just further up the incline. The question is will it continue another 10+ years, where the growth rate is nearing straight up :)

-jim
+1 Also on this.
From my viewpoint, the game has been roughly the same for the last 20+ years. You might want to validate that your per-customer bandwidth use across your markets is roughly the same for the same service/speeds/product. If you have that data over time, then you can extrapolate what each market's bandwidth use would be when you lay on a customer growth forecast.

But for something that's simpler and actionable now, yeah, just make sure that your upstream and peering(!!) links are not congested. I agree that 50-75% is a good target, not only for the lead time to bring up more capacity, but also to allow for spikes in traffic for various events throughout the year.

Louie
Google Fiber
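A rough sketch of the per-market extrapolation Louie describes: measured peak per-sub usage multiplied by a subscriber growth forecast. All market names and figures below are hypothetical:

    # Per-market forecast: measured peak per-sub usage times forecast subscriber counts.
    peak_per_sub_mbps = {"market_a": 2.1, "market_b": 2.6}     # measured today (hypothetical)
    subs_next_year = {"market_a": 9000, "market_b": 4000}      # sales forecast (hypothetical)
    for market in peak_per_sub_mbps:
        gbps = peak_per_sub_mbps[market] * subs_next_year[market] / 1000
        print(f"{market}: {gbps:.1f} Gbps forecast at peak")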
Louie,

It's almost like us old guys knew something, and did know everything back then. The more things have changed, the more they have stayed the same :)

-jim
I think sometimes folks have a challenge with how to deal with aggregate scale and growth vs. what happens in a pure linear model with subscribers.

The first 75 users look a lot different than the next 900. You get different population scale and average usage.

I could roughly estimate some high numbers for peak internet usage across the population of the earth, but in most cases if you have a 1G connection you can support 500-800 subscribers these days. Ideally you can get a 10G link for a reasonable price. Your scale looks different as well, and you can work with "the content guys" once you get far enough.

Thursdays are still the peak because date night is still generally Friday.

- Jared
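Put another way, Jared's 500-800 subs per 1G works out to roughly 1.25-2 Mbps per subscriber at peak; a trivial check:

    # Jared's rule of thumb (500-800 subs per 1G uplink) implies this per-sub peak average.
    link_mbps = 1000
    for subs in (500, 800):
        print(f"{subs} subs on 1G -> {link_mbps / subs:.2f} Mbps per sub at peak")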
Certainly.

Projecting demand is one thing. Figuring out what to buy for your backbone, edge (uplink & peer), and colo (for CDN caches too!) to handle that scale and growth is quite another.

And yeah, Jim, overall, things have stayed the same. There are just the nuances added with caches, gaming, OTT streaming, some IoT (like always-on home security cams), plus better tools now for network management and network analysis.

Louie
Google Fiber
FWIW, I have 250 subscribers sitting on a 100M fiber into TorIX. I have had no complaints about speed in 4 1/2 years. I have been planning to bump them to 1G for the last 4 years, but there is currently no economic justification.

paul
I would say this is perhaps atypical but may depend on the customer type(s). If they’re residential and use OTT data then sure. If it’s SMB you’re likely in better shape. - Jared
Mixed residential (ages 25 - 75, 1 - 6 people per unit), a group who worked together to keep costs down. Works well for them. Friday nights we get to about 85% utilization (Netflix); other than that, it usually sits between 25 - 45%.

paul
On Tue, 2 Apr 2019, Paul Nash wrote:
FWIW, I have a 250 subscribers sitting on a 100M fiber into Torix. I have had no complains about speed in 4 1/2 years. I have been planning to bump them to 1G for the last 4 years, but there is currently no economic justification.
I know FTTH footprints where the peak evening average per customer is 3-5 megabit/s. I know others who claim their customers only average the equivalent of 5-10% of that.

It all depends on what services you offer. Considering my household has 250/100 for 40 USD a month, I'd say your above solution wouldn't even be enough to deliver an acceptable service to even 10 households.

--
Mikael Abrahamsson    email: swmike@swm.pp.se
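For reference, the arithmetic behind Mikael's point, taking his 3-5 Mbps peak-evening average against the 100 Mbps circuit being discussed (purely illustrative):

    # Uplink needed for 250 subs at a 3-5 Mbps peak-evening average,
    # compared to the 100 Mbps circuit under discussion.
    subs, link_mbps = 250, 100
    for per_sub in (3.0, 5.0):
        need = subs * per_sub
        print(f"{per_sub} Mbps/sub -> {need:.0f} Mbps needed ({need / link_mbps:.1f}x the link)")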
A 100/100 enterprise connection can easily support hundreds of desktop users if not more. It’s a lot of bandwidth even today. -Ben
On Tue, 02 Apr 2019 23:53:06 -0700, Ben Cannon said:
A 100/100 enterprise connection can easily support hundreds of desktop users if not more. It’s a lot of bandwidth even today.
And what happens when a significant fraction of those users fire up Netflix with an HD stream? We're discussing residential not corporate connections, I thought....
Paul,

I have a hard time seeing how you aren't maxing out that circuit. We see about 2.3 Mbps average per customer at peak with a primarily residential user base. That would be about 575 Mbps average at peak for 250 users on our network, so how do we use 575 but you say your users don't even top 100 Mbps at peak? It doesn't make sense that our customers use 6 times as much bandwidth at peak as yours do.

We're a rural and small-town mix in Minnesota, no urban areas in our coverage. 90% of our customers are on a plan of 22 Mbps or less and the other 10% are on a 100 Mbps plan, but their average usage isn't really much higher.

Enterprise environments can easily handle many more users on a 100 meg circuit because they aren't typically streaming video like they would be at home. Residential will always be much higher usage per person than most enterprise users.
I am also surprised. However, we have had a total of 5 complaints about network speed over a 3-year period. One possible reason is that, because they own the infrastructure collectively and pay for the bandwidth directly (I just manage everything for them), they are prepared to put up with the odd slowdown to avoid the expense of an upgrade.

Our original plan was to start with the 100M circuit so that they could make sure that everything would work, that we had reliable wifi delivery (about 95% of users only use a wifi connection to their computers/iDevices/whatever), and then to upgrade to 1G as soon as the dust started settling. They have postponed the upgrade for 3 years now, with no complaints.

I guess that if they will be directly impacted by higher bandwidth costs, some people can make do with slower service (or something).

paul
On Wed, Apr 03, 2019 at 03:45:17AM -0400, Valdis Klētnieks wrote:
On Tue, 02 Apr 2019 23:53:06 -0700, Ben Cannon said:
A 100/100 enterprise connection can easily support hundreds of desktop users if not more. It's a lot of bandwidth even today.
And what happens when a significant fraction of those users fire up Netflix with an HD stream?
We're discussing residential not corporate connections, I thought....
Yes, Enterprise requirements are certainly different, though inching upwards with the prevalence of SaaS services like Salesforce, O365 and file-sharing services (the latter are a growing % of our traffic at branch offices). I feel like our rule of thumb on the Enterprise side is in the 1.5-2 Mbps per user range these days (for Internet).

Ray
On Tue, 2 Apr 2019 at 17:57, Tom Ammon <thomasammon@gmail.com> wrote:
How do people model and try to project residential subscriber bandwidth demands into the future? Do you base it primarily on historical data? Are there more sophisticated approaches that you use to figure out how much backbone bandwidth you need to build to keep your eyeballs happy?
Netflow for historical data is great, but I guess what I am really asking is - how do you anticipate the load that your eyeballs are going to bring to your network, especially in the face of transport tweaks such as QUIC and TCP BBR?
Tom
Hi Tom,

Historical data is definitely the way to predict a trend; you can't call something a trend if it only started today IMO. Something (e.g. bandwidth profiling) needs to have been recorded for a while before you can say that you are trying to predict the trend. Without historical data you're just making predictions without any direction, which I don't think you want :)

Assuming you have a good mixture of subs, i.e. adults, children, male, female, different regions etc., and 100% of your subs aren't a single demographic like university campuses for example, then I don't think you need to worry about specifics like the adoption of QUIC or BBR. You will never see a permanent AND massive increase in your total aggregate network utilisation from one day to the next. If, for example, a large CDN makes a change that increases per-user bandwidth requirements, it's unlikely they are going to deploy it globally in one single big-bang change. This would also be just one of your major bandwidth sources/destinations, of which you'll likely have several big-hitters that make up the bulk of your traffic.

If you have planned well so far, and have plenty of spare capacity (as others have mentioned, in the 50-70% range, and your backhaul/peering/transit links are of a reasonable size ratio to your subs, e.g. subs get 10-20Mbps services and your links are 1Gbps), there should be no persisting risk to your network capacity as long as you keep following the same upgrade trajectory. Major social events like the Super Bowl where you are (or here in England, sunshine) will cause exceptional traffic increases, but only for brief periods.

You haven't mentioned exactly what you're doing for modelling capacity demand (assuming that you wanted feedback on it)? Assuming all the above is true for you, to give us a reasonable foundation to build on: in my experience the standard method is to record your ingress traffic rate at all your PEs or P&T nodes, and essentially divide this by the number of subs you have (egress is important too, it's just usually negligible in comparison). For example, if your ASN has a total average ingress traffic rate of 1Gbps during peak hours and you have 10,000 subs, you can model on say 0.1Mbps per sub. That's actually a crazily low figure these days, but it's just a fictional example to demonstrate the calculation.

The ideal scenario is that you have this info for as long as you can. Also, the more subs you have the better it all averages out. For business ISPs, bringing on 1 new customer can make a major difference: if it's a 100Gbps end-site and your backbone is a single 100Gbps link, you could be in trouble. For residential services, subs almost always have slower links than your backbone/P&T/PE nodes.

If you have different types of subs it's also worth breaking down the stats by sub type. For example, we have ADSL subs and VDSL subs. We record the egress traffic rate on the BNGs towards each type of sub separately and then aggregate across all BNGs. For example, today peak inbound for our ASN was X; of that X, Y went to ADSL subs and Z went to VDSL subs. Y / $number_of_adsl_subs == peak average for an ADSL line, and Z / $number_of_vdsl_subs == peak average for a VDSL line. It's good to know this difference because a sub migrating from ADSL to VDSL is not the same as getting a new sub in terms of additional traffic growth. We have a lot of users upgrading to VDSL, which makes a difference at scale, e.g. 10k upgrades is less additional traffic than 10k new subs.
Rinse and repeat for your other customer types (FTTP/H, wireless etc.).
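A small sketch of the per-sub-type calculation James describes (peak egress per product divided by the sub count for that product); the product names and figures are made up, and in practice the peak figures would come from the BNG counters he mentions:

    # Peak average per sub, broken down by product type, per the method above.
    # Figures are invented; in practice Y and Z come from BNG egress counters.
    peak_mbps_to = {"adsl": 4200.0, "vdsl": 9800.0}   # peak egress towards each sub type
    subs = {"adsl": 6000, "vdsl": 3500}
    for product, mbps in peak_mbps_to.items():
        print(f"{product}: {mbps / subs[product]:.2f} Mbps peak average per sub")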
On Tue, Apr 2, 2019 at 2:20 PM Josh Luthman <josh@imaginenetworksllc.com> wrote:
We have GB/mo figures for our customers for every month for the last ~10 years. Is there some simple figure you're looking for? I can tell you off hand that I remember we had accounts doing ~15 GB/mo and now we've got 1500 GB/mo at similar rates per month.
I'm mostly just wondering what others do for this kind of planning - trying to look outside of my own experience, so I don't miss something obvious. That growth in total transfer that you mention is interesting.
You need to be careful with volumetric usage figures. As links continuously increase in speed over the years, users can transfer the same amount of data in less bit-time. The problem with polling at any interval (be it 1 second or 15 minutes) is that you miss bursts in between the polls. Volumetric accounting misses the link utilisation, which is how congestion is identified. You must measure utilisation and divide that by $number_of_subs. Your links can be congested, and if you only measure by data volume transferred, you'll see month by month that subs transferred the same amount of data overall, but day by day, hour by hour, it took longer because a link somewhere is congested, and everyone is pissed off. So with faster end-user speeds, one may see shorter periods of higher core link utilisation.
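To illustrate that point, here are two hypothetical subs who each move the same monthly volume but at very different access speeds; the faster line turns the same volume into shorter, higher bursts, which monthly totals alone will never show (all numbers invented):

    # Same monthly volume, very different burst behaviour: why volume alone hides congestion.
    BITS_PER_GB = 8e9
    monthly_gb = 150
    for name, access_mbps in (("20M sub", 20), ("200M sub", 200)):
        hours_at_line_rate = monthly_gb * BITS_PER_GB / (access_mbps * 1e6) / 3600
        print(f"{name}: {monthly_gb} GB/mo = ~{hours_at_line_rate:.0f} h at full line rate")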
I always wonder what the value of trying to predict utilization is anyway, especially since bandwidth is so cheap. But I figure it can't hurt to ask a group of people where I am highly likely to find somebody smarter than I am :-)
The main requirement in my opinion is upgrades. You need to know how long a link upgrade takes for your operational teams, or a device upgrade, etc. If it takes 2 months to deliver a new backhaul link to a regional PoP, call it 3 months to allow for wayleaves, sick engineers, DC access failures, etc. Then make sure you trigger a backhaul upgrade when your growth model says you're 3-4 months away from 70% utilisation (or whatever figure suits your operations and customers).

Cheers,
James.

P.S. Sorry for the epistle.
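A back-of-the-envelope version of that upgrade trigger, assuming you know your delivery lead time and have a monthly growth figure from your trendline; every number here is hypothetical:

    # Months until a link reaches the target utilisation, compared against delivery lead time.
    # Growth and link figures are invented; the growth rate would come from your trendline.
    link_mbps = 10_000
    current_peak_mbps = 5_600
    growth_mbps_per_month = 250
    target = 0.70
    lead_time_months = 3

    months_left = (link_mbps * target - current_peak_mbps) / growth_mbps_per_month
    if months_left <= lead_time_months:
        print(f"~{months_left:.1f} months to {target:.0%}: order the upgrade now")
    else:
        print(f"~{months_left:.1f} months of headroom before {target:.0%}")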
All,

I am chairing an effort in the IEEE 802.3 Ethernet Working Group to understand bandwidth demand and how it will impact future Ethernet needs. This is exactly the type of discussion I would like to get shared with this activity. I would appreciate follow-on conversations with anyone wishing to share their observations.

Regards,

John D'Ambrosia
Chair, IEEE 802.3 New Ethernet Applications Ad hoc
participants (15)

- Aaron Gould
- Ben Cannon
- Darin Steffl
- James Bensley
- Jared Mauch
- jim deleskie
- John D'Ambrosia
- Josh Luthman
- Louie Lee
- Mikael Abrahamsson
- Paul Nash
- Ray Van Dolson
- Robert M. Enger
- Tom Ammon
- Valdis Klētnieks