The CRTC hearing went well (thanks for all your help).

One of the unanswered questions was how to set performance standards for the last mile to ensure people get advertised speeds (within reason). I had asked about contention ratio, and it appears there is no proper way to set such a moving target as a regulatory standard.

During the hearing, someone suggested that advertised speeds be achievable 80% of the time. (The chairman then asked whether "time" meant 24 hours a day, or just the time you actually need to use the internet, i.e. peak.) Out of curiosity, could such a thing be measured by the last-mile operator?

There is the easy answer of sync. For DSL, non-delivery of the advertised speed is easy to detect, since that metric is right in the modem statistics. However, for fixed wireless, would a customer too far from the tower see a lower sync rate, or would he just see poor performance due to lots of retransmits?

So some generic questions:

What are the different ways used to determine if the last mile is congested and needs to be upgraded?
From the network operator's point of view, would it not be a matter of looking at 5-minute throughput samples and triggering upgrades when throughput reaches x% of the last-mile segment's capacity for more than X minutes per day?
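As a minimal sketch of that kind of rule, assuming 5-minute utilization samples are already available per segment (the 80% busy threshold, 60-minutes-per-day limit and sample values below are illustrative placeholders, not figures from the hearing record):

    # Evaluate a "busy for too long" congestion rule over one day of samples.
    # Assumptions: one utilization value (0-100, percent of segment capacity)
    # per 5-minute interval; thresholds are illustrative placeholders.

    SAMPLE_MINUTES = 5
    BUSY_PERCENT = 80.0        # x% of segment capacity
    MAX_BUSY_MINUTES = 60      # X minutes per day before flagging

    def needs_upgrade(day_samples):
        """day_samples: list of per-5-minute utilization percentages."""
        busy_minutes = sum(SAMPLE_MINUTES for u in day_samples if u >= BUSY_PERCENT)
        return busy_minutes > MAX_BUSY_MINUTES

    # Example: a segment above 80% for 14 samples (70 minutes) gets flagged.
    samples = [40.0] * 274 + [85.0] * 14
    print(needs_upgrade(samples))   # True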
What other means do network admins use to decide when it is time to upgrade shared last-mile segments such as coax or fixed wireless?

From a policy point of view, is it possible to set the same standard for different technologies, or should each (DSL, coax, fixed wireless and satellite) have its own standards and methods of measurement because of intrinsic differences in how they work?

In the case of Rogers (a Canadian cableco), the CRTC record shows that they trigger the node-split process when capacity reaches 60%. This is because it takes them so long to do all the paperwork, committees, etc. that by the time the node split is done, that segment has grown to about 75% utilisation. Would that be a sound basis for setting a policy?

For shared last mile, would different technologies have similar thresholds that trigger the need for upgrades, or would coax start to degrade at 75% whereas fixed wireless or satellite start to degrade at a lower/higher number?

For FTTP, while likely not a big problem yet, would similar numbers apply when the ~2 Gbps download and ~1 Gbps upload start to get filled by the 32 homes served?
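A rough back-of-the-envelope of how a 60% trigger and ~75% at completion relate; the growth rate and lead time below are pure assumptions (the record only gives the two utilisation figures), and the per-home GPON share assumes standard 2.488/1.244 Gbps line rates:

    # Back-of-the-envelope numbers; growth rate and lead time are assumptions,
    # only the 60% trigger and ~75%-at-completion figures come from the record.
    target_at_completion = 75.0   # % utilisation tolerable when the split lands
    growth_pp_per_month = 1.25    # assumed growth in percentage points per month
    leadtime_months = 12          # assumed paperwork + construction time

    trigger = target_at_completion - growth_pp_per_month * leadtime_months
    print(f"trigger threshold: {trigger:.0f}%")          # 60%

    # GPON share: ~2.5 Gbps down / ~1.25 Gbps up split across 32 homes.
    homes = 32
    down_gbps, up_gbps = 2.488, 1.244
    print(f"per-home share if all {homes} are busy: "
          f"{down_gbps * 1000 / homes:.0f} / {up_gbps * 1000 / homes:.0f} Mbps")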
For us (FTTH), we had/have enough aggressive foresight to do smaller splits: 1x16. Some are doing 1x2s or 1x4s at the corner feeding into 1x16s or 1x8s, so at the point where you start to hit decent saturation you can just shrink the upstream split and fuse onto a new upstream strand/optic. Once that gets overused, thankfully you can overlay NG-PON2.

As far as measuring goes, in our case it is just a matter of having your NMS of choice monitor the OLT via SNMP.

For fixed wireless, monitoring and management are the easy part. The hard stuff is detecting what amounts to "dark matter", the things you're not normally looking for: channel utilization (completely variable from moment to moment), overall AP capacity, CPU usage, retransmissions, keeping per-client modulation rates very high to limit TDMA timeslot utilization, etc. The fixed wireless side ends up requiring a LOT of experience, monitoring, and guesswork.
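A minimal sketch of the kind of 5-minute interface polling an NMS does against an OLT uplink or PON port, using standard IF-MIB octet counters via net-snmp's snmpget; the host name, community string, ifIndex and link speed are made-up values, and counter-wrap and error handling are omitted:

    # Turn two readings of IF-MIB ifHCOutOctets into a utilization percentage.
    # Host, community string, ifIndex and link speed are placeholder assumptions.
    import subprocess, time

    HOST, COMMUNITY, IFINDEX = "olt1.example.net", "public", 17
    LINK_BPS = 2_488_000_000                     # e.g. GPON downstream capacity
    OID = f"1.3.6.1.2.1.31.1.1.1.10.{IFINDEX}"   # ifHCOutOctets for that port

    def read_octets():
        out = subprocess.check_output(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Oqv", HOST, OID], text=True)
        return int(out.strip())

    INTERVAL = 300                               # classic 5-minute sample
    first = read_octets()
    time.sleep(INTERVAL)
    second = read_octets()

    utilization = (second - first) * 8 / INTERVAL / LINK_BPS * 100
    print(f"5-minute average utilization: {utilization:.1f}%")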
The CRTC hearing went well (thanks for all your help).
One of the unanswered questions was how to set performance standards for the last mile to ensure people get advertised speeds (within reason).
I had asked the question about contention ratio and it appears there is no proper way to set such a moving target as a regulatory standard.
During the hearing, someone suggested that advertised speeds be achievable 80% of the time. (chairman then asked if "time" was 24 hours, or just the time you needed to use the internet (aka: peak).
Out of curiosity, could such a thing be measured by the last mile operator ?
There is the easy answer of synch. For DSL, non delivery of advertised speed is easy since that metric is on the modem statistics. However, for fixed wireless, would a customer too far from tower see a lower synch rate or would he just see poor performance due to lots of retransmits ?
So some generic questions:
What are the different ways used to determine if the last mile is congested and needs to be upgraded ?
From the network operator's point of view, would it not be looking at 5 minute throughput samples and trigger upgrades when it sees throughput reaching x% of last mile segment capacity for more than X minutes per day ?
What other means do network admins use to monitor when it is time to upgrade shared last mile segments such as coax or fixed wireless ?
from a policy point of view, is it possible to set the same standard for different technologies or should each (dsl, coax, fixed wireless and satellite) have they own standards and methods of measurements before of intrinsic differences in how they work ?
In the case of Rogers (canadian cableco), the CRTC record shows that they trigger node split process when capacity reaches 60%. This is because it takes them so long to do all the paperwork, committees etc that by the time the node split is done, that segment has grown to about 75% utilisation.
Would that be a sound basis to set a policy ?
For shared last mile, would different technologies have similar thresholds that trigger the need for upgrades or would coax start to degrade at 75% whereas fixed wireless or satellite start to degrade at lower/higher number ?
For FTTP, while likely not a big problem yet, would similar number apply when the ~2gbps download and ~1gbps upload start to get filled by the 32 homes served ?
On 30/Apr/16 20:36, Josh Reynolds wrote:
For us (FTTH) we had/have enough aggressive foresight to do smaller splits.. 1x16. Some are doing 1x2's or 1x4's at the corner somewhere into 1x16's or 1x8's, so at the point where you start to hit decent saturation you can just shrink the upstream split and fuse onto a new upstream strand / optic. Once that gets overused, thankfully you can overlay NG-PON2.
If you're being this aggressive, and then having to re-invest in the next PON standard, isn't the case for Active-E being made more and more? Mark.
No. Active has higher initial and ongoing plant costs (cabinet power, cabinet wear and tear, more battery banks, chargers, etc.). You also end up using far, far fewer fiber strands.
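For a sense of the strand-count difference, a toy comparison; the home count, split ratio and the home-run Active-E design are assumptions chosen purely for illustration:

    # Illustrative strand counts for 512 homes fed from one hub site.
    # Numbers (homes, split ratio, home-run design) are assumptions for the example.
    homes = 512
    pon_split = 32                                 # 1x32 GPON split per PON port

    pon_feeder_strands = -(-homes // pon_split)    # ceiling division -> 16 strands
    active_e_home_run_strands = homes              # one dedicated strand per home

    print(f"PON feeder strands back to the hub:       {pon_feeder_strands}")
    print(f"Active-E home-run strands back to the hub: {active_e_home_run_strands}")

Active-E can pull that number back down with powered field cabinets aggregating homes onto shared uplinks, which is exactly the cabinet power/battery/maintenance trade-off described above.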
On 1/May/16 10:55, Josh Reynolds wrote:
No. Active has higher initial and ongoing plant costs (cabinet power, cabinet wear and tear, more battery banks, chargers, etc). You also end up using far, far less fiber strands.
I tend to disagree, but this is one of those debates that could go on forever... Lord knows I've been having it since 2008. Mark.
Disagreeing is okay. It wouldn't make you any less wrong, though :P
The economics of Active-E vs. GPON depend hugely on the physical layout of the homes/area... The scale of the outside-plant aerial fiber is very different in certain scenarios. A greenfield modern housing development with everything underground might look very different from a semi-rural, chain-shaped topology of GPON along a road of houses on 1-acre plots. Or an urban townhouse development. Or a 35-floor condo tower.
In addition, the upgrade path (overlaying NG-PON2) uses the same strands simultaneously.