You can do 32 SC-QAM channels plus 2 OFDM blocks as well. But who has that kind of spectrum? We surely don't. A 96 MHz block, maybe.
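For rough scale, here's a back-of-the-envelope sketch of the downstream spectrum that lineup would occupy, assuming 6 MHz (Annex B) SC-QAM channels and full-width 192 MHz OFDM blocks; a narrower 96 MHz OFDM block shrinks that part of the total accordingly:

```python
# Rough downstream spectrum for 32 SC-QAM channels plus 2 OFDM blocks.
# Assumes 6 MHz Annex B SC-QAM channels and full 192 MHz OFDM blocks.
scqam_channels = 32
scqam_width_mhz = 6        # per-channel width, DOCSIS Annex B
ofdm_blocks = 2
ofdm_width_mhz = 192       # maximum DOCSIS 3.1 OFDM channel width

total_mhz = scqam_channels * scqam_width_mhz + ofdm_blocks * ofdm_width_mhz
print(f"Total downstream spectrum: {total_mhz} MHz")  # 576 MHz
```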

But you can't take the 100G total and say it's beyond your scale; you don't run at full saturation, do you? :)

And in order to run DCAM2s, you'll have to upgrade the RSMs to gen 2 as well.




On May 8, 2020, at 4:27 PM, Blake Hudson <blake@ispn.net> wrote:

16 connectors per DCAM2 times 6 cards is 96 DS service groups in a chassis. At ~1.2 Gbps per connector (using 32 SC-QAM DOCSIS 3.0 channels), that's ~100 Gbps per chassis. Quite a bit above my scale ;-)
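As a quick sanity check, here's that arithmetic spelled out (the ~1.2 Gbps per connector comes from 32 SC-QAM channels at roughly 38 Mbps each):

```python
# Chassis downstream capacity from the figures above.
connectors_per_dcam2 = 16
dcam2_cards = 6
gbps_per_connector = 1.2   # ~32 SC-QAM channels x ~38 Mbps (256-QAM, Annex B)

service_groups = connectors_per_dcam2 * dcam2_cards   # 96 DS service groups
chassis_gbps = service_groups * gbps_per_connector    # 115.2, i.e. ~100G
print(f"{service_groups} SGs, ~{chassis_gbps:.0f} Gbps per chassis")
```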

The E6k can also do DOCSIS 3.1, which we use today, though I'm not sure what the capacity limit is per DCAM/SG/connector when both SC-QAM and OFDM are used in combination.
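For a rough per-SG estimate of the mixed case, one might assume the OFDM block delivers somewhere around 8-10 bits/s/Hz after FEC and overhead; the actual limit depends on the DCAM's licensing and the modulation profiles in use, so treat these numbers as a sketch:

```python
# Hypothetical per-service-group capacity with SC-QAM and OFDM combined.
scqam_gbps = 32 * 0.038        # 32 SC-QAM channels at ~38 Mbps each
ofdm_mhz = 96                  # one 96 MHz OFDM block
for eff in (8, 10):            # assumed spectral efficiency, bits/s/Hz
    ofdm_gbps = ofdm_mhz * eff / 1000
    print(f"at {eff} b/s/Hz: ~{scqam_gbps + ofdm_gbps:.1f} Gbps per SG")
```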

--Blake

On 5/8/2020 4:13 PM, Luke Guillory wrote:
An E6K using gen 1 DCAMs can do about 32 service groups, give or take; it's not that hard to get to a point with splits where you want to go past those numbers. Gen 2 DCAMs double that by going to 16 connectors compared to 8. The cBR8 does fewer than the E6K.

The point of node splits is to lower customers per SG; you can't just split and stay on the same chassis if you're at capacity on slots.
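To illustrate why splits eat chassis capacity, here's a toy progression, assuming subscribers redistribute evenly with each split (starting figures are hypothetical):

```python
# Each node split halves customers per SG but doubles the SG count,
# so a chassis at its slot/connector limit runs out of room quickly.
customers_per_sg, sgs = 500, 32   # hypothetical starting point
for split in (1, 2, 3):
    sgs *= 2
    customers_per_sg //= 2
    print(f"after split {split}: {sgs} SGs, {customers_per_sg} customers/SG")
```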

If you take the Comcast approach and start pushing fiber deeper in order to remove actives, your node counts skyrocket. All the while, they're lowering counts on SGs as well.

Even us little guys are working on lowering customers per SG; those customers have to be moved somewhere, which would be another chassis if you're out of free connectors on like cards.



On May 8, 2020, at 4:02 PM, Blake Hudson <blake@ispn.net> wrote:


Aaron, I was thinking something similar. I've never once had a node split require moving a customer to a different CMTS. Even the very old and (relatively) low-capacity 7200 VXR could serve several nodes per line card and supported several line cards per chassis. The newer cBR8, E6k, and the like can serve many, many times more customers across dozens of nodes. Every L3 CMTS I've worked on uses something akin to ip unnumbered, so as long as the customer stays on the same CMTS, their IP address will continue to work regardless of what interface or line card their connection terminates on.
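A toy model of why that works: the subscriber's address is bound to the shared L3 bundle rather than the physical cable interface, so re-homing a modem within the same CMTS is invisible at L3. All names and addresses below are hypothetical:

```python
# Toy model: modem IPs hang off the L3 bundle ("ip unnumbered"-style),
# not the physical cable interface, so moving a modem between line
# cards on the same CMTS leaves its routed address untouched.
bundle = {"name": "Bundle1", "subnet": "203.0.113.0/24"}  # hypothetical
modems = {"aabb.cc00.0001": {"ip": "203.0.113.10", "port": "Cable1/0/0"}}

def rehome(mac: str, new_port: str) -> None:
    """Move a modem to a different physical interface after a node split."""
    modems[mac]["port"] = new_port  # only the physical attachment changes
    # modems[mac]["ip"] is untouched: routing still points at Bundle1

rehome("aabb.cc00.0001", "Cable2/0/0")
print(modems["aabb.cc00.0001"])  # same IP, new port
```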

On 5/8/2020 2:34 PM, Aaron Gould wrote:
We have a provisioning system (Promptlink) that we use to map cable modems to their static IP addresses. The provisioning system has a GUI front end, sits on Linux, and also acts as a DHCP server, etc. This is the same IP address that we use for cable-helper (like ip-helper, on a CMTS bundle IP interface) to forward DHCP requests from cable modem CPE, via the CMTS, unicast to Promptlink; the static IP address reservation within Promptlink is then sent back to the CPE.
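A minimal sketch of that flow, assuming a simple MAC-to-static-IP reservation table on the provisioning server (an illustration only, not Promptlink's actual API):

```python
# Toy DHCP flow: the CMTS relays (cable-helper) the CPE's request to the
# provisioning server, which answers from a static reservation table.
RESERVATIONS = {"aabb.cc00.0002": "198.51.100.25"}  # MAC -> static IP (hypothetical)

def handle_discover(mac: str, giaddr: str) -> str | None:
    """What the provisioning/DHCP server does with a relayed DISCOVER."""
    ip = RESERVATIONS.get(mac)
    if ip is None:
        return None  # no reservation; fall back to a dynamic pool
    # The offer goes back unicast via the relay (the CMTS bundle address).
    print(f"OFFER {ip} to {mac} via relay {giaddr}")
    return ip

handle_discover("aabb.cc00.0002", giaddr="203.0.113.1")
```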

This all continues to work, even during node splits, as long as we don't move that CM CPE to a different CMTS... which would rarely happen, since it's across town to get to our other RF environment served by a different CMTS using a different static IP subnet... since we don't do L2 via CMTSs in order to stitch that IP back into a more globally located static subnet... again, we don't do that. If the customer moves locations into a different CMTS area, they would be required to give back the single static /32 IP and get a different one. Unless they were a multi-static customer buying something like a /29... in which case we have no problem moving that /29 subnet off that CMTS and onto another one. That's easy.

We do, however, have more centrally located subnets for some of our single-static-IP customers in FTTH... but not CMTS DOCSIS.

-Aaron


-----Original Message-----
From: NANOG [mailto:nanog-bounces@nanog.org] On Behalf Of Javier Gutierrez
Guerra
Sent: Thursday, May 7, 2020 3:50 PM
To: nanog@nanog.org
Subject: How to manage Static IPs to customers

Hi there,
Just wanted to reach out and get an idea of how people are managing customers with static IPs, more specifically on DOCSIS networks where the customer could be moved between CMTSs when a node is split.

Thanks in advance for all responses,

Javier Gutierrez Guerra