Hi folks,

Over the past few weeks, I've attended webinars and watched videos organized by Intel. These activities have centred on 5G and examined applications (like "visual cloud" and "gaming"), as well as segment-oriented aspects (like edge networks, 5G RAN and 5G Core).

I am stunned (no hyperbole) by the emphasis on Kubernetes in particular, and on cloud-native computing in general. Equally stunning (for me), public telecommunications networks have been portrayed as having a history that moved from integrated software and hardware, to virtualization, and now to cloud-native computing. (See, for example, Alex Quach here <https://www.telecomtv.com/content/intel-vsummit-5g-ran-5g-core/the-5g-core-is-vital-to-deliver-the-promise-of-5g-39164/> @10:30.) I reason that Intel's implication is that virtualization is becoming obsolete.

Would anyone care to let me know their thoughts on this prediction?

Cheers all,

Etienne

--
Ing. Etienne-Victor Depasquale
Assistant Lecturer
Department of Communications & Computer Engineering
Faculty of Information & Communication Technology
University of Malta
Web. https://www.um.edu.mt/profile/etiennedepasquale
On 1/Aug/20 11:23, Etienne-Victor Depasquale wrote:
Would anyone care to let me know his thoughts on this prediction?
In the early dawn of SDN, when it was cool to have the RPs in Beirut and the line cards in Lagos, the industry quickly realized that was not entirely feasible.

If you are looking at over-the-top services, so-called cloud-native computing makes sense in order to deliver that value accordingly, and with agility. But as it pertains to actual network transport, I'm not yet sure the industry is at the stage where we are confident enough to decompose packet forwarding through a cloud. Network operators are more likely to keep using kit that integrates forwarding hardware as well as a NOS, as no amount of cloud architecting is going to rival a 100Gbps purpose-built port, for example.

Suffice it to say, there was a time when folk were considering running their critical infrastructure (such as your route reflectors) in AWS or similar. I'm not quite sure public clouds are at that level of confidence yet. So if some kind of cloud-native infrastructure is to be considered for critical infrastructure, I highly suspect it will be in-house.

On the other hand, for any budding entrepreneurs who want to get into the mobile game with as little cost as possible, there is a huge opportunity to do so by building all that infrastructure in an on-prem cloud-native architecture, and to offer packet forwarding using general-purpose hardware, provided they don't exceed their expectations. This way, they wouldn't have to deal with the high costs traditional vendors (Ericsson, Nokia, Huawei, Siemens, ZTE, etc.) impose. Granted, it would be small-scale, but maybe that is the business model. And in an industry where capex is fast out-pacing revenue, it would be the mobile network equivalent of low-cost carrier airlines.

I very well could be talking out the side of my neck, but my prediction is that mobile operators will be optimistic but cautious. I reckon a healthy mix between cloud-native and tried & tested practices.

Mark.
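A back-of-envelope calculation (my own nominal figures, not from the thread) shows why a 100 Gbps purpose-built port is such a high bar for general-purpose forwarding: at line rate with minimum-size frames, one CPU core gets only a couple of dozen cycles per packet.

```python
# Per-packet budget on a 100 Gbps link with minimum-size Ethernet frames.
# All figures are nominal assumptions for illustration.
LINK_BPS = 100e9                 # 100 Gbps line rate
WIRE_BYTES = 64 + 8 + 12         # min frame + preamble/SFD + inter-frame gap
CPU_HZ = 3e9                     # a nominal 3 GHz core

pps = LINK_BPS / (WIRE_BYTES * 8)        # packets per second at line rate
ns_per_packet = 1e9 / pps                # time budget per packet
cycles_per_packet = CPU_HZ / pps         # cycles one core can spend per packet

print(f"{pps / 1e6:.1f} Mpps")                               # ~148.8 Mpps
print(f"{ns_per_packet:.2f} ns/packet")                      # ~6.72 ns
print(f"{cycles_per_packet:.0f} cycles/packet on one core")  # ~20
```

Roughly 20 cycles is not even enough for a single cache miss, which is why line-rate software forwarding leans on batching, polling and dedicated cores (DPDK-style), while purpose-built silicon does it per port as a matter of course.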
The surprise for me regards Intel's (and the entire Cloud Native Computing Foundation's?) readiness to move past network functions run on VMs, and towards network functions run as microservices in containers. See, for example, Azhar Sayeed's (Red Hat) contribution here <https://www.lightreading.com/webinar.asp?webinar_id=1608> @15:33.

Cheers,
Etienne
On Sat, Aug 1, 2020 at 7:21 AM Etienne-Victor Depasquale <edepa@ieee.org> wrote:
The surprise for me regards Intel's (and the entire Cloud Native Computing Foundation's?) readiness to move past network functions run on VMs and towards network functions run as microservices in containers.
See, for example, Azhar Sayeed's (Red Hat) contribution here <https://www.lightreading.com/webinar.asp?webinar_id=1608>@15:33.
Be careful not to confuse vendors pumping stuff with what's actually deployed.

Also, AT&T has been doing virtualization for nearly 10 years now, so perhaps you were just not paying attention: https://www.fiercetelecom.com/telecom/at-t-target-for-virtualizing-75-its-ne...

Not sure it has helped AT&T in any meaningful way; their stock price is the same as it was in 2015.
"Be careful not to confuse vendors pumping stuff with what's actually deployed."

Well yes, there's always the hype factor to discount. The reason I'm asking this forum is to separate hype from hope.

"Also, AT&T has been doing virtualization for nearly 10 years now, so perhaps you were just not paying attention."

But the point is just that: how serious is this progression towards cloud-native, if so much effort was put into virtualization? Incidentally, AT&T's Brian Bearden was present here <https://intelvs.on24.com/vshow/inteldcgevents/#content/2393080>: just listen to how he defended Intel's containerization drive @24:56.
On 1/Aug/20 16:52, Etienne-Victor Depasquale wrote:
But the point is just that: how serious is this progression towards cloud-native, if so much effort was put in to virtualization?
I suspect that if a significant amount of investment has already gone into classic NFV, and for the most part it's working reasonably well, an operation would need to be seriously bored, or have tons of cash and time around, to uproot all of that work and change things around without some compelling technical or commercial reason to do so.

Despite the NFV world being well bedded in, it's still an evolving piece of tech, and this is one field where operators are prone to spending multiple times on the same thing, as they realize the previous decision fell out of favour with the community or their favorite vendor. I've seen it happen right here in South Africa, when a company built an "SDN" platform 7 different times in 3 years as the industry kept oscillating; going through whatever "SDN" platform vendors pushed, what the open community was putting out, OpenStack, etc. They eventually closed down that side of the business this year.

So for greenfield sites, maybe. But for existing installations that have been around a while, I guess the transition to "cloud-native" might be a bit of an ask, given the industry's history on this.

Mark.
The short answer is that the "cloud-native computing" folks need to talk to the Intel embedded-systems application engineers to discover that microservices have been running on Intel hardware in (non-standard) containers for years. We call it real-time computing, process control, ...

Current multi-terabit Ethernet interfaces require specialized hardware and interfaces that will connect fiber optics to clouds, but cannot be run on clouds.

Some comments on software-controlled telecom (/datacom) networking. When DTMF was invented, the telcos used in-band signaling for call control. Kevin Mitnick et al. designed red and black boxes to control the telco systems, so the telcos moved call control out of band. They created Signal Control Points, which managed the actual circuit-switch hardware to route calls (or, eventually, 64 kbps digital paths), and this protocol was SS#7. There were six to seven volumes of CLASS services enabled by SS#7, which ran on UNIX systems developed by Bell Labs.

In the mid-seventies, I worked on VM systems from DEC and Apollo, of which Apollo had the better virtualization; it worked across the network and was the first "cloud" system that I worked on.

In the mid-nineties, I worked on large gigabit/terabit routers, but again the control plane was part of the data plane, until ATM-based networks could use out-of-band control to set up an SVC between input port and output port and switch the IP packets instead of routing them, achieving network end-to-end delays of less than milliseconds. VLAN and MPLS protocols were developed to switch packets in the backbone of the networks, not to route them.

In 2000, we put our first pre-standard cloud together with multi-gigabit routers and Sun workstations at 45 PoPs in the US, 3 in Asia and 6 in Europe, and implemented a "cloud" O/S. Our fastest links were 10 Gbps. Now we can have 2-50 Tbps per fiber using superchannel DWDM technology between PoPs, data centers or cell towers.

Network control functions can change dynamically by using dynamically reprogrammable EPROMs from companies like Xilinx and Intel to repurpose firmware control and device functions. Embedded systems have implemented "microservices" for years, as that is how you handle interrupt-driven real-time control. We call this a context switch, which is still hardware-CPU-dependent. As far as I know, current standard containers do not handle real-time CPU interrupts, nor do they allow very tight timing response loops within the standard containers.

Certain 5G proposals are discussing network slicing et al. to virtualize control functions that can work better without virtualization. Current 5G protocol submissions that I have reviewed are way too complex to work out in the real world, on real networks, maintained by union labor. (This is not a dig at union labor, as they are some of the best-trained techs.) :)
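The point about tight timing loops can be felt with a tiny experiment (my own sketch, not from the thread): ask a general-purpose kernel, the same kernel a standard container shares, for a 1 ms sleep and measure how late it wakes you. Hard real-time control needs a bounded worst case, which best-effort scheduling does not promise.

```python
import time

# Request a 1 ms sleep 200 times and record how far past the deadline the
# scheduler actually wakes us. On a general-purpose (non-real-time) kernel
# the average overshoot is small, but the worst case is unbounded in
# principle -- exactly what interrupt-driven real-time control cannot accept.
TARGET_S = 0.001
overshoots_us = []
for _ in range(200):
    t0 = time.perf_counter()
    time.sleep(TARGET_S)
    elapsed = time.perf_counter() - t0
    overshoots_us.append((elapsed - TARGET_S) * 1e6)

print(f"mean overshoot: {sum(overshoots_us) / len(overshoots_us):.1f} µs")
print(f"worst overshoot: {max(overshoots_us):.1f} µs")
```

On a loaded host, or inside a container competing for CPU via cgroup limits, the worst case grows by orders of magnitude, which is why embedded systems answer interrupts in hardware-assisted context switches instead of asking a scheduler nicely.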
On 1/Aug/20 23:53, John Lee wrote:
In 2000 we put our first pre-standard cloud together with multi Gigabit routers and Sun workstations at 45 PoPs in the US, 3 in Asia and 6 in Europe and implemented a "cloud" O/S. Our fastest links were 10 Gbps. Now we can have 2-50 Tbps per fiber using Superchannel DWDM technology between PoP, data centers or cell towers. Network control functions can dynamically change by using Dynamic Reprogrammable EPROMs from companies like Xilinx and Intel to repurpose firmware control and device functions.
I believe that if a system has a single (and often simple) function, as in the case of DWDM, you can have an off-site control plane decide what the network should transport. The problem with IP networks is that they carry multiple services at various layers of the stack, so it becomes tricky not to have some kind of localized control plane, to ensure the right intelligence is onboard to advise the data plane about what to do in a changing network environment. While we can do this with a VM on a server, the server's NIC lets us down when we need to push hundreds of Gbps or tens of Tbps.
Certain 5G proposals are discussing network slicing et al to virtualize control functions that can work better without virtualization. Current 5G protocol submissions that I have reviewed are way too complex to work out in the real world on real networks, maintained by union labor. (This is not a dig at union labor, as they are some of the best trained techs.) :)
In a world where user traffic is increasingly moving away from private networks and onto the public Internet, I struggle to understand how 5G's "network slicing" is going to deliver what it promises, when the network is merely seen as a means to get users to what they want. In most cases, what they want will not be hosted locally within the mobile network, making discrete SLRs as prescribed by network slicing somewhat useless.

With all the bells & whistles 5G is claiming will change the world, I just don't see how that will work as more services move into over-the-top public clouds.

Mark.
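The NIC/bus ceiling mentioned above can be sized roughly (nominal 2020-era figures I'm assuming for illustration, not data from the thread): a PCIe 4.0 x16 slot tops out near 252 Gbps per direction, so one slot can feed about two 100GbE ports, and 10 Tbps through servers means dozens of slots before any packet processing even happens.

```python
# Nominal bandwidth ceiling of a PCIe 4.0 x16 slot vs. Ethernet port speeds.
# Raw link-layer numbers; protocol overheads reduce them further in practice.
GT_PER_LANE = 16e9       # PCIe 4.0 transfer rate per lane (16 GT/s)
LANES = 16
ENCODING = 128 / 130     # 128b/130b line encoding

slot_bps = GT_PER_LANE * LANES * ENCODING          # bits/s, one direction
ports_100g = int(slot_bps // 100e9)                # full-rate 100GbE ports per slot
slots_for_10t = 10e12 / slot_bps                   # slots needed for 10 Tbps

print(f"PCIe 4.0 x16: ~{slot_bps / 1e9:.0f} Gbps per direction")   # ~252 Gbps
print(f"{ports_100g} x 100GbE ports per slot at line rate")        # 2
print(f"~{slots_for_10t:.0f} slots to move 10 Tbps")               # ~40
```

A modular router backplane moves the same 10 Tbps across a purpose-built fabric in one chassis, which is the gap "cloud architecting" has to close before the data plane can live in a server farm.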
I reason that Intel's implication is that virtualization is becoming obsolete. Would anyone care to let me know his thoughts on this prediction?
Virtualization is not becoming obsolete ... quite the reverse, in fact, in all types of deployments I can see around.

The point is that a VM provides hardware virtualization, while Kubernetes with containers virtualizes the OS that apps and services run on, in isolation. Clearly, virtualizing the operating system is a much better and lighter option, as long as your level of virtualization is satisfactory, mainly in terms of security and of isolation & reservation of resource consumption.

Thx,
R.
Clearly, virtualizing the operating system is a much better and lighter option, as long as your level of virtualization is satisfactory, mainly in terms of security and of isolation & reservation of resource consumption.
That pretty much sums up Intel's view. To quote an Intel executive I was corresponding with:

"The purpose of the paper was to showcase how Communication Service Providers can move to a more nimble and future proof microservices based network architecture with cloud native functions, via container deployment methodologies versus virtual machines. The paper cites many benefits of moving to a microservices architecture beyond whether it is done in a VM environment or cloud native. We believe the 5G networks of the future will benefit greatly by implementing such an approach to deploying new services."

The paper referred to is this one <https://www.intel.in/content/www/in/en/communications/why-containers-and-cloud-native-functions-paper.html>.

Cheers,
Etienne
An operating system is just a high-level machine. That the M-plane in a VM is implemented in software isn't relevant, as pretty much all hardware CPUs are implemented in software as well, so a VM is just virtualizing software already.

Containerization is VM, but using the OS as the M-plane. As long as the OS delivers all the functions needed by applications, it's a perfectly reasonable, and even preferable, plane to virtualize.

-mel
Wondering whether the industry will consider a containerised data-plane in addition to the control-plane (like cRPD).

Having just the control-plane and then hacking the kernel for doing the data-plane bit is ... well, not as straightforward as having a dedicated data-plane VM or, potentially, a container.

adam

From: NANOG <nanog-bounces+adamv0025=netconsultings.com@nanog.org> On Behalf Of Etienne-Victor Depasquale
Sent: Saturday, August 1, 2020 7:09 PM
To: Robert Raszuk <robert@raszuk.net>
Cc: NANOG <nanog@nanog.org>
Subject: Re: Has virtualization become obsolete in 5G?
The point is that a VM provides hardware virtualization, while Kubernetes with containers virtualizes the OS that apps and services run on, in isolation. Clearly, virtualizing operating systems (as long as your level of virtualization is satisfactory, mainly in terms of security and resource-consumption isolation and reservation) is a much better and lighter option.

Thx,
R.
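Robert's "lighter option" point can be made concrete with a toy capacity calculation. All numbers below are illustrative assumptions, not figures from this thread: the only claim is that each VM pays a per-instance guest-OS tax that a container, sharing the host kernel, does not.

```python
# Back-of-the-envelope sketch: how many network-function instances fit on
# one host when each VM carries its own guest OS, versus containers that
# share the host kernel. All sizes are hypothetical, for illustration only.

HOST_RAM_GB = 256            # assumed host size
APP_RAM_GB = 4               # RAM the network function itself needs (assumed)
GUEST_OS_RAM_GB = 2          # per-VM guest OS + hypervisor overhead (assumed)
CONTAINER_OVERHEAD_GB = 0.1  # per-container runtime overhead (assumed)

vms_per_host = int(HOST_RAM_GB // (APP_RAM_GB + GUEST_OS_RAM_GB))
containers_per_host = int(HOST_RAM_GB // (APP_RAM_GB + CONTAINER_OVERHEAD_GB))

print(vms_per_host, containers_per_host)  # containers pack noticeably denser
```

The exact figures don't matter; the per-instance overhead gap is the point Robert is making, alongside the security and isolation caveat.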
Intel definitely is pressing for a containerized data plane. Here <https://intelvs.on24.com/vshow/inteldcgevents/#content/2393080>, @20:49 (registration required), I placed that very question, and it took a bit of humming to obtain a straight answer :)

Etienne

On Tue, Aug 4, 2020 at 5:38 PM <adamv0025@netconsultings.com> wrote:
Wondering whether the industry will consider containerised data-plane in addition to control-plane (like cRDP).
Having just control-plane and then hacking to kernel for doing the data-plane bit is …well not as straight forward as having a dedicated data-plane VM or potentially container.
adam
From: NANOG <nanog-bounces+adamv0025=netconsultings.com@nanog.org> On Behalf Of Etienne-Victor Depasquale
Sent: Saturday, August 1, 2020 7:09 PM
To: Robert Raszuk <robert@raszuk.net>
Cc: NANOG <nanog@nanog.org>
Subject: Re: Has virtualization become obsolete in 5G?
Clearly to virtualize operating systems as long as your level of virtualization mainly in terms of security and resource consumption isolation & reservation is satisfactory is a much better and lighter option.
That pretty much sums up Intel's view.
To quote an Intel executive I was corresponding with:
"The purpose of the paper was to showcase how Communication Service Providers can move to a more nimble and future proof microservices based network architecture with cloud native functions, via container deployment methodologies versus virtual machines. The paper cites many benefits of moving to a microservices architecture beyond whether it is done in a VM environment or cloud native. We believe the 5G networks of the future will benefit greatly by implementing such an approach to deploying new services."
The paper referred to is this one <https://www.intel.in/content/www/in/en/communications/why-containers-and-cloud-native-functions-paper.html>.
Cheers,
Etienne
On Tue, Aug 4, 2020 at 12:00 PM Etienne-Victor Depasquale <edepa@ieee.org> wrote:
Intel definitely is pressing for containerized data plane.
Here, @20:49 (registration required), I placed that very question and it took a bit of humming to obtain a straight answer :)
I'm shocked, shocked to discover that a company that sells CPUs thinks that a dataplane should run on a CPU... W
-- I don't think the execution is relevant when it was obviously a bad idea in the first place. This is like putting rabid weasels in your pants, and later expressing regret at having chosen those particular rabid weasels and that pair of pants. ---maf
On 4/Aug/20 17:38, adamv0025@netconsultings.com wrote:
Wondering whether the industry will consider containerised data-plane in addition to control-plane (like cRDP).
Having just control-plane and then hacking to kernel for doing the data-plane bit is …well not as straight forward as having a dedicated data-plane VM or potentially container.
Well, there has been some discussion in the past 2 years about whether vendors can open up some of their data planes and allow those with enough energy and clue (that means not me, hehe) to have their take on what they can do with the chips in some kind of form factor, even without their OS.

Outside of that, merchant silicon is the next step, before we try to hack it onto general-purpose CPUs, as we've been doing for some time.

Mark.
I was actually talking about routing on the host, with a virtualized control-plane and a virtualized data-plane. Currently we either have a VM combining both, or a separate VM for each; alternatively, we can have a container for the control-plane. I was wondering whether the idea behind containerization is to do the virtual data-plane as a container as well.

In terms of containerization on vendor HW, or opening up the data-plane, XR7 from Cisco seems to be leading the way:
- the system runs in containers on the RE and line cards, and allows one to run 3rd-party containers,
- it allows one to run 3rd-party routing protocols to program the RIB,
- it allows one to program the FIB via Open Forwarding Abstraction (OFA) APIs,
- and XR itself can run on selected 3rd-party HW.

That pretty much covers all the avenues we as operators are interested in; of course, it's not all roses and unicorns, and further development and streamlining will be necessary.

adam
On 5/Aug/20 16:15, adamv0025@netconsultings.com wrote:
I was actually talking about routing on the host and virtual control-plane and virtualized data-plane.
Currently we either have a VM combining both or a separate VM for each. Alternatively we can have a container for the control-plane.
I was wondering if the idea behind containerization is to do virtual data-plane as a container as well.
Good question. My understanding is that the cloud-native model the mobile folk want is about delivering over-the-top services, not necessarily about turning containers into packet-forwarding routers at scale. However, the question is interesting, so we'll see.
In terms of containerization on vendor HW or opening up data-plane, seems like XR7 from Cisco is leading the way:
- System runs in containers on RE and Line-cards, allows one to run 3rd-party containers,
- Allows one to run 3rd-party routing protocols to program RIB
- Allows one to program FIB via Open Forwarding Abstraction (OFA) APIs
- And XR itself can run on selected 3rd-party HW.
That pretty much covers all the avenues we as operators are interested in, of course it’s not all just roses and unicorns and there will be further development and streamlining necessary.
That's a good start, indeed. Do we know if Cisco are opening up their own data plane, or Broadcom ones?

Mark.
I think you'd be surprised how much of the 5G Core is containerized, for both the data and control planes, in the next-generation networks providers are currently deploying.
On 5/Aug/20 17:07, Shane Ronan wrote:
I think you'd be surprised how much of the 5G Core is containerized for both the data and control planes in the next generations providers are currently deploying.
It's what I expect for new entrants that don't want to deal with traditional vendors. I'd be curious to see if legacy operators are shifting traffic away from iron to servers, and at what rate.

Mark.
Yes they are for 5G core.
On 6/Aug/20 15:43, Shane Ronan wrote:
Yes they are for 5G core.
Right, but for legacy operators, or new entrants?

If you know where we can find some info about deployments and experiences, that would be very interesting to read. We've all been struggling to make Intel CPUs shift 10s, 40s and 100s of Gbps of revenue traffic as a routing platform, so I'd like to know how the operators are getting on with this.

Mark.
Mark,

I don't think you're going to move those volumes with Intel x86 chips. For example, AT&T's Open Compute Project whitebox architecture is based on Broadcom Jericho2 processors, with an aggregate on-chip throughput of 9.6 Tbps, supporting 24 ports at 400 Gbps each. This is where AT&T's 5G slicing is taking place.

https://about.att.com/story/2019/open_compute_project.html

Intel has developed nothing like this, and has had to resort to acquiring multi-chip solutions to get these speeds (e.g. its purchase of Barefoot Networks' Tofino2 IP). The x86 architecture is too complex and carries too much non-network-related baggage to be a serious player in 5G slicing.

-mel
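A quick sanity check shows Mel's quoted figures are internally consistent: 24 ports at 400 Gbps each is exactly the stated aggregate.

```python
# Sanity-check the quoted Jericho2 figures: 24 ports x 400 Gbps per port.
ports = 24
gbps_per_port = 400
total_tbps = ports * gbps_per_port / 1000  # convert Gbps to Tbps
print(total_tbps)  # 9.6, matching the quoted aggregate on-chip throughput
```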
On 6/Aug/20 17:43, Mel Beckman wrote:
I don’t think you’re going to move those volumes with Intel X86 chips. For example, AT&T’s Open Compute Project whitebox architecture is based on Broadcom Jericho2 processors, with aggregate on-chip throughput of 9.6 Tbps, and which support 24 ports at 400 Gbps each. This is where AT&T’s 5G slicing is taking place.
My point exactly.

If much of the cloud-native work is happening on servers with Intel chips, and part of the micro-services is to also provide data plane functionality at that level, I don't see how it can scale for legacy mobile operators. It might make sense for niche, start-up mobile operators with little-to-no traffic serving some unique case, but not the classics we have today.

Now, if they are writing their own bits of code on or for white boxes based on Broadcom et al, I'm not sure that falls in the realm of "micro-services with Kubernetes". But I could be wrong.
Intel has developed nothing like this, and has had to resort to acquisition of multi-chip solutions to get these speeds (e.g. its purchase of Barefoot Networks Tofino2 IP).
The X86 architecture is too complex and carries too much non-network-related baggage to be a serious player in 5G slicing.
Which we, as network operators, can all agree on. But the 5G folk seem to have other ideas, so I just want to see what is actually truth, and what's noise.

Mark.
On Thu, Aug 6, 2020 at 11:52 AM Mark Tinka <mark.tinka@seacom.com> wrote:
On 6/Aug/20 17:43, Mel Beckman wrote:
I don’t think you’re going to move those volumes with Intel X86 chips. For example, AT&T’s Open Compute Project whitebox architecture is based on Broadcom Jericho2 processors, with aggregate on-chip throughput of 9.6 Tbps, and which support 24 ports at 400 Gbps each. This is where AT&T’s 5G slicing is taking place.
My point exactly.
If much of the cloud-native is happening on servers with Intel chips, and part of the micro-services is to also provide data plane functionality at that level, I don't see how it can scale for legacy mobile operators. It might make sense for niche, start-up mobile operators with little-to-no traffic serving some unique case, but not the classics we have today.
Isn't this just, really:
1) some network gear with SDN bits that live on the next-rack-over servers/kubes
2) services (microservices!) that do the SDN functions AND NFV functions AND billing (extending IMS to the edge, etc.)
Now, if they are writing their own bits of code on or for white boxes based on Broadcom et al, not sure that falls in the realm of "micro-services with Kubernetes". But I could be wrong.
The discussion (I think) got conflated here... there's "network equipment" and "microservices equipment" (service equipment?). And really, "I need a fast, cheap network device I can dynamically program for things which don't really smell like DFZ-size LPM routing" is just code for: "SDN-control the switch, sending traffic either at 'default' or, based on 'service data', through some microservice architecture of NFV things."
Intel has developed nothing like this, and has had to resort to acquisition of multi-chip solutions to get these speeds (e.g. its purchase of Barefoot Networks Tofino2 IP).
The X86 architecture is too complex and carries too much non-network-related baggage to be a serious player in 5G slicing.
Which we, as network operators, can all agree on.
But the 5G folk seem to have other ideas, so I just want to see what is actually truth, and what's noise.
5g folk seem to have lots of good marketing, and reasons to sell complexity to their carrier 'partners' (captive prisoners? maybe that's too pejorative :) )
On 6/Aug/20 21:05, Christopher Morrow wrote:
Isn't this just, really: 1) some network gear with SDN bits that live on the next-rack over servers/kubes 2) services (microservices!) that do the SDN functions AND NFV functions AND billing (extending IMS to the edge etc)
I can already see how we are going to spend the next 10 years defining this :-)...
the discussion (I think) got conflated here... there's: "network equipment" and "microservices equipment" (service equipment?)
and really 'I need a fast, cheap network device I can dynamically program for things which don't really smell like 'DFZ size LPM routing"'
is just code for: "sdn control the switch, sending traffic either at 'default' or based on 'service data' some microservice architecture of NFV things.
I think we've just given vendors job security for another decade, hehe.
5g folk seem to have lots of good marketing, and reasons to sell complexity to their carrier 'partners' (captive prisoners? maybe that's too pejorative :) )
Amen! Mark.
On Fri, 07 Aug 2020 07:29:49 +0200, Mark Tinka said:
On 6/Aug/20 21:05, Christopher Morrow wrote:
Isn't this just, really: 1) some network gear with SDN bits that live on the next-rack over servers/kubes 2) services (microservices!) that do the SDN functions AND NFV functions AND billing (extending IMS to the edge etc)
I can already see how we are going to spend the next 10 years defining this :-)...
With research consultant reports tagging along every step of the way. :)
From: Mark Tinka <mark.tinka@seacom.com> Cc: adamv0025@netconsultings.com; North American Network Operators'
On 6/Aug/20 15:43, Shane Ronan wrote:
Yes they are for 5G core.
Right, but for legacy operators, or new entrants?
If you know where we can find some info about deployment and experiences, that would be very interesting to read.
We've all been struggling to make Intel CPU's shift 10's, 40's and 100's of Gbps of revenue traffic as a routing platform, so would like to know how the operators are getting on with this.
Mark,

1) First you have your edge: lots of small instances that are meant to be horizontally scaled (not vertically, i.e. not 40s/100s of Gbps pushed via a single Intel CPU). That's your NFVI. It could be a compute host in a DC "cloud", or in a customer office (acting as a CPE), or at the rooftop of an office building, i.e. fog/edge computing (e.g. hosting self-driving intersection apps via 5G; to your point regarding latency in the metro), or in the same rack as core routers (acting as a vRR), or actually inside a router as a routing-engine card (hosting some containerized app).

2) Any of the compute hosts mentioned above can host one or more network functions of any type you can think of, ranging from EPG, SBC and PBX all the way to PE router, LB, FW/WAF or IDS.

3) Inside a compute host it's CPU-based forwarding, but as soon as you leave the compute host's NICs, there's a world of solely NPU-based forwarding (that's where you do 40s, 100s, or even 400s of Gbps).

4) Now, how you make changes to the control planes of these NFs (i.e. virtual CPU-based NFs and physical NPU-based NFs) programmatically, that's the realm of SDN. If you want to do it right, you do it in an abstracted, declarative way (not exposing the complexity to a control program/user, but rather localizing it to a given abstraction layer), performing tasks like:
- defining service topology/access control, a.k.a. micro-segmentation (e.g. A and B can both talk to C, but not to each other);
- traffic engineering, a.k.a. service chaining, a.k.a. network slicing (e.g. traffic type X should pass through NFs A, B and C, but traffic type Y should pass only through A and C).

5) And for completeness, in the virtual world you have the task of VNF lifecycle management (because the VNFs, and the virtual networks connecting them, can be instantiated on demand).

adam
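The two example intents in point 4 above can be sketched declaratively in a few lines. This is a toy model: the names and data shapes are invented for illustration, not any real SDN controller's API; a real controller would render these intents into flow rules or slice configurations.

```python
# Toy declarative model of the two SDN tasks above:
# micro-segmentation and service chaining / slicing.

# Intent: A and B can both talk to C, but not to each other.
segmentation_policy = {("A", "C"), ("B", "C"), ("C", "A"), ("C", "B")}

def allowed(src: str, dst: str) -> bool:
    """Micro-segmentation check: only explicitly permitted pairs pass."""
    return (src, dst) in segmentation_policy

# Intent: traffic type X traverses NFs A, B, C; type Y only A and C.
service_chains = {"X": ["A", "B", "C"], "Y": ["A", "C"]}

def chain_for(traffic_type: str) -> list:
    """Service chaining / slicing: the ordered NF path for a traffic type."""
    return service_chains[traffic_type]

print(allowed("A", "C"), allowed("A", "B"))  # True False
print(chain_for("Y"))                        # ['A', 'C']
```

The point of the abstraction-layer argument is that an operator expresses only these intents; the complexity of compiling them onto CPU-based or NPU-based forwarders stays below the abstraction.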
On 10/Aug/20 15:15, adamv0025@netconsultings.com wrote:
Mark, 1) first you have your edge - lots of small instances that are meant to be horizontally scaled (not vertically- i.e. not 40's/100's of Gbps pushed via single Intel CPU) - that's your NFVI. - could be compute host in a DC "cloud", or in a customer office (acting as CPE), or at the rooftop of the office building i.e. (fog/edge computing) -e.g. hosting self-driving intersection apps via 5G -to your point regarding latency in metro), or in the same rack as core routers (acting as vRR), or actually inside a router as a routing engine card (hosting some containerized app).
Nothing new. This happens today already. The main issue, as discussed earlier, was licensing options for CPE-type deployments. This is what killed our plan for the same. But as an RR, yes, since 2014.
2) Any of the compute hosts mentioned above can host one or more of any type of the network function you can think of ranging from EPG, SBC, PBX, all the way to PE-Router, LB, FW/WAF or IDS.
Yes, but the use-case determines the scale limitations. And there are many services that you cannot scale by offloading to several little boxes and expect the same predictability, e.g., a vArborTMS.
3) While inside a compute host it's CPU based forwarding, but as soon as you leave compute host's NICs there's world of solely NPU based forwarding (that's where you do 40's, 100's, or even 400's Gbps).
Yes, plenty of white boxes in the world ready to run an OS and ship with a version of Broadcom. It's a purpose-built device doing one thing and one thing only.
4) Now how you make changes to control-planes of these NFs (i.e. virtual CPU-based NFs and physical NPU-based NFs) programmatically, that's the realm of SDN. - If you want to do it right you do it in an abstracted declarative way (not exposing the complexity to a control program/user - but rather localizing it to a given abstraction layer) Performing tasks like: - Defining service topology/access control a.k.a. micro segmentation (e.g. A and B can both talk to C, but not to each other). - Traffic engineering a.k.a. service chaining, a.k.a. network slicing (e.g. traffic type x should pass through NF A, B and C, but traffic type Y should pass only through A and C)
This is the bit that I see working well on a per deployment basis, if operators aren't too concerned about standardizing the solution. Where the industry has kept falling over is wanting to standardize the entire orchestration piece, which is very noble, but ultimately, fraught with many a complication. I'm sure we'd all like to see a standard on how we orchestrate the network and services, but I'm not sure that is practical. After all, operators are autonomous systems.
5) And for completeness, in the virtual world you have the task of VNF lifecycle management (cause the VNFs and virtual networks connecting them can be instantiated on demand)
So I first read all about this in 2015, through a document Cisco published called "Cisco vMS 1.0 Introduction and Overview Design Guide". Safe to say not much has changed in the objective since then :-).

Mark.
From: Mark Tinka <mark.tinka@seacom.com> Sent: Tuesday, August 11, 2020 2:19 PM
On 10/Aug/20 15:15, adamv0025@netconsultings.com wrote:
Mark, 1) first you have your edge - lots of small instances that are meant to be horizontally scaled (not vertically- i.e. not 40's/100's of Gbps pushed via single Intel CPU) - that's your NFVI. - could be compute host in a DC "cloud", or in a customer office (acting as CPE), or at the rooftop of the office building i.e. (fog/edge computing) -e.g. hosting self-driving intersection apps via 5G -to your point regarding latency in metro), or in the same rack as core routers (acting as vRR), or actually inside a router as a routing engine card (hosting some containerized app).
Nothing new. This happens today already.
Yes apart from the fog/edge computing all aforementioned is business as usual.
The main issue, as discussed earlier, was licensing options for CPE-type deployments. This is what killed our plan for the same.
Yes vendors need to abandon the old physical unit types of licensing schemes for the horizontal scaling to make sense.
2) Any of the compute hosts mentioned above can host one or more of any type of the network function you can think of ranging from EPG, SBC, PBX, all the way to PE-Router, LB, FW/WAF or IDS.
Yes, but the use-case determines the scale limitations. And there are many services that you cannot scale by offloading to several little boxes and expect the same predictability, e.g., a vArborTMS.
Can you elaborate? Apart from the licensing scheme, what stops one from redirecting traffic to one vTMS instance per, say, each transit link, or per destination /24 (i.e. horizontal scaling)? (vTMS is not stateful, or is it?)
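The horizontal-scaling idea above, one vTMS instance per destination /24, could be sketched as a simple steering function. Instance names and the modulo placement are invented for illustration; a real deployment would steer via BGP/FlowSpec announcements per prefix rather than a lookup function.

```python
# Sketch: steer scrubbing per destination /24, mapping each /24 onto one
# of N vTMS instances (hypothetical names) so load spreads horizontally.
import ipaddress

VTMS_INSTANCES = ["vtms-1", "vtms-2", "vtms-3", "vtms-4"]

def vtms_for(dst_ip: str) -> str:
    """Pick a scrubber for a destination by its covering /24."""
    net = ipaddress.ip_network(dst_ip + "/24", strict=False)
    idx = int(net.network_address) % len(VTMS_INSTANCES)
    return VTMS_INSTANCES[idx]

# All addresses in the same /24 land on the same instance.
print(vtms_for("192.0.2.1") == vtms_for("192.0.2.200"))  # True
```

This only works if, as the parenthetical asks, the scrubbing function is effectively stateless per prefix; any cross-prefix state would break the sharding.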
4) Now how you make changes to the control-planes of these NFs (i.e. virtual CPU-based NFs and physical NPU-based NFs) programmatically, that's the realm of SDN. - If you want to do it right you do it in an abstracted declarative way (not exposing the complexity to a control program/user - but rather localizing it to a given abstraction layer) Performing tasks like: - Defining service topology/access control a.k.a. micro segmentation (e.g. A and B can both talk to C, but not to each other). - Traffic engineering a.k.a. service chaining, a.k.a. network slicing (e.g. traffic type x should pass through NF A, B and C, but traffic type Y should pass only through A and C)
This is the bit that I see working well on a per deployment basis, if operators aren't too concerned about standardizing the solution.
Where the industry has kept falling over is wanting to standardize the entire orchestration piece, which is very noble, but ultimately, fraught with many a complication.
Can you please point out any efforts where operators are trying to standardize the orchestration piece? I think the industry is not falling over on this, just progressing at a steady rate, while producing artefacts in the process that you may or may not want to use (I actually find them very useful, and not impeding).
I'm sure we'd all like to see a standard on how we orchestrate the network and services, but I'm not sure that is practical. After all, operators are autonomous systems.
Personally, I don't need a standard on how I should orchestrate network services. There are very interesting and useful ideas, or, better put, "frameworks", that anyone can follow (and most are); but standardizing these... no point, in my opinion.
5) And for completeness, in the virtual world you have the task of VNF lifecycle management (cause the VNFs and virtual networks connecting them can be instantiated on demand)
So I first read all about this in 2015, through a document Cisco published called "Cisco vMS 1.0 Introduction and Overview Design Guide".
Safe to say not much as changed in the objective, since then :-).
Really? Never mind then ;) adam
On 11/Aug/20 17:55, adamv0025@netconsultings.com wrote:
Can you elaborate? Apart from licensing scheme what stops one from redirecting traffic to one vTMS instance per say each transit link or per destination /24 (i.e. horizontal scaling)? (vTMS is not stateful or is it?)
In an effort to control costs, we considered a vTMS from Arbor. Even Arbor didn't recommend it, which was completely unsurprising.

Arbor can flog you a TMS that can sweep 10Gbps, 20Gbps, 40Gbps or 100Gbps worth of traffic. I don't see how you can run that kind of traffic in a VM.
Can you please point out any efforts where operators are trying to standardize the orchestration piece?
NETCONF, YANG, LSO.
I think industry is not falling over on this just progressing at steady rate while producing artefacts in the process that you may or may not want to use (I actually find them very useful and not impeding).
What's 10 years between friends :-)...
Personally, I don't need a standard on how I should orchestrate network services. There are very interesting and useful ideas, or better put "frameworks", that anyone can follow (and most are), but standardizing these, ...no point in my opinion.
Now that's something we can agree on... and once folk realize that getting your solution going is the end-goal, rather than bickering over whether NETCONF or YANG or SSH or whatever should be the BCOP, is when we shall finally see some real progress.

Personally, I don't really care if you choose to keep CLI, or employ thousands of software heads to automate said CLI, as long as you are happy and not wasting time taking every meeting from every vendor about "automation".

Mark.
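For readers unfamiliar with the NETCONF piece of the standardization mentioned above: it is just a wire format for pushing configuration, separate from any orchestration logic. A minimal sketch of building an `<edit-config>` RPC with Python's standard library follows; the `<interface>` payload here is a simplified, hypothetical stand-in for a real YANG-modeled config, not any vendor's actual schema.

```python
# Build a minimal NETCONF <edit-config> RPC with the standard library.
# The <interface> payload is a simplified stand-in, not a real YANG model.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

rpc = ET.Element("{%s}rpc" % NC, {"message-id": "101"})
edit = ET.SubElement(rpc, "{%s}edit-config" % NC)
target = ET.SubElement(edit, "{%s}target" % NC)
ET.SubElement(target, "{%s}candidate" % NC)   # edit the candidate datastore
config = ET.SubElement(edit, "{%s}config" % NC)

iface = ET.SubElement(config, "interface")    # hypothetical payload model
ET.SubElement(iface, "name").text = "ge-0/0/0"
ET.SubElement(iface, "description").text = "customer-facing"

payload = ET.tostring(rpc, encoding="unicode")
print(payload)
```

In practice a client library (e.g. ncclient) would frame and send this over SSH; the standardized part is the protocol and the YANG models, not how an operator's orchestrator decides what to push.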
Two more bits' worth ... About a year ago, during a discussion with a local network operator's CTO, I was told that dependency on the operator's employees for production of software gave the employees too much leverage over their employer (the operator, here). Perhaps industrial standardization of internal processes (including orchestration APIs) weakens this leverage. Cheers, Etienne On Tue, Aug 11, 2020 at 8:48 PM Mark Tinka <mark.tinka@seacom.com> wrote:
On 11/Aug/20 17:55, adamv0025@netconsultings.com wrote:
Can you elaborate? Apart from licensing scheme what stops one from redirecting traffic to one vTMS instance per say each transit link or per destination /24 (i.e. horizontal scaling)? (vTMS is not stateful or is it?)
In an effort to control costs, we considered a vTMS from Arbor.
Even Arbor didn't recommend it, which was completely unsurprising.
Arbor can flog you a TMS that can sweep 10Gbps, 20Gbps, 40Gbps or 100Gbps worth of traffic. I don't see how you can run that kind of traffic in a VM.
Can you please point out any efforts where operators are trying to standardize the orchestration piece?
NETCONF, YANG, LSO.
I think the industry is not falling over on this, just progressing at a steady rate while producing artefacts in the process that you may or may not want to use (I actually find them very useful and not impeding).
What's 10 years between friends :-)...
Personally, I don't need a standard on how I should orchestrate network services. There are very interesting and useful ideas, or better put, "frameworks", that anyone can follow (and most are), but standardizing these... no point, in my opinion.
Now that's something we can agree on... once folk realize that getting your solution going is the end goal - rather than bickering over whether NETCONF or YANG or SSH or whatever should be the BCOP - is when we shall finally see some real progress.
Personally, I don't really care if you choose to keep CLI or employ thousands of software heads to automate said CLI. As long as you are happy and not wasting time taking every meeting from every vendor about "automation".
Mark.
-- Ing. Etienne-Victor Depasquale Assistant Lecturer Department of Communications & Computer Engineering Faculty of Information & Communication Technology University of Malta Web. https://www.um.edu.mt/profile/etiennedepasquale
On 12/Aug/20 09:49, Etienne-Victor Depasquale wrote:
Two more bits' worth ...
About a year ago, during a discussion with a local network operator's CTO, I was told that dependency on the operator's employees for production of software gave the employees too much leverage over their employer (the operator, here).
Perhaps industrial standardization of internal processes (including orchestration APIs) weakens this leverage.
I'm not sure that's a viable argument, considering that any good employee (network, software, etc.) will inherently have considerable leverage over their employer. And any good employer knows what to do when they realize they have good talent - either they do what is required to retain that talent, or live with the risk of losing it to the competition.

Moreover, an employer doesn't have to give in to the whims of a conceited employee; and most do not.

Standardizing processes would do little to allay the fears of a CTO who is worried about being "cornered" by his/her staff. The real fear such a CTO would have is in the implementation and operation of those processes at a technology level, i.e., where the rubber meets the actual road.

If companies are going to be that scared of their employees, and if employees are going to play games with their employers, they each have other problems to solve first :-).

Mark.
Moreover, an employer doesn't have to give in to the whims of a conceited employee; and most do not.
This point plays straight up the path of the argument I recounted.

Yes, I agree that there's a relational problem inherent to the situation I described. Wouldn't any wise employer playing the relationship game ensure that he's got cards to play? And wouldn't the standardization approach be part of the deck?

Cheers, Etienne

On Wed, Aug 12, 2020 at 10:00 AM Mark Tinka <mark.tinka@seacom.com> wrote:
On 12/Aug/20 09:49, Etienne-Victor Depasquale wrote:
Two more bits' worth ...
About a year ago, during a discussion with a local network operator's CTO, I was told that dependency on the operator's employees for production of software gave the employees too much leverage over their employer (the operator, here).
Perhaps industrial standardization of internal processes (including orchestration APIs) weakens this leverage.
I'm not sure that's a viable argument, considering that any good employee (network, software, etc.) will inherently have considerable leverage over their employer. And any good employer knows what to do when they realize they have good talent - either they do what is required to retain that talent, or live with the risk of losing it to the competition.
Moreover, an employer doesn't have to give in to the whims of a conceited employee; and most do not.
Standardizing processes would do little to allay the fears of a CTO who is worried about being "cornered" by his/her staff. The real fear such a CTO would have is in the implementation and operation of those processes at a technology level, i.e., where the rubber meets the actual road.
If companies are going to be that scared of their employees, and if employees are going to play games with their employers, they each have other problems to solve first :-).
Mark.
On 12/Aug/20 10:50, Etienne-Victor Depasquale wrote:
This point plays straight up the path of the argument I recounted.
Yes, I agree that there's a relational problem inherent to the situation I described. Wouldn't any wise employer playing the relationship game ensure that he's got cards to play? And wouldn't the standardization approach be part of the deck?
In theory, yes. But the department most interested in this might be HR, while the one most able to implement it will be the CTO's team.

Until NOGs start having breakout sessions on employee-employer dynamics, or until HR conferences start thinking about telecoms-industry orchestration standardization to mitigate employee-employer dynamics, it will remain a theory :-).

Mark.
From: Mark Tinka <mark.tinka@seacom.com> Sent: Tuesday, August 11, 2020 7:45 PM
On 11/Aug/20 17:55, adamv0025@netconsultings.com wrote:
Can you elaborate? Apart from the licensing scheme, what stops one from redirecting traffic to one vTMS instance per, say, each transit link, or per destination /24 (i.e. horizontal scaling)? (vTMS is not stateful, or is it?)
In an effort to control costs, we considered a vTMS from Arbor.
Even Arbor didn't recommend it, which was completely unsurprising.
Arbor can flog you a TMS that can sweep 10Gbps, 20Gbps, 40Gbps or 100Gbps worth of traffic. I don't see how you can run that kind of traffic in a VM.
Fair enough, but you actually haven't answered my question about why you think that VNFs such as a vTMS cannot be implemented in a horizontal-scaling model. In my opinion, any NF, virtual or physical, can be horizontally scaled.
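Adam's horizontal-scaling idea - steering each destination /24 to one of several scrubbing instances - can be sketched roughly as follows. The instance names and the modulo placement policy are purely illustrative assumptions, not any vendor's mechanism; in practice the steering would be done by BGP next-hop injection or similar, not application code.

```python
import ipaddress

# Hypothetical pool of scrubbing (vTMS-like) instances; names are made up.
VTMS_INSTANCES = ["vtms-1", "vtms-2", "vtms-3", "vtms-4"]

def instance_for(dst_ip: str) -> str:
    """Pick a scrubbing instance based on the destination's covering /24."""
    net = ipaddress.ip_network(f"{dst_ip}/24", strict=False)
    # A stable function of the /24 means every flow toward one prefix
    # lands on the same instance - no per-flow state needed upstream.
    return VTMS_INSTANCES[int(net.network_address) % len(VTMS_INSTANCES)]

# Two hosts in the same /24 always map to the same instance.
print(instance_for("198.51.100.7"))
print(instance_for("198.51.100.200"))
```

The point of the sketch is only that the placement decision can be stateless and deterministic, which is what makes a scrubbing function a plausible candidate for horizontal scaling.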
Can you please point out any efforts where operators are trying to standardize the orchestration piece?
NETCONF, YANG, LSO.
Right, and of these three you mentioned, what is it that you'd say operators are waiting for to get standardized, in order for them to start implementing network services orchestration?
I think the industry is not falling over on this, just progressing at a steady rate while producing artefacts in the process that you may or may not want to use (I actually find them very useful and not impeding).
What's 10 years between friends :-)...
Personally, I don't need a standard on how I should orchestrate network services. There are very interesting and useful ideas, or better put, "frameworks", that anyone can follow (and most are), but standardizing these... no point, in my opinion.
Now that's something we can agree on... once folk realize that getting your solution going is the end goal - rather than bickering over whether NETCONF or YANG or SSH or whatever should be the BCOP - is when we shall finally see some real progress.
Personally, I don't really care if you choose to keep CLI or employ thousands of software heads to automate said CLI. As long as you are happy and not wasting time taking every meeting from every vendor about "automation".
Agreed. All I'm trying to understand is what makes you claim things like: progress is slow, or there's a lack of standardization, or operators need to wait till things get standardized in order to start doing network service orchestration... I'm asking because I just don't see that. My personal experience is quite different to what you're claiming.

Yes, the landscape is quite diverse, ranging from fire-and-forget CLI scrapers (Puppet, Chef, Ansible, SaltStack) through open network service orchestration frameworks, all the way to a range of commercial products for network service orchestration. But the point is the options are there and one can start today; no need to wait for anything to get standardized, or for things to settle.

adam
On 12/Aug/20 19:10, adamv0025@netconsultings.com wrote:
Fair enough, but you actually haven't answered my question about why you think that VNFs such as a vTMS cannot be implemented in a horizontal-scaling model. In my opinion, any NF, virtual or physical, can be horizontally scaled.
The limitation is the VM's I/O with the metal. Trying to shift 100Gbps of DoS traffic across smaller VNF's running on Intel CPU's is going to require quite a sizeable investment, and plenty of gymnastics in how you route traffic to and through them, vs. taking that cash and spending it on just one or two purpose-built platforms that aren't scrubbing traffic in general-purpose CPU's.

Needless to say, the ratio between the dirty traffic entering the system and the clean traffic coming out is often not 1:1, from a licensing standpoint.

It's not unlike when we ran the numbers to see whether a VM running a CSR1000v on a server connected to a dumb, cheap Layer 2 switch was cheaper than just buying an ASR920. The ASR920, even with the full license, was cheaper. Server + VMware license fees + considerations for NIC throughput just made it massively costly at scale.
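The trade-off Mark describes can be framed as back-of-the-envelope arithmetic. Every figure below (per-VM scrubbing throughput, per-unit costs) is a made-up placeholder for illustration - it is not Arbor's, Cisco's, or anyone's actual pricing or performance.

```python
# Target: sweep 100 Gbps of dirty traffic.
target_gbps = 100

# Hypothetical VNF path: each VM scrubs ~10 Gbps once NIC/vSwitch
# overhead is accounted for, and carries server + hypervisor + VNF
# licensing costs (all figures assumed).
vm_gbps = 10
vm_cost = 30_000
vms_needed = -(-target_gbps // vm_gbps)  # ceiling division
vnf_total = vms_needed * vm_cost

# Hypothetical appliance path: one purpose-built box sweeps the lot.
appliance_total = 250_000  # assumed

print(f"{vms_needed} VMs -> ${vnf_total:,} vs one appliance -> ${appliance_total:,}")
```

With these placeholder numbers the VM farm already costs more before counting the routing gymnastics, rack space, and operational overhead of steering traffic through ten instances - which is the shape of the CSR1000v-vs-ASR920 comparison Mark recounts.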
Right, and of these three you mentioned, what is it that you'd say operators are waiting for to get standardized, in order for them to start implementing network services orchestration?
You miss my point. The existence of these data models doesn't mean that operators cannot automate without them. There are plenty of operators automating their procedures with, and without, those open-based models.

My point was: if we spend a lot of time trying to agree on these data models - so that Cisco can sell me their NSO, Juniper their Contrail, Ciena their Blue Planet, NEC their ProgrammableFlow or Nokia their Nuage - while several operators are deciding what automation means to them without trying to be boxed into these off-the-shelf solutions that promise vendor-agnostic integration, we may just blow another 10 years.
Agreed. All I'm trying to understand is what makes you claim things like: progress is slow, or there's a lack of standardization, or operators need to wait till things get standardized in order to start doing network service orchestration... I'm asking because I just don't see that. My personal experience is quite different to what you're claiming.
Yes, the landscape is quite diverse, ranging from fire-and-forget CLI scrapers (Puppet, Chef, Ansible, SaltStack) through open network service orchestration frameworks, all the way to a range of commercial products for network service orchestration. But the point is the options are there and one can start today; no need to wait for anything to get standardized, or for things to settle.
Don't get me wrong - if NSO, Blue Planet, Nuage and all the rest are good for you, go for it.

My concern is that most engineers and commercial teams are confused about the best way forward, because the industry keeps going back and forth on what the appropriate answer is, or worse, could be, or even more scary, is likely to be. In the end, either nothing is done, or costly mistakes happen. Only a handful of folk have the time, energy and skills to dig into the minutiae and follow the technical community on defining solutions at a very low level. Everybody else just wants to know if it will work and how much it will cost.

Meanwhile, homegrown automation solutions that do not follow any standard continue to be seen as a "stop-gap", not realizing that, perhaps, what works for me now is what works for me, period.

I'm not saying operators aren't automating. I'm saying my automating is not your automating. As long as we are both happy with the solutions we have settled on for automating, despite them not being the same or following a similar standard, what's wrong with that? There are other pressing matters that need our attention.

Mark.
And for other stuff as well.

adam

From: Shane Ronan <shane@ronan-online.com> Sent: Thursday, August 6, 2020 2:43 PM To: Mark Tinka <mark.tinka@seacom.com> Cc: adamv0025@netconsultings.com; North American Network Operators' Group <nanog@nanog.org> Subject: Re: Has virtualization become obsolete in 5G?

Yes, they are for 5G core.

On Wed, Aug 5, 2020, 11:28 AM Mark Tinka <mark.tinka@seacom.com> wrote:

On 5/Aug/20 17:07, Shane Ronan wrote:
I think you'd be surprised how much of the 5G Core is containerized, for both the data and control planes, in the next-generation networks providers are currently deploying.
It's what I expect for new entrants that don't want to deal with traditional vendors. I'd be curious to see if legacy operators are shifting traffic away from iron to servers, and at what rate. Mark.
On 1/Aug/20 18:23, Robert Raszuk wrote:
Virtualization is not becoming obsolete ... quite the reverse, in fact, in all types of deployments I can see around.
The point is that a VM provides hardware virtualization, while Kubernetes with containers virtualizes the OS that apps and services run on, in isolation.
Clearly, virtualizing at the operating-system level is a much better and lighter option, as long as your level of virtualization - mainly in terms of security, and isolation & reservation of resource consumption - is satisfactory.
I see cloud-native as NFV++. It requires some adjustment to how classic NFV has been deployed, and that comes down to whether operators (especially those who err on the side of network operations rather than services) see value in upgrading their stack to cloud-native.

If you're a Netflix or an Uber, sure, a cloud-native architecture is probably the only way you can scale. But if you are a simple network operator who focuses more on pushing packets than over-the-top services, particularly if you already have some NFV, making the move to cloud-native/NFV++ is a whole consideration.

Mark.
Not sure what you mean - NFV is NFV.
From an NFV perspective, cRPD is no different than vMX - it's just a virtualized router function, nothing special…
Also, with regard to NFV markets, it's just CPE or telco-cloud (routing on host, FWs, LBs and other domain-specific network devices like SBCs), and then RRs. No one sane would be replacing high-throughput aggregation points like PEs or core nodes with NFV, unless one wants to get into some serious horizontal scaling ;).

adam

From: NANOG <nanog-bounces+adamv0025=netconsultings.com@nanog.org> On Behalf Of Mark Tinka Sent: Saturday, August 1, 2020 9:51 PM To: nanog@nanog.org Subject: Re: Has virtualization become obsolete in 5G?
On 4/Aug/20 17:45, adamv0025@netconsultings.com wrote:
Not sure what you mean - NFV is NFV.
From an NFV perspective, cRPD is no different than vMX - it's just a virtualized router function, nothing special…
What I meant is that, as we've been deploying NFV as a VM, cloud-native means we take that VM and containerize it further. It's a further diffusion of NFV, in my book. The benefits of the added de-layering (if one can call it that) are left as an exercise for the operator.
Also, with regard to NFV markets, it's just CPE or telco-cloud (routing on host, FWs, LBs and other domain-specific network devices like SBCs), and then RRs. No one sane would be replacing high-throughput aggregation points like PEs or core nodes with NFV, unless one wants to get into some serious horizontal scaling ;).
Well, vCPEs and vBNGs have long been the holy grail for some of us, especially since they make IPv6 roll-out significantly simpler.
What I meant is that, as we've been deploying NFV as a VM, cloud-native means we take that VM and containerize it further.

Umm, I don't think so. At least, that's not the impression I got from the CNCF, Intel and Red Hat. They seem to be striving for K8s without the use of VM hypervisors.

Etienne

On Wed, Aug 5, 2020 at 2:12 PM Mark Tinka <mark.tinka@seacom.com> wrote:
On 4/Aug/20 17:45, adamv0025@netconsultings.com wrote:
Not sure what you mean - NFV is NFV.
From an NFV perspective, cRPD is no different than vMX - it's just a virtualized router function, nothing special…
What I meant is that, as we've been deploying NFV as a VM, cloud-native means we take that VM and containerize it further. It's a further diffusion of NFV, in my book. The benefits of the added de-layering (if one can call it that) are left as an exercise for the operator.
Also, with regard to NFV markets, it's just CPE or telco-cloud (routing on host, FWs, LBs and other domain-specific network devices like SBCs), and then RRs. No one sane would be replacing high-throughput aggregation points like PEs or core nodes with NFV, unless one wants to get into some serious horizontal scaling ;).
Well, vCPEs and vBNGs have long been the holy grail for some of us, especially since they make IPv6 roll-out significantly simpler.
Mark.
Containerization and k8s aren't so much a shift away from virtualization (horizontally) as a shift up from virtualization (vertically). It is a broader theme than 5G - initially gaining traction with SaaS companies, and recently appearing in NFV scenarios.

Under the hood, k8s relies on an operating system which in turn typically runs inside a VM on a physical compute resource. Virtualization, thus, isn't obsolete - but its implementation specifics lose importance. The operator describes her desired configuration state once in the form of k8s objects, and is ready to deploy a service to any k8s platform instance. This can be an A-list k8s-as-a-service provider such as Amazon EKS, Google GKE, or Azure AKS. It can also be an in-house VMware Tanzu or Mirantis Cloud Platform deployment that runs on the operator's own bare metal in their own data center.

This additional abstraction, however, is only magical when someone else gets paid to deal with the detail. For an operator's in-house IT team, introducing k8s can be a net increase in complexity. Now, not only do they have to deal with all traditional IT challenges up to and including virtualization (life-cycle of hardware, physical network, storage, virtualization, operating system, licensing, backups, ...), they must also map the k8s platform instance to these underlying elements and ensure the correct functioning of the k8s platform itself.

Solutions are emerging (e.g. Amazon AWS Outposts, which allows an operator to bring a micro Amazon region in-house), but we'll likely continue to see NFV vendors supporting both VM-targeted and k8s-targeted deployment scenarios for some time.

-- Sincerely, David Monosov
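The "describe desired state once" idea David mentions can be sketched as a minimal Kubernetes Deployment object, here built as a plain Python dict mirroring the apps/v1 schema. The image name and replica count are illustrative; any conformant platform (EKS, GKE, AKS, an in-house Tanzu cluster) would reconcile toward the same declared state.

```python
import json

# A minimal apps/v1 Deployment: the operator declares *what* should run
# (3 replicas of one container image); the platform decides *where* and
# keeps the actual state converged to this desired state.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "vnf-example"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "vnf-example"}},
        "template": {
            "metadata": {"labels": {"app": "vnf-example"}},
            "spec": {
                "containers": [
                    # Image reference is a placeholder registry/tag.
                    {"name": "vnf", "image": "registry.example/vnf:1.0"}
                ]
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

The same manifest is portable across clusters precisely because it says nothing about hypervisors, NICs, or hosts - which is both the magic David describes and, for whoever runs the cluster, the complexity that gets pushed down a layer.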
Buzzwords have a limited life before the vendors need to make up something else to invoice you for.

----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest-IX http://www.midwest-ix.com

----- Original Message ----- From: "Etienne-Victor Depasquale" <edepa@ieee.org> To: "NANOG" <nanog@nanog.org> Sent: Saturday, August 1, 2020 4:23:00 AM Subject: Has virtualization become obsolete in 5G?
participants (13)
- adamv0025@netconsultings.com
- Ca By
- Christopher Morrow
- David Monosov
- Etienne-Victor Depasquale
- John Lee
- Mark Tinka
- Mel Beckman
- Mike Hammett
- Robert Raszuk
- Shane Ronan
- Valdis Klētnieks
- Warren Kumari