The short answer is that the "Cloud Native Computing" folks need to talk to Intel's embedded-systems application engineers to discover that microservices have been running on Intel hardware in (non-standard) containers for years. We call it real-time computing, process control, and so on. Current multi-terabit Ethernet interfaces require specialized hardware that connects fiber optics to clouds but cannot itself run on clouds.

Some comments on software-controlled telecom (/datacom) networking. When DTMF was invented, the telcos used in-band signaling for call control. Kevin Mitnick et al. designed blue, red, and black boxes to manipulate the telco systems, so the telcos moved call control out of band. They created Signal Control Points, which managed the actual circuit-switch hardware to route calls (eventually, 64 kbps digital paths), and this protocol was SS#7. There were six or seven volumes of CLASS services enabled by SS#7, which ran on UNIX systems developed by Bell Labs. In the mid seventies, I worked on VM systems from DEC and Apollo; Apollo had the better virtualization, which worked across the network and was the first "cloud" system I worked on.

In the mid nineties, I worked on large gigabit/terabit routers, but again the control plane was part of the data plane, until ATM-based networks could use out-of-band control to set up an SVC between input port and output port and switch the IP packets instead of routing them, achieving end-to-end network delays of under a millisecond. The VLAN and MPLS protocols were developed to switch packets in the backbone of the network rather than route them.
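
To make the switching-vs-routing distinction concrete, here is a toy sketch in C (mine, not any vendor's forwarding code, and the table contents are made up): a routed packet needs a longest-prefix match over the whole destination address, while a switched packet just indexes a small label table that was populated out of band.

/* Toy contrast: "routing" = longest-prefix match on the destination;
 * "switching" = direct index on a label assigned out of band (LDP/SVC). */
#include <stdint.h>
#include <stdio.h>

#define LABELS 1024
static int label_to_port[LABELS];      /* populated by the control plane */

/* Switching: one array index per packet. */
static int switch_packet(uint32_t label) {
    return label_to_port[label % LABELS];   /* modulo only for toy safety */
}

/* Routing: find the longest matching prefix (linear scan here; real
 * routers use tries/TCAMs, but the work still grows with the table). */
struct route { uint32_t prefix; int len; int port; };
static struct route rib[] = {
    { 0x0A000000, 8,  1 },   /* 10.0.0.0/8  -> port 1 */
    { 0x0A010000, 16, 2 },   /* 10.1.0.0/16 -> port 2 */
};

static int route_packet(uint32_t dst) {
    int best_len = -1, port = -1;
    for (unsigned i = 0; i < sizeof rib / sizeof rib[0]; i++) {
        uint32_t mask = rib[i].len ? 0xFFFFFFFFu << (32 - rib[i].len) : 0;
        if ((dst & mask) == rib[i].prefix && rib[i].len > best_len) {
            best_len = rib[i].len;
            port = rib[i].port;
        }
    }
    return port;
}

int main(void) {
    label_to_port[42] = 7;
    printf("switched label 42 -> port %d\n", switch_packet(42));
    printf("routed 10.1.2.3   -> port %d\n", route_packet(0x0A010203));
    return 0;
}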

In 2000 we put our first pre-standard cloud together with multi-gigabit routers and Sun workstations at 45 PoPs in the US, 3 in Asia, and 6 in Europe, and implemented a "cloud" OS. Our fastest links were 10 Gbps. Now we can have 2-50 Tbps per fiber using superchannel DWDM technology between PoPs, data centers, or cell towers. Network control functions can change dynamically by using reprogrammable FPGAs from companies like Xilinx and Intel to repurpose firmware control and device functions.
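
For a rough sense of those superchannel numbers (my own back-of-envelope, not a vendor figure): roughly 96 C-band optical carriers at 200 Gbps each is about 96 x 200 Gbps ~= 19 Tbps per fiber; at 400-500 Gbps per carrier you approach the 50 Tbps end of the range quoted above.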

Embedded systems have implemented "micro services" for years, as that is how you handle interrupt-driven real-time control. We call this a context switch, which is still hardware/CPU dependent. As far as I know, current standard containers do not handle real-time CPU interrupts, nor do they allow very tight timing response loops within the container.
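
To illustrate the kind of tight timing loop I mean, here is a minimal Linux sketch in C (my own illustration, not tied to any particular container runtime): it requests SCHED_FIFO and runs a 1 ms periodic loop. In a stock unprivileged container the sched_setscheduler() call typically fails with EPERM unless the container is granted CAP_SYS_NICE and a real-time scheduling budget, which is exactly the limitation I am pointing at.

/* Minimal sketch: a ~1 ms periodic real-time loop on Linux.
 * Assumes an RT-capable kernel; in a default container this usually
 * needs --privileged or CAP_SYS_NICE plus an rt_runtime budget,
 * otherwise sched_setscheduler() fails with EPERM. */
#include <sched.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };

    /* Request a fixed-priority real-time scheduling class. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");   /* the typical container failure */
        return 1;
    }

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 1000; i++) {
        next.tv_nsec += 1000000;        /* 1 ms period */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec++;
        }
        /* Sleep to an absolute deadline so the loop does not drift. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        /* ... deterministic control work goes here ... */
    }
    return 0;
}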

Certain 5G proposals are discussing network slicing and the like to virtualize control functions that can work better without virtualization. The current 5G protocol submissions that I have reviewed are way too complex to work out in the real world on real networks maintained by union labor. (This is not a dig at union labor, as they are some of the best-trained techs.) :)

On Sat, Aug 1, 2020 at 8:35 AM Mark Tinka <mark.tinka@seacom.com> wrote:


On 1/Aug/20 11:23, Etienne-Victor Depasquale wrote:
Over the past few weeks, I've attended webinars and watched videos organized by Intel. 
These activities have centred on 5G and examined applications (like "visual cloud" and "gaming"), 
as well as segment-oriented aspects (like edge networks, 5G RAN and 5G Core).

I am stunned (no hyperbole) by the emphasis on Kubernetes in particular,
and cloud-native computing in general. 
Equally stunning (for me), public telecommunications networks have been portrayed 
as having a history that moved from integrated software and hardware, 
to virtualization and now to cloud-native computing. 
See, for example, Alex Quach, here (@10:30). I reason that Intel's implication is that virtualization is becoming obsolete.

Would anyone care to let me know their thoughts on this prediction?

In the early dawn of SDN, when it was cool to have the RPs in Beirut and the line cards in Lagos, the industry quickly realized that this was not entirely feasible.

If you are looking at over-the-top services, so-called cloud-native computing makes sense in order to deliver that value with agility. But as it pertains to actual network transport, I'm not yet sure the industry is at the stage where we are confident enough to decompose packet forwarding through a cloud.

Network operators are more likely to keep using kit that integrates forwarding hardware as well as a NOS, as no amount of cloud architecting is going to rival a 100Gbps purpose-built port, for example.

Suffice it to say, there was a time when folk were considering running their critical infrastructure (such as route reflectors) in AWS or similar. I'm not quite sure public clouds are at that level of confidence yet. So if some kind of cloud-native infrastructure is to be considered for critical infrastructure, I highly suspect it will be in-house.

On the other hand, for any budding entrepreneurs who want to get into the mobile game at as little cost as possible, there is a huge opportunity to do so by building all that infrastructure in an on-prem, cloud-native architecture and offering packet forwarding on general-purpose hardware, provided their expectations don't exceed what that hardware can do. This way, they wouldn't have to deal with the high costs traditional vendors (Ericsson, Nokia, Huawei, Siemens, ZTE, etc.) impose. Granted, it would be small scale, but maybe that is the business model. And in an industry where capex is fast outpacing revenue, it would be the mobile network equivalent of the low-cost airline.

I very well could be talking out of the side of my neck, but my prediction is that mobile operators will be optimistic but cautious. I reckon we'll see a healthy mix of cloud-native and tried & tested practices.

Mark.