Regarding BGP offloading
Hello NANOG!

I have seen limited talk about offloading BGP as a whole into containers/VMs etc. Take, for example, this old Google blog post from 2017: <https://www.blog.google/products/google-cloud/making-google-cloud-faster-more-available-and-cost-effective-extending-sdn-public-internet-espresso/> Quoting from that:

*Second, we separate the logic and control of traffic management from the confines of individual router “boxes.” Rather than relying on thousands of individual routers to manage and learn from packet streams, we push the functionality to a distributed system that extracts the aggregate information. We leverage our large-scale computing infrastructure and signals from the application itself to learn how individual flows are performing, as determined by the end user’s perception of quality.*

If I am reading this correctly, it gives the impression that only BGP signalling is offloaded (to VMs/containers...). Is that understanding correct? From a network topology point of view, does anyone here have an idea, or could you point to a resource on how this is actually achieved? If the frontend device simply starts passing TCP 179 sessions to some backend server running, say, bird or frr, how does the resulting routing information get back to the forwarding plane? Are there other public deployments of this sort of setup where BGP as a whole (that is, sessions, route calculation, policies, filtering etc.) is offloaded to some x86 device in the backend?

Or am I just reading it wrong, and it is actually smaller VMs/containers with full router functionality, with BGP alone not being offloaded? So the logical L3 endpoint here is the VMs? What sort of configuration would the device sitting in front need at the interface level to achieve that?

Appreciate your responses!

Thanks.

--
Anurag Bhatia
anuragbhatia.com
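P.S. To make the question a bit more concrete, here is a very rough sketch of the split I have in mind. It assumes a plain Linux box as the "frontend" forwarder and a backend VM running bird/frr; every hostname, address and interface name below is a made-up placeholder, and pushing routes back into the forwarding plane is done with iproute2 over SSH purely for illustration, not as a claim about how Google or anyone else actually does it:

```python
#!/usr/bin/env python3
"""Purely illustrative sketch of the BGP-offload split being asked about.
Assumptions (all made up): the "frontend" is a Linux box doing the
forwarding, the BGP speaker (bird/frr) runs on a backend VM at 192.0.2.10,
and the addresses/interfaces below are documentation-range placeholders."""

import subprocess

FRONTEND = "frontend.example.net"   # hypothetical mgmt address of the forwarding box
BACKEND_SPEAKER = "192.0.2.10"      # hypothetical VM running bird/frr
PEER_FACING_IF = "eth0"             # hypothetical peer-facing interface on the frontend


def redirect_bgp_to_backend():
    """Step 1 (runs on the frontend): punt inbound TCP/179 to the backend VM.

    This is just Linux DNAT; a hardware frontend would need some
    vendor-specific punt/redirect mechanism instead."""
    subprocess.run(
        ["iptables", "-t", "nat", "-A", "PREROUTING",
         "-p", "tcp", "--dport", "179",
         "-j", "DNAT", "--to-destination", f"{BACKEND_SPEAKER}:179"],
        check=True,
    )
    # Source-NAT the backend's replies so the eBGP peer still sees the
    # frontend's address as the session endpoint.
    subprocess.run(
        ["iptables", "-t", "nat", "-A", "POSTROUTING",
         "-p", "tcp", "--sport", "179", "-s", BACKEND_SPEAKER,
         "-j", "MASQUERADE"],
        check=True,
    )


def push_route_to_frontend(prefix: str, next_hop: str):
    """Step 2 (runs on the backend): after bird/frr has picked a best path,
    program it into the frontend's forwarding plane. Here that is just
    `ip route replace` over SSH; on a real router it would be whatever
    RIB/FIB injection API the platform exposes."""
    subprocess.run(
        ["ssh", f"root@{FRONTEND}",
         f"ip route replace {prefix} via {next_hop} dev {PEER_FACING_IF}"],
        check=True,
    )


if __name__ == "__main__":
    redirect_bgp_to_backend()
    # Pretend the backend speaker decided on this path; addresses are placeholders.
    push_route_to_frontend("198.51.100.0/24", "203.0.113.1")
```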
One way to do it: https://inog.net/files/iNOG14v_oliver_sourcerouting.pdf
More details on the particular implementation: https://www.cs.princeton.edu/courses/archive/fall17/cos561/papers/espresso17...
IOS-XR and Junos (don’t know about others) expose service-level APIs that allow off-box best-path selection and injection of the selected paths back into the RIB. FRR is in the process of implementing customized best-path selection using Lua scripts.

Cheers,
Jeff
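P.S. A toy sketch of what off-box best-path selection could look like at its simplest, independent of any particular vendor API or of FRR's Lua hooks. The candidate paths, the latency metric and the ExaBGP-style "announce route" output line are all assumptions for illustration:

```python
#!/usr/bin/env python3
"""Illustrative only: a toy off-box best-path selector. This is not the
IOS-XR/Junos service-layer APIs and not FRR's Lua scripting; it just shows
the shape of the idea: rank candidate paths by a custom application-level
metric and hand the winner back to a speaker for injection."""

from dataclasses import dataclass


@dataclass
class CandidatePath:
    prefix: str
    next_hop: str
    as_path_len: int
    measured_latency_ms: float   # application-level signal, e.g. from probes


def pick_best(paths: list[CandidatePath]) -> CandidatePath:
    """Custom policy: prefer the lowest measured latency, then the shortest
    AS_PATH, instead of the standard BGP decision process."""
    return min(paths, key=lambda p: (p.measured_latency_ms, p.as_path_len))


def announce_line(path: CandidatePath) -> str:
    """Emit the winner in ExaBGP's text-API style so a co-located speaker
    can re-advertise/inject it (syntax from memory; check the ExaBGP docs)."""
    return f"announce route {path.prefix} next-hop {path.next_hop}"


if __name__ == "__main__":
    candidates = [
        CandidatePath("198.51.100.0/24", "192.0.2.1", as_path_len=3, measured_latency_ms=42.0),
        CandidatePath("198.51.100.0/24", "192.0.2.2", as_path_len=2, measured_latency_ms=55.0),
    ]
    best = pick_best(candidates)
    print(announce_line(best))   # -> announce route 198.51.100.0/24 next-hop 192.0.2.1
```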
participants (3)
- Anurag Bhatia
- Jeff Tantsura
- Yang Yu