On 06/14/2018 09:22 PM, Michael Thomas wrote:
So I have to ask, why is it advantageous to put this in a container rather than just run it directly on the container's host?
Most any host nowadays has plenty of horsepower to run services. All those services could be run natively in one namespace on the same host, but I tend to gravitate toward running services individually in LXC containers. This creates a bit more overhead than chroot-style environments, but less than running full-fledged KVM-style virtualization for each service. I typically automate the provisioning and spin-up of each container and its service, which makes it easy to maintain/rebuild/update/upgrade/load-balance services individually and en masse across hosts.

By running BGP within each container, as someone else mentioned, BGP can be used to advertise the loopback address of the service. I go one step further: for certain services I anycast addresses into BGP. This provides an easy way to load balance and adds resiliency across like service instances on different hosts. So, by running BGP within the container and on the host, routes can be distributed across a network with all the policies available within the BGP protocol.

I use Free Range Routing (FRR), a fork of Quagga, to do this. I use eBGP for the hosts and containers, which eliminates the need for OSPF or a similar interior gateway protocol. Stepping back a bit, this means BGP is used in a tiered scenario: regular eBGP with the public ASN handles DFZ-style public traffic, while private eBGP ASNs route internal traffic between and within hosts and containers.

With recent improvements to FRR and the Linux kernel, various combinations of MPLS, VxLAN, EVPN, and VRF configurations can be used to further segment and compartmentalize traffic within a host and between containers. It is now very easy to run vlan-less between hosts through various easy-to-configure encapsulation mechanisms.
To be explicit, this relies on a resilient layer 3 network between hosts, and eliminates the bothersome layer 2 redundancy headaches.

That was a very long-winded way to say: keep a very basic host configuration running a minimal set of functional services, and refactor the functionality across multiple containers to provide easy access to and maintenance of individual services like dns, smtp, database, dashboards, public routing, private routing, firewalling, monitoring, management, ... There is a higher up-front configuration cost, but over the longer term, if configured via automation tools like Salt or similar, maintenance and security are improved. It does require a different level of sophistication from operational staff.
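As a rough illustration of the per-container setup described above, here is a minimal FRR config sketch: a container peering eBGP with its host and advertising both its service loopback and a shared anycast /32. This is a sketch only, not Raymond's actual configuration; the ASNs, neighbor address, and prefixes are made-up examples.

```
! /etc/frr/frr.conf inside one container (hypothetical ASNs and addresses)
router bgp 64513
 ! eBGP session to the host, which runs its own private ASN
 neighbor 10.0.0.1 remote-as 64512
 !
 address-family ipv4 unicast
  ! per-service loopback, unique to this container
  network 192.0.2.10/32
  ! anycast address, also advertised by like instances on other hosts
  network 198.51.100.53/32
 exit-address-family
```

The host then re-advertises these routes upstream, so whichever instance is up and closest wins for the anycast prefix.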
Mike
On 06/14/2018 05:03 PM, Richard Hicks wrote:
I'm happy with GoBGP in a docker container for my BGP Dashboard/LookingGlass project. https://github.com/rhicks/bgp-dashboard
It's just piping RIB updates, as JSON, to a script that feeds them into a MongoDB container.
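The shape of that pipeline can be sketched in a few lines of Python: read JSON-encoded RIB updates line by line and flatten each into a document for MongoDB. This is a hedged sketch, not code from the bgp-dashboard repo; the field names (`nlri`, `prefix`, `age`, `isWithdraw`) and the `gobgp monitor` invocation are assumptions.

```python
import json
import sys

def to_document(line: str) -> dict:
    """Flatten one JSON-encoded RIB update into a MongoDB-style document.

    Field names here are illustrative guesses, not GoBGP's exact schema.
    """
    update = json.loads(line)
    nlri = update.get("nlri", {})
    return {
        "prefix": nlri.get("prefix"),
        "age": update.get("age"),
        "withdrawal": update.get("isWithdraw", False),
    }

def pump(stream) -> list:
    """Convert a stream of JSON lines into documents.

    A real version would call collection.insert_one(doc) via pymongo
    for each document instead of collecting them in a list.
    """
    return [to_document(line) for line in stream if line.strip()]
```

Usage would be something like `gobgp monitor global rib -j | python3 pipe.py`, with `pump(sys.stdin)` feeding the MongoDB container.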
At work we also looked at GoBGP as a route server for a small IXP-type setup, but ran into a few issues that we didn't have the time to fully debug, so we switched to BIRD for that project. We are happy with both.
On Thu, Jun 14, 2018 at 11:56 AM, james jones <james.voip@gmail.com> wrote:
I am working on a personal experiment and was wondering what is the best option for running BGP in a Docker-based container. I have seen a lot of blogs and docs referencing Quagga. I just want to make sure I am not overlooking any other options before I dive in. Any thoughts or suggestions?
-James
-- Raymond Burkholder ray@oneunified.net https://blog.raymond.burkholder.net