On Wed, Feb 13, 2002 at 06:15:17PM +0000, Eric Brandwine wrote:
"js" == Jesper Skriver <jesper@skriver.dk> writes:
js> On Wed, Feb 13, 2002 at 03:55:25PM +0000, Eric Brandwine wrote:
Without control plane separation (and it isn't possible with Cisco, Juniper, or most other routers out there), management services are listening on the public network, and that makes this very scary, regardless of filtering policies, etc.
js> interfaces {
js>     ...
js> }
js> firewall {
js>     ...
js> }
OK, but that's filtering. The telnet/ssh/snmp daemon is still listening on all interfaces. You can't get there, as long as your filter stands, but those are some hard filters to write. They're simple when they're simple, but they're very complex when they're not.
Ah, I wrote our filters in an hour or so, from the principle that the default is deny: only allow what is strictly needed for the routing protocols etc. to work.
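A minimal sketch of that default-deny style, in Junos-like syntax (the filter and term names are hypothetical, 192.0.2.0/24 is a stand-in for the real peering block, and a real filter needs a term per protocol in use):

```
firewall {
    filter protect-re {
        /* allow BGP only from the (hypothetical) known peer block */
        term bgp {
            from {
                source-address {
                    192.0.2.0/24;
                }
                protocol tcp;
                port bgp;
            }
            then accept;
        }
        /* default is deny: anything not explicitly allowed is dropped */
        term default-deny {
            then {
                discard;
            }
        }
    }
}
```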
You're relying on your filters, rather than on proper configuration of the daemon. On a UNIX system, you can bind a service to all interfaces (e.g. *.161) or just to a specific interface (10.1.2.3:161). This should be possible in general, on all routers.
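On a UNIX box the difference is just the address handed to bind(2). A small sketch in Python rather than C; 127.0.0.1 and port 16161 are stand-ins for a router's management address and SNMP's port 161 (which needs root to bind):

```python
import socket

def make_mgmt_listener(addr: str, port: int) -> socket.socket:
    """UDP listener bound to one specific address, not the 0.0.0.0 wildcard.

    Passing a concrete address to bind() is the *.161 vs 10.1.2.3:161
    distinction: the kernel only delivers datagrams addressed to `addr`,
    so the service is unreachable via any other interface.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((addr, port))
    return sock

# Stand-ins: a real SNMP agent would bind the mgmt interface address on 161.
listener = make_mgmt_listener("127.0.0.1", 16161)
print(listener.getsockname())
```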
Agree, there is no reason why one cannot configure services to only listen to a specified interface/address.
We HAVE an OOB management network. This is where all our console servers, switches (there is no Ethernet in the backbone, don't shove VLANs at me), etc all live. This address space is not routed to you. We like this. There's no cost issues, we've already paid for it, and need it for our layer 1/2 network anyway.
But then you plug an IP port on the router (vs. a console port) into the mgmt net, and you've bridged the public net and the mgmt net.
A Juniper will not forward packets between fxp0 and "real interfaces" or the other way around. But if a hacker does get control over the router, he/she can use it as a jump station towards the mngt. network.
Virtual routers are capable of maintaining multiple routing tables, but last I checked, Juniper was not. So how do you route this?
Use non routed addresses on the mngt. network, and enforce filters on the routers making up the mngt network, so the only traffic allowed from the routers are return traffic to requests from the mngt. systems to the routers.
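A hedged sketch of such a filter on a mngt-network router, again in Junos-like syntax (prefixes and names are hypothetical): only traffic that looks like replies from the routers back to the mngt systems gets through.

```
firewall {
    filter routers-return-only {
        /* SNMP replies: UDP sourced from port snmp on a router address;
           172.16.0.0/16 is a hypothetical stand-in for the mngt block */
        term snmp-replies {
            from {
                source-address {
                    172.16.0.0/16;
                }
                protocol udp;
                source-port snmp;
            }
            then accept;
        }
        /* TCP replies (e.g. ssh sessions opened from the mngt systems):
           only packets belonging to established sessions */
        term tcp-established {
            from {
                source-address {
                    172.16.0.0/16;
                }
                protocol tcp;
                tcp-established;
            }
            then accept;
        }
        term default-deny {
            then discard;
        }
    }
}
```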
I send an SNMP query to the device. It comes in over the mgmt net (because for me, in my datacenter, the loopback for that device (or its mgmt IP) is routed across the mgmt net). The device receives, digests, and decides to respond to this query. Where does it send it? My datacenter is routed on the internet, so does it send it out the public interface?
Just make sure the request is seen from the routers as coming from private non routed addresses, either by assigning them to the mngt. servers, or by using NAT.
Or do I route my datacenter over the mgmt net? You can start filtering, but then those filters are suddenly very important, crucial to the proper operation of the network. Better not fat finger anything. Ever.
Or do I move all my backbone facing datacenters into a network that is not routed on the Internet, but only on the mgmt net? That has its own host of problems.
And you still have to convince the router not to propagate routes that it learns from the mgmt net into the public network. This can be done with filters, but when you have a global mgmt network spread over many netblocks, regions, etc, it's ugly.
routing-options {
    static {
        route 172.16.0.0/16 {
            next-hop 172.16.0.1;
            no-readvertise;
        }
    }
}

Where 172.16.0.1 is a router on the mngt. network, and 172.16.0.0/16 is the entire mngt. network.
The router needs to act as a router to the public network. But it needs to act as a host (with only 1 interface) to the mgmt net. This is not how current routers work.
Correct, that would be ideal, but there are ways to work around it ...
Been there, done that, it's not that simple.
But not impossible either ...

/Jesper

--
Jesper Skriver, jesper(at)skriver(dot)dk - CCIE #5456
Work: Network manager @ AS3292 (Tele Danmark DataNetworks)
Private: FreeBSD committer @ AS2109 (A much smaller network ;-)

One Unix to rule them all, One Resolver to find them,
One IP to bring them all and in the zone to bind them.