NIST has released SP 800-119, "Guidelines for the Secure Deployment of IPv6". While I don't agree with everything in it, it is an excellent overview of IPv6, its differences from IPv4, and security advice. While the title sounds like a security document, the security implications are only a part of it. I've not finished reading it, but my first reaction is that this is a good source of information: well written and fairly detailed (at 188 pages), with lots of references.

The PDF is available at: http://csrc.nist.gov/publications/nistpubs/800-119/sp800-119.pdf

--
R. Kevin Oberman, Network Engineer
Energy Sciences Network (ESnet)
Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab)
E-mail: oberman@es.net  Phone: +1 510 486-8634
Key fingerprint: 059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
On Tue, Jan 4, 2011 at 11:35 PM, Kevin Oberman <oberman@es.net> wrote:
The PDF is available at:
I notice that this document, in its nearly 200 pages, makes only casual mention of ARP/NDP table overflow attacks, which may be among the first real DoS challenges that production IPv6 networks, and equipment vendors, have to resolve. Some platforms have far worse failure modes than others when subjected to such an attack, and information on this subject is not widely available.

Unless operators press their vendors for information, and more knobs, to deal with this problem, we may all be waiting for some group like "Anonymous" to take advantage of this vulnerability in IPv6 networks with large /64 subnets configured on LANs; at which point we may all find ourselves scrambling to request knobs, or worse, redesigning and renumbering our LANs.

RFC 5157 does not touch on this topic at all, and it is the sole reference I see in the NIST publication to scanning attacks.

I continue to believe that a heck of a lot of folks are missing the boat on this issue, including some major equipment vendors. It has been pointed out to me that I should have been more vocal when IPv6 was still called IPng, but in 16 years, nothing has been done about this problem other than water-cooler talk. I suspect that will continue to be the case until those of us who have configured our networks carefully are having a laugh at the networks that haven't. However, until that time, it's also been pointed out to me that customers will expect /64 LANs, and not offering them may put networks at a competitive disadvantage.

Vendor solutions are needed before scanning IPv6 LANs becomes a popular way to inconvenience (at best) or disable (at worst) service providers and their customers.

--
Jeff S Wheeler <jsw@inconcepts.biz>
Sr Network Operator / Innovative Network Concepts
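The scale of the problem Jeff describes can be shown with back-of-envelope arithmetic. The scan rate and table size below are illustrative assumptions, not measurements of any particular platform:

```python
# Why a /64 LAN is both unscannable and trivially attackable.
hosts_in_64 = 2 ** 64          # possible interface IDs in one /64
scan_pps = 20_000              # hypothetical scan rate toward the subnet
nd_table = 100_000             # hypothetical router NDP/ARP table capacity

# Exhaustively scanning the subnet to find hosts is hopeless...
years_to_scan = hosts_in_64 / scan_pps / (3600 * 24 * 365)

# ...but the attacker does not need to finish: each probed address can
# create an "incomplete" NDP entry, so the table fills almost at once.
seconds_to_fill = nd_table / scan_pps

print(f"{years_to_scan:.1e} years to scan the /64")
print(f"{seconds_to_fill:.0f} s to fill a {nd_table}-entry table")
```

The point is the asymmetry: the same subnet size that makes host discovery impractical makes state exhaustion on the router nearly instantaneous.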
Dear Jeff,

In my opinion, the real challenges already present in IPv6 networks are the following: spam and attacks over IPv6; DoS; and tracing back hosts with privacy-enhanced addresses.

Do you have any methods in mind to resolve the ARP/ND overflow problem? I think limiting MAC addresses per port on switches is effective for both IPv4 and IPv6. Equivalents of DHCP snooping and Dynamic ARP Inspection should be implemented by the switch vendors... But remember, DHCP snooping et al. were implemented in IPv4 only after the first serious attacks... Put pressure on your switch vendors...

Janos Mohacsi
Head of HBONE+ project
Network Engineer, Deputy Director of Network Planning and Projects
NIIF/HUNGARNET, HUNGARY
Key 70EF9882: DEC2 C685 1ED4 C95A 145F 4300 6F64 7B00 70EF 9882

On Wed, 5 Jan 2011, Jeff Wheeler wrote:
On Wed, Jan 5, 2011 at 3:31 AM, Mohacsi Janos <mohacsi@niif.hu> wrote:
Do you have some methods in your mind to resolve ARP/ND overflow problem? I think limiting mac address per port on switches both efficient on IPv4 and IPv6. Equivalent of DHCP snooping and Dynamic ARP Inspection should be implemented by the switch vendors.... But remember DHCP snooping et al. implemented in IPv4 after the first serious attacks...Make pressure on your switch vendors....
Equipment vendors, and most operators, seem to be silent on this issue, not wishing to admit it is a serious problem, which would seem to be a required step before addressing it. Without more knobs on switches or routers, I believe there are only two possible solutions for production networks today:

1) Do not configure /64 LANs; instead, configure smaller subnets, which will reduce the impact of scanning attacks.

This is not desirable, as customers may be driven to expect a /64, or even believe it is necessary for proper functioning. I brought this up with a colleague recently, who simply pointed to the RFC and said, "that's the way you have to do it." Unfortunately, configuring the network the way the standard says, and accepting the potential DoS consequences, will likely be less acceptable to customers than not offering them /64 LAN subnets. This is a foolish position and will not last long once reality sets in, unless vendors provide more knobs.

2) Use link-local addressing on LANs, and static addressing to end hosts.

This prevents a subset of attacks originated from "the Internet," by making it impossible for NDP to be initiated by scanning activity; but again, it is not what customers will expect. It may have operational disadvantages with broken user-space software, is not easy for customers to configure, and does not permit easy porting of addresses among host machines. It requires much greater configuration effort and is likely not possible by way of DHCP. It also does not solve NDP table overflow attacks initiated by a compromised host on the LAN, which makes it a half-way solution.
The knobs/features required to somewhat mitigate the impact of an NDP table overflow attack are, at minimum:

* Keep NDP/ARP entries alive based on normal traffic flow; do not expire a host that is exchanging traffic.
  + This is not the case with some major platforms; it surprised me to learn who does not do this.
  + May require data-plane changes on some boxes to inform the control plane of on-going traffic from active addresses.
* Have configurable per-interface limits for NDP/ARP resource consumption, to prevent an attack on one interface/LAN from impacting all interfaces on a router.
  + Basically no one has this capability.
  + Typically requires only control-plane modifications.
* Have a configurable minimum per-interface NDP/ARP resource reservation.
  + Typically requires only control-plane modifications.
* Have a per-interface policer for NDP/ARP traffic, to prevent the control plane from becoming overwhelmed.
  + Because huge subnets may increase the frequency of scanning attacks, and breaking one interface by reaching a policer limit is much better than breaking the whole box if it runs out of CPU, or breaking NDP/ARP function on the whole box if a whole-box policer is reached.
* Learn a new ARP/NDP entry when new transit traffic comes from a host on the LAN.
  + Even if NDP function is impaired on the LAN due to an on-going scan attack.
  + Again, per-interface limitations must be honored to protect the whole box from one misconfigured / malicious LAN host.
* Have sane defaults for all of the above, and allow all to be modified as needed.

I am sure we can all agree that, as IPv6 deployment increases, many unimagined security issues may appear and be dealt with. This is one that a lot of smart people agree is a serious design flaw in any IPv6 network where /64 LANs are used, and yet vendors are not doing anything about it. If customers don't express this concern to them, they won't do anything about it until it becomes a popular DoS attack.
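The per-interface limit knob described above can be sketched in a few lines. This is a toy model, not any vendor's implementation; the interface names and limits are invented for illustration:

```python
# Sketch: a neighbor cache with a whole-box limit plus a per-interface
# budget, so a scan against one LAN cannot starve other interfaces.
class NDCache:
    def __init__(self, total_limit, per_iface_limit):
        self.total_limit = total_limit
        self.per_iface_limit = per_iface_limit
        self.entries = set()          # (iface, addr) pairs
        self.per_iface = {}           # iface -> entry count

    def learn(self, iface, addr):
        if (iface, addr) in self.entries:
            return True               # already known; just a refresh
        if len(self.entries) >= self.total_limit:
            return False              # whole-box table exhausted
        if self.per_iface.get(iface, 0) >= self.per_iface_limit:
            return False              # damage confined to this interface
        self.entries.add((iface, addr))
        self.per_iface[iface] = self.per_iface.get(iface, 0) + 1
        return True

cache = NDCache(total_limit=100_000, per_iface_limit=4_096)

# Scan attack creates 10k bogus neighbors on one interface...
for victim in range(10_000):
    cache.learn("ge-0/0/0", f"2001:db8::{victim:x}")

# ...the attacked LAN is capped, but other LANs still learn neighbors:
print(cache.per_iface["ge-0/0/0"])            # 4096
print(cache.learn("ge-0/0/1", "2001:db8:1::1"))  # True
```

Without the per-interface check, the same scan would consume the shared table and break neighbor learning on every interface of the box, which is the failure mode Jeff describes.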
In addition, if you design your network around /64 LANs, and especially if you take misguided security-by-obscurity advice and randomize your host addresses so they can't be found in a practical time by scanning, you may have a very difficult time if the ultimate solution to this must be to change the typical subnet size from /64 to something that can fit within a practical NDP/ARP table.

Deploying /64 networks because customers demand it and your competitors are doing it is understandable. Doing it "because it's the standard" is very stupid. Anyone doing that on, for example, SONET interfaces or point-to-point Ethernet VLANs in their infrastructure is making a bad mistake. Doing it toward CE routers, the sort that have an IPv4 /30, is even more foolish; and many major ISPs already know this and are using small subnets such as /126 or /124.

If you are still reading, but do not have any idea what I'm talking about, ask yourself these questions:

1) Do I know what happens when my router's ARP table gets 100% full?
2) Do I know what happens to my ARP/NDP functionality if my router receives a 20k PPS random scan towards an attached IPv6 subnet? Will it eat all my CPU and drop my BGP, or just make it impossible to learn new ARP/NDP entries? Will it eventually allow old entries to expire such that they perhaps cannot be re-learnt?
3) Am I deploying IPv6 in a way that is vulnerable to a trivial attack method?
4) Will my network design need fundamental change if my equipment vendor does not add the necessary knobs?

This is a very serious problem which our industry is actively ignoring, hoping it will just go away. If you are in the group who believes it is a non-issue, I urge you to take your head out of the sand. If you are waiting for your vendor to add more knobs or come up with a magic solution, stop clapping your hands and saying "I believe in fairies," and express your concern to your vendor sales channel.
If you are a black-hat script kiddie, please go ahead and start scanning attacks now, while IPv6 largely does not matter and dual-stack infrastructure is somewhat limited (although there will be some spill-over to IPv4 on dual-stack boxes), to motivate change.

Finally, if you operate a major IXP with a /64 peering LAN, please explain why this is in any way better than operating the same LAN with a subnet similar in size to its existing IPv4 subnets, e.g. a /120.

--
Jeff S Wheeler <jsw@inconcepts.biz>
Sr Network Operator / Innovative Network Concepts
On Jan 5, 2011, at 7:21 PM, Jeff Wheeler wrote:
please explain why this is in any way better than operating the same LAN with a subnet similar in size to its existing IPv4 subnets, e.g. a /120.
Using /64s is insane because a) it's unnecessarily wasteful (no lectures on how large the space is, I know, and reject that argument out of hand) and b) it turns the routers/switches into sinkholes.

------------------------------------------------------------------------
Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>

Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On 1/5/2011 6:29 AM, Dobbins, Roland wrote:
Using /64s is insane because a) it's unnecessarily wasteful (no lectures on how large the space is, I know, and reject that argument out of hand) and b) it turns the routers/switches into sinkholes.
Except someone was kind enough to develop a protocol that requires a /64 to work. So then there is the SLAAC question. When might it be used?

With routers, I usually don't use SLAAC. The exception is end-user networks, which makes using SLAAC + DHCPv6-PD extremely dangerous for my edge routers. DHCPv6 IA_TA + DHCPv6-PD would be more sane, predictable, and filterable (and support longer than /64), though my current edge layout can't support this (darn legacy IOS).

I would love a dynamic renumbering scheme for routers, but until all routing protocols (especially iBGP) support shifting from one prefix to the next without a problem, it's a lost cause and manual renumbering is still required. Things like abstracting the router ID from the transport protocol would be nice. I could be wrong, but I think IS-IS is about the only protocol that won't complain.

All that said, routers should be /126 or similar for links, with special circumstances and layouts for the customer edge.

For server subnets, I actually prefer leaving it /64 and using SLAAC with token assignments. This is easily mitigated with ACLs that filter any packets which don't fall within the range I generally use for the tokens, with localized exceptions for non-token devices which haven't been fully initialized yet (i.e., stay behind the stateful firewall until I've changed my IP to prefix::0-2FF). I haven't tried it, and I highly suspect it would fail, but it would be nice to use SLAAC with longer than /64.

Jack
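Jack's token-range filtering can be sketched with the standard library's `ipaddress` module. The prefix, range, and example addresses are invented for illustration; a real deployment would express this as a router or firewall ACL:

```python
# Sketch: permit only addresses whose interface ID (the low 64 bits)
# falls within the range used for manually assigned SLAAC tokens,
# here prefix::0 through prefix::2ff, and drop everything else.
import ipaddress

def token_ok(addr: str, lo: int = 0x0, hi: int = 0x2FF) -> bool:
    """True if the low 64 bits of addr are within [lo, hi]."""
    iid = int(ipaddress.IPv6Address(addr)) & 0xFFFF_FFFF_FFFF_FFFF
    return lo <= iid <= hi

print(token_ok("2001:db8:0:1::25"))                  # True  (token host)
print(token_ok("2001:db8:0:1:021b:21ff:fe3c:9f01"))  # False (EUI-64 host)
```

The appeal of the scheme is that one range check covers every /64 in the plan, rather than needing a separate ACL entry per subnet.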
On Jan 5, 2011, at 7:04 AM, Jack Bates wrote:
On 1/5/2011 6:29 AM, Dobbins, Roland wrote:
Using /64s is insane because a) it's unnecessarily wasteful (no lectures on how large the space is, I know, and reject that argument out of hand) and b) it turns the routers/switches into sinkholes.
Except someone was kind enough to develop a protocol that requires /64 to work. So then there is the SLAAC question. When might it be used?
With routers, I usually don't use SLAAC. The exception is end-user networks, which makes using SLAAC + DHCPv6-PD extremely dangerous for my edge routers. DHCPv6 IA_TA + DHCPv6-PD would be more sane, predictable, and filterable (and support longer than /64), though my current edge layout can't support this (darn legacy IOS).
I would love a dynamic renumbering scheme for routers, but until all routing protocols (especially iBGP) support shifting from one prefix to the next without a problem, it's a lost cause and manual renumbering is still required. Things like abstracting the router id from the transport protocol would be nice. I could be wrong, but I think ISIS is about it for protocols that won't complain.
All that said, routers should be /126 or similar for links, with special circumstances and layouts for customer edge.
Why shouldn't I use /64 for links if I want to? I can see why you can say you want /126s, and that's fine, as long as you are willing to deal with the fall-out, your network, your problem, but, why tell me that my RFC-compliant network is somehow wrong?
For server subnets, I actually prefer leaving it /64 and using SLAAC with token assignments. This is easily mitigated with ACLs that filter any packets which don't fall within the range I generally use for the tokens, with localized exceptions for non-token devices which haven't been fully initialized yet (i.e., stay behind the stateful firewall until I've changed my IP to prefix::0-2FF). I haven't tried it, and I highly suspect it would fail, but it would be nice to use SLAAC with longer than /64.
SLAAC cannot function with longer than /64 because SLAAC depends on prefix + EUI-64 = address. Owen
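Owen's point can be made concrete by computing a modified EUI-64 interface ID the way SLAAC does (per RFC 4291, Appendix A): split the 48-bit MAC, insert `ff:fe`, and flip the universal/local bit. The example MAC is arbitrary:

```python
# How SLAAC derives its 64-bit interface ID from a 48-bit MAC address,
# which is why stateless autoconfiguration presumes a /64 prefix.
def eui64_interface_id(mac: str) -> str:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                       # flip the universal/local bit
    iid = b[:3] + bytes([0xFF, 0xFE]) + b[3:]   # insert ff:fe in the middle
    return ":".join(f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:1b:21:3c:9f:01"))  # 21b:21ff:fe3c:9f01
```

Since the interface ID always occupies exactly 64 bits, a prefix longer than /64 leaves no room for it; hosts can still be numbered statically (or via DHCPv6) on longer prefixes, just not by SLAAC.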
On 1/5/2011 11:31 PM, Owen DeLong wrote:
Why shouldn't I use /64 for links if I want to? I can see why you can say you want /126s, and that's fine, as long as you are willing to deal with the fall-out, your network, your problem, but, why tell me that my RFC-compliant network is somehow wrong?
You can. My problem with that is primarily that using an ACL for the predictable addresses gets messy. Filtering based on <prefix><multiple assignments>::<1-2> isn't possible in most routers, and an ACL to filter every /64 used for a link address is one heck of a long list.
SLAAC cannot function with longer than /64 because SLAAC depends on prefix + EUI-64 = address.
It depends on the implementation supporting it. An EUI-64 address is not required for globally routed prefixes, and many servers statically set the token as ::0xxx.

Jack
Is there any reason we really need to care what size other people use for their point-to-point links?

Personally, I think /64 works just fine. I won't criticize anyone for using it. It's what I choose to use. However, if someone else wants to keep track of /112s, /120s, /124s, /126s, or even /127s on their own network, so be it. The protocol allows for all of that. If vendors build stuff that depends on /64, that stuff is technically broken and it's between the network operator and the vendor to get it resolved.

Owen

On Jan 5, 2011, at 4:29 AM, Dobbins, Roland wrote:
Owen DeLong <owen@delong.com> writes:
Personally, I think /64 works just fine.
I continue to believe that the "allocate the /64, configure the /127 as a workaround for the router vendors' unevolved designs" approach, which Igor and I discovered we were in violent agreement on when on a panel a few NANOGs ago, is the right one.

With only between 2^34 and 2^35 neocortical neurons in the average human brain (*), perhaps we ought to be engineering for conserving human brain cells rather than IPv6 addresses. That means encoding metadata in the IPv6 address in an easily visible fashion (such as POP aggregation on nybble boundaries) and having a scheme where all subnets are the same size (and just happen to hold an arbitrary number of hosts).

-r

(*) a figure that is as irrelevant to the current conversation as Roland's grandstanding about rejecting arguments about the vastness of the IPv6 space out of hand.
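The "conserve brain cells" scheme above can be sketched as an addressing plan that packs POP, role, and site into fixed nybble positions so an address is readable at a glance. The field layout and values here are invented purely for illustration:

```python
# Sketch: encode metadata on nybble boundaries, so every subnet is the
# same size (/64) and its purpose is visible in the printed address.
def lan_prefix(org: str = "2001:db8", pop: int = 0x1, role: int = 0x2,
               site: int = 0x03, vlan: int = 0x0042) -> str:
    # <org>:<pop><role><site>:<vlan>::/64
    #   pop:  1 nybble  - which point of presence
    #   role: 1 nybble  - e.g. infrastructure vs. customer
    #   site: 2 nybbles - building / aggregation device
    #   vlan: 4 nybbles - one /64 per VLAN, regardless of host count
    return f"{org}:{pop:x}{role:x}{site:02x}:{vlan:04x}::/64"

print(lan_prefix())  # 2001:db8:1203:0042::/64
```

Because the fields sit on nybble boundaries, an operator can aggregate a whole POP with a single short prefix (e.g. everything under `2001:db8:1000::/20` in this made-up layout) and can read an address's role without a lookup table.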
On Thu, Jan 6, 2011 at 7:34 AM, Robert E. Seastrom <rs@seastrom.com> wrote:
I continue to believe that the "allocate the /64, configure the /127 as a workaround for the router vendors' unevolved designs" approach,
As a point of information, I notice that Level3 has deployed without doing this; e.g., they have densely packed /126 subnets, which are conceptually identical to /30s, for infrastructure and point-to-point links. I am still taking your conservative approach, as I see no operational disadvantage to it; but opinions must differ.

On Thu, Jan 6, 2011 at 11:28 AM, <Valdis.Kletnieks@vt.edu> wrote:
And the "ZOMG they can overflow the ARP/ND/whatever table" is a total red herring - you know damned well that if a script kiddie with a 10K node botnet wants to hose down your network, you're going to be looking at a DDoS, and it really doesn't matter whether it's SYN packets, or ND traffic, or forged ICMP echo-reply mobygrams.
I agree, botnets are large enough to DDoS most things. However, with the current state of ARP/ND table implementations in some major equipment vendors' routers, combined with standards-compliant configuration, it doesn't take a botnet. I could DoS your subnet (or whole router) without a botnet, say, using an IPv6 McDonald's Wi-Fi hotspot. That is very much a legitimate concern. A few hundred packets per second will be enough to cripple some platforms. On Thu, Jan 6, 2011 at 1:20 PM, Owen DeLong <owen@delong.com> wrote:
And there are ways to mitigate ND attacks as well.
No, Owen, there aren't. The necessary router knobs do not exist.

The "Cisco approach" is currently to police NDP on a per-interface basis (either with a per-interface or global configuration knob) and break NDP on the interface once that policer is exceeded. This is good (thanks, Cisco) because it limits damage to one subnet; but bad because it exemplifies the severity of the issue: the "Cisco solution" is known to be bad, but is less bad than letting the whole box break. Cisco is not going to come up with a magic knob because there isn't any -- with the current design, you have to pick your failure modes and live with them. That's not good, and it is not a Cisco failing by any means; it is a design failing brought on by the standards bodies.

I would also like to reply in public to a couple of off-list questions, because the questions are common ones, the answers are not necessarily cut-and-dried (my opinion is only one approach; there are others), and this is the kind of useful discussion needed to address this matter. I will leave out the names of the people asking, since they emailed me in private, but I'm sure they won't mind me pasting their questions.

Anonymous Questioner:
What do you think about using only link-local IP addresses on the infrastructure links? I can't think of any potential drawbacks; do you?
This can be done, but then you don't have a numbered next-hop for routing protocols. That's okay if you rewrite it to something else. Note that link-local subnets still have an NDP table, and if that resource is exhausted due to attacks on the router, neighbors with link-local addressing are not immune.

Link-local scope offers numerous advantages, which are mostly outweighed by more practical concerns, like how hard it is going to be to convince the average Windows sysadmin to configure his machine to suit such a design, instead of just taking his business elsewhere. Not a problem for enterprise/gov/academia so much, but a problem for service providers.

On Thu, Jan 6, 2011 at 3:43 PM, Jack Bates <jbates@brightok.net> wrote:
Given that the incomplete age is to protect the L2 network from excessive broadcast/multicast, I agree that aging them out fast would be a wiser
I agree that it would be nice to have such a knob. I bet you and I would make different choices about how to use it, which is the whole point of having a knob instead of a fixed value.
I'm still a proponent for removing as needed requests like this, though. It would have been better to send a global "everyone update me" request periodically, even if triggered by an unknown entry, yet limited to only broadcasting once every 10-30 seconds.
Given that all requests for an unknown arp/ND entry results in all hosts on the network checking, it only makes sense for all hosts to respond. There
This isn't a new idea, and it has its own set of problems, which are well understood. IPv6 NS messages are more clever than ARP, though, and are sent to a computed multicast address. This means that the number of hosts which receive the message is minimized. See RFC 2461 section 2.3 for the quick introduction. NDP is better than ARP.

However, your statement that NDP has all (I'd like to say some) of the same problems as ARP, but that the increased subnet size has magnified them, is basically correct. NDP does some things a lot better than ARP, but not this.

It's important to realize that when this stuff was designed, there were few hardware-based layer-3 routers for IPv4. The biggest networks (Sprint/UUNET/etc.) were exchanging a few hundred megabits per second on PNIs, and the tier-2 guys had DS3s to IXPs. The network has come a long way in terms of user base, bandwidth, and sophistication since then.

Second Anonymous Questioner:
That is, I don't see why a smart rate-limiting implementation doesn't solve most of it.
This is a good question, which I have tried to cover in much earlier posts, perhaps without explaining it effectively.

There currently are major vendors shipping IPv6 routers which age out ARP/NDP entries even if they have continuous traffic -- these routers do an occasional ARP/NDP "refresh" even though the neighbor is constantly sending and receiving traffic. For this reason, an attack can trigger your rate-limit and prevent the "refresh" from working. So, over a period of time, all hosts attached to the router will stop working.

Smarter platforms are still unable to learn about new neighbors while your rate-limit is being triggered. How bad this is depends on the implementation.

Finally, any compromised host on the LAN can fill up the NDP table for the interface (which on most platforms really means filling up the combined NDP/ARP table for the whole box) and either prevent any new neighbor associations or, far worse, cause churn that will disable the entire router and all traffic that needs to transit it. This is extremely bad, and yet it is trivial to execute if you have access to a machine on the LAN.
Yes, during this period, lots of unreachables will not be sent,
Unreachables are really of no concern. The router staying up, and its interfaces continuing to function correctly, are in question. Actually, they're not so much in question... because all existing IPv6 routers can be broken with the trivial method we are discussing if /64 subnets are in use. What's implementation-specific is "how broken," but as you can read above, Cisco has given customers a knob to control the damage, but it's very, very far from a complete solution. Unlike Cisco, some other vendors haven't given this the first thought, let alone added a knob; but even Cisco must do more.

--
Jeff S Wheeler <jsw@inconcepts.biz>
Sr Network Operator / Innovative Network Concepts
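The "computed multicast address" mentioned in this exchange is the solicited-node multicast group of RFC 4861 / RFC 4291: an NS for a target goes to `ff02::1:ffXX:XXXX`, formed from the target's low 24 bits, so only hosts sharing those bits receive it, unlike ARP's all-stations broadcast. A minimal computation, using an arbitrary example address:

```python
# Derive the solicited-node multicast address an NS for `target` is sent to.
import ipaddress

def solicited_node(target: str) -> str:
    low24 = int(ipaddress.IPv6Address(target)) & 0xFFFFFF
    base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(base | low24))

print(solicited_node("2001:db8::21b:21ff:fe3c:9f01"))  # ff02::1:ff3c:9f01
```

This is the sense in which NDP is "more clever than ARP": the query fans out only to the handful of hosts whose addresses share the same low 24 bits, though it does nothing to limit the router-side cache-state problem under discussion.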
On Thu, Jan 6, 2011 at 1:20 PM, Owen DeLong <owen@delong.com> wrote:
And there are ways to mitigate ND attacks as well.
No, Owen, there aren't. The necessary router knobs do not exist. The "Cisco approach" is currently to police NDP on a per-interface basis (either with per-int or global configuration knob) and break NDP on the interface once that policer is exceeded. This is good (thanks, Cisco) because it limits damage to one subnet; but bad because it exemplifies the severity of the issue: the "Cisco solution" is known to be bad, but is less bad than letting the whole box break. Cisco is not going to come up with a magic knob because there isn't any -- with the current design, you have to pick your failure modes and live with them. That's not good and it is not a Cisco failing by any means, it is a design failing brought on by the standards bodies.
Saying this over and over doesn't make it so...

1. Block packets destined for your point-to-point links at your borders. There's no legitimate reason someone should be expecting your routers to respond to packets sent to the router specifically.

2. For networks that aren't intended to receive inbound requests from the outside, limit such requests to the live hosts that exist on those networks with a stateful firewall.

3. Police the ND issue rate on the router. Yes, it means that an ND attack could prevent some legitimate ND requests from getting through, but at least it prevents ND overflow, and the working hosts with existing ND entries continue to function. In most cases, this will be virtually all of the active hosts on the network.

All of these things can be done today with the knobs that exist. The combination of them pretty much takes the wind out of any ND table overflow attack. Yes, it involves some tradeoffs and isn't a perfect solution. However, it is an effective mitigation.

Owen
On Thu, Jan 6, 2011 at 8:47 PM, Owen DeLong <owen@delong.com> wrote:
1. Block packets destined for your point-to-point links at your borders. There's no legitimate reason someone should be
Most networks do not do this today. Whether or not that is wise is questionable, but I don't think those networks want NDP to be the reason they choose to make this change.
2. For networks that aren't intended to receive inbound requests from the outside, limit such requests to the live hosts that exist on those networks with a stateful firewall.
Again, this doesn't fix the problem of misconfigured hosts on the LAN. I can and shall say it over and over, as long as folks continue to miss the potential for one compromised machine to disable the entire LAN, and in many cases, the entire router. It is bad that NDP table overflow can be triggered externally, but even if you solve that problem (which again does not require a stateful firewall, why do you keep saying this?) you still haven't made sure one host machine can't disable the LAN/router.
3. Police the ND issue rate on the router. Yes, it means that an ND attack could prevent some legitimate ND requests from getting through, but, at least it prevents ND overflow and the working hosts with existing ND entries continue to function. In most cases, this will be virtually all of the active hosts on the network.
You must understand that policing will not stop the ND cache from becoming full almost instantly under an attack. Since the largest existing routers have about 100k entries at most, an attack can fill that up in *one second.*

On some platforms, existing entries must be periodically refreshed by an actual ARP/NDP exchange. If they are not refreshed, the entries go away, and traffic stops flowing. This is extremely bad because it can break even hosts with constant traffic flows, such as a server or an infrastructure link to a neighboring router. Depending on the attack PPS and policer configuration, such hosts may remain broken for the duration of the attack.

Implementations differ greatly in this regard. All of them break under an attack. Every single current implementation breaks in some fashion. It's just a question of how bad.

--
Jeff S Wheeler <jsw@inconcepts.biz>
Sr Network Operator / Innovative Network Concepts
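The refresh-starvation failure mode Jeff describes can be illustrated with a crude probabilistic model: a per-interface policer whose slots are shared by attack-driven NS traffic and the periodic refreshes that keep real neighbors alive. All rates are invented for illustration, and real policers are more structured than random sampling:

```python
# Sketch: when the attack rate alone exceeds the NDP policer, refreshes
# for legitimate, actively-communicating neighbors are mostly dropped
# too, so their cache entries eventually age out and traffic stops.
import random

random.seed(1)

POLICER_PPS = 1_000     # NDP packets/s the control plane will process
ATTACK_PPS = 20_000     # scan-driven NS load sharing the same policer
REFRESH_PPS = 50        # refreshes needed to keep real neighbors alive

# Each refresh competes with the attack for a policer slot; its chance
# of being processed is roughly the policer's share of total NDP load.
p_processed = POLICER_PPS / (ATTACK_PPS + REFRESH_PPS)

survived = sum(1 for _ in range(REFRESH_PPS) if random.random() < p_processed)
print(f"{survived}/{REFRESH_PPS} refreshes survived the policer")
```

Under these assumed numbers only a few percent of refreshes get through, which is why a policer alone protects the control plane's CPU but not the neighbors whose entries silently expire during a sustained attack.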
On Thu, 6 Jan 2011 21:13:52 -0500 Jeff Wheeler <jsw@inconcepts.biz> wrote:
On Thu, Jan 6, 2011 at 8:47 PM, Owen DeLong <owen@delong.com> wrote:
1. Block packets destined for your point-to-point links at your borders. There's no legitimate reason someone should be
Most networks do not do this today. Whether or not that is wise is questionable, but I don't think those networks want NDP to be the reason they choose to make this change.
2. For networks that aren't intended to receive inbound requests from the outside, limit such requests to the live hosts that exist on those networks with a stateful firewall.
Again, this doesn't fix the problem of misconfigured hosts on the LAN. I can and shall say it over and over, as long as folks continue to miss the potential for one compromised machine to disable the entire LAN, and in many cases, the entire router. It is bad that NDP table overflow can be triggered externally, but even if you solve that problem (which again does not require a stateful firewall, why do you keep saying this?) you still haven't made sure one host machine can't disable the LAN/router.
Doesn't this risk already exist in IPv4? Any device attached to a LAN can send any traffic it likes to any other device attached to the LAN, whether that be spoofed ARP updates, intentionally created duplicate IP addresses, or simple flat-out traffic-based denial of service attacks using valid IPv4 addresses. Just relying on ARP means you're trusting other LAN-attached devices not to be lying.

If you really think a LAN-attached device being malicious to another LAN-attached device is an unacceptable risk, then you're going to need to abandon the peer-to-peer traffic forwarding topology provided by a multi-access LAN, and adopt a hub-and-spoke one, with the hub (router/firewall) acting as an inspection and quarantining device for all traffic originated by spokes. PPPoE or per-device VLANs would be the way to do that, while still gaining the price benefits of commodity Ethernet.

I definitely think there is an issue with IPv6 ND cache state being exploitable from off-link sources, e.g. the Internet. I think, however, targeting on-link devices on a LAN is far less of an issue - you've already accepted the risk that other LAN devices can send malicious traffic, and those LAN devices also have a vested interest in their default router being available, so they have far less of an incentive to maliciously disable it.
3. Police the ND issue rate on the router. Yes, it means that an ND attack could prevent some legitimate ND requests from getting through, but, at least it prevents ND overflow and the working hosts with existing ND entries continue to function. In most cases, this will be virtually all of the active hosts on the network.
You must understand that policing will not stop the NDCache from becoming full almost instantly under an attack. Since the largest existing routers have about 100k entries at most, an attack can fill that up in *one second.*
On some platforms, existing entries must be periodically refreshed by actual ARP/NDP exchange. If they are not refreshed, the entries go away, and traffic stops flowing. This is extremely bad because it can break even hosts with constant traffic flows, such as a server or infrastructure link to a neighboring router. Depending on the attack PPS and policer configuration, such hosts may remain broken for the duration of the attack.
Implementations differ greatly in this regard, but every single current implementation breaks in some fashion under an attack. It's just a question of how bad.
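The numbers being argued over here can be put into a toy model (all figures assumed from the thread - the 100k-entry table, a ~3-second lifetime for "incomplete" entries before they are abandoned - not measured on any real platform). It shows both points at once: unpoliced, the table fills in about a second; policed, expiry can keep the table below capacity, at the cost of legitimate solicitations competing with the attack inside the policer budget.

```python
# Toy model of a router's NDP neighbor table under a /64 scan.
# Assumed figures: 100k-entry table, INCOMPLETE entries held ~3 s
# (three solicit retries at 1 s intervals) before being dropped.

TABLE_SIZE = 100_000
HOLD_SECS = 3

def time_to_fill(attack_pps, policer_pps=None):
    """Seconds until INCOMPLETE entries occupy the whole table,
    or None if expiry keeps the table below capacity."""
    rate = attack_pps if policer_pps is None else min(attack_pps, policer_pps)
    if rate * HOLD_SECS < TABLE_SIZE:   # steady state fits: never fills
        return None
    return TABLE_SIZE / rate

print(time_to_fill(100_000))                        # unpoliced: 1.0 s to fill
print(time_to_fill(1_000_000, policer_pps=10_000))  # policed: None (never fills)
```

Note what the policed case does not show: the 10k pps budget is now shared between the attack and legitimate resolution attempts, so most new legitimate neighbors still fail to resolve while the attack runs - which is the failure mode being debated above.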
-- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
On Jan 7, 2011, at 4:14 PM, Mark Smith wrote:
Doesn't this risk already exist in IPv4?
There are various vendor knobs/features to ameliorate ARP-level issues in switching gear. Those same knobs aren't viable in IPv6 due to the way ND/NS work, and as you mention, the ND stuff is layer-3-routable. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Fri, 7 Jan 2011 09:38:32 +0000 "Dobbins, Roland" <rdobbins@arbor.net> wrote:
On Jan 7, 2011, at 4:14 PM, Mark Smith wrote:
Doesn't this risk already exist in IPv4?
There are various vendor knobs/features to ameliorate ARP-level issues in switching gear. Those same knobs aren't viable in IPv6 due to the way ND/NS work,
I was commenting on the mentality the OP seemed to be expressing, about *both* onlink and offlink sources triggering address resolution for lots of non-existent destinations, and that this was a new and unacceptable threat. I think the offlink risk is a far more significant one. I think the onlink risk is pretty much no worse than any of the other things that LAN attached devices can do if they choose to.
and as you mention, the ND stuff is layer-3-routable.
The issue isn't that ND is layer 3 routable - it isn't unless a router implementation is broken. The problem is that somebody on the Internet could send 1000s of UDP packets (i.e. an offlink traffic source) towards destinations that don't exist on the target subnet. The upstream router will then perform address resolution, sending ND NSes for those unknown destinations, and while waiting to see if ND NAs are returned, will maintain state for each outstanding ND NS. This state is what is exploitable remotely, unless the state table size is large enough to hold all possible outstanding ND NA entries for all possible addresses on the target subnet.

I think this problem can be avoided by getting rid of this offlink traffic triggered address resolution state. The purpose of the state (from the Neighbor Discovery RFC) is two fold -

- to allow the ND NS to be resent up to two times if no ND NA response is received to ND NSes. A number of triggering packets (e.g. UDP ones or TCP SYNs) are queued as well so that they can be resent if and when ND NS/NA completes.

- to allow ICMP destination unreachables to be generated if the ND NS/NA transaction fails, even after resending.

I think it is acceptable to compromise on these requirements. ND NS/NA transactions are going to be successful most of the time, so the ND NS/NA retransmit mechanism is going to be rarely used. Original traffic sources have to be prepared for it to fail anyway - the Internet is a best effort network, so if a source node wants to be sure a packet gets to the original destination it needs to be prepared to retransmit it. This has actually proved not to be a problem in IPv4 as Cisco routers have for many years dumped the data packet that triggers ARP, which I'm pretty sure is the reason why the ARP timeout is 4 hours, rather than the more common 5 minutes.
Timeouts are pretty much moot anyway, because active Neighbor Unreachability Detection is usually performed these days instead of using simple timeouts for existing ARP entries, and is required to be performed by IPv6.

If you don't maintain state for outstanding ND NS transactions, then that means that the ND NS issuing device will have to just blindly accept any ND NAs it receives at any time, and put them in the neighbor cache, assuming they are correct. That is a vulnerability, as a local node could fill up the neighbor cache with bogus entries, but one that is far less of a risk than the Internet sourced one we're talking about, as it is only onlink devices that can exploit it. As a LAN is already a trusted environment for basic protocol operations, and devices have a vested interest in not disabling other devices that provide them with services e.g. default routers, I think it is a reasonable and acceptable risk given those we already accept in LANs, such as the IPv4 ones I mentioned. If it isn't, implement static address resolution entries, PPPoE, per-device VLANs, SEND etc.

I doubt I need to go into much detail about whether ICMP destination unreachables need to be reliably received; the reality is that they aren't in IPv4 and I doubt that will change much in IPv6. I think they're a "nice to have" not a "need to have".

Regards, Mark.
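Mark's stateless proposal can be sketched in a few lines (a hypothetical model, not any shipping ND implementation): the router solicits for an unknown destination but records nothing pending, drops the triggering packet, and consequently must accept any on-link NA blindly - the residual, on-link-only risk he concedes above.

```python
# Sketch of "stateless" address resolution: no per-NS pending state,
# so off-link traffic cannot build up exploitable state on the router.
# All names here are illustrative, not a real router's API.

class StatelessResolver:
    def __init__(self, cache_limit):
        self.cache = {}                 # ip -> mac: completed entries only
        self.cache_limit = cache_limit

    def forward(self, dst_ip, send_ns):
        if dst_ip in self.cache:
            return self.cache[dst_ip]   # resolved: forward normally
        send_ns(dst_ip)                 # solicit, but keep NO pending state
        return None                     # drop the packet; sender retransmits

    def on_na(self, ip, mac):
        # With no pending-NS state, any on-link NA must be accepted
        # blindly -- an on-link node could stuff the cache with bogus
        # entries, which is the trade-off Mark argues is acceptable.
        if len(self.cache) < self.cache_limit:
            self.cache[ip] = mac

r = StatelessResolver(cache_limit=100_000)
sent = []
assert r.forward("2001:db8::1", sent.append) is None   # unknown: NS sent, packet dropped
r.on_na("2001:db8::1", "00:11:22:33:44:55")
assert r.forward("2001:db8::1", sent.append) == "00:11:22:33:44:55"
```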
On Jan 7, 2011, at 1:28 PM, Mark Smith wrote:
On Fri, 7 Jan 2011 09:38:32 +0000 "Dobbins, Roland" <rdobbins@arbor.net> wrote:
On Jan 7, 2011, at 4:14 PM, Mark Smith wrote:
Doesn't this risk already exist in IPv4?
There are various vendor knobs/features to ameliorate ARP-level issues in switching gear. Those same knobs aren't viable in IPv6 due to the way ND/NS work,
I was commenting on the mentality the OP seemed to be expressing, about *both* onlink and off link sources triggering address resolution for lots of non-existent destinations, and that this was a new and unacceptable threat. I think the offlink risk is a far more significant one. I think the onlink risk pretty much no worse than any of the other things that LAN attached devices can do if they choose to.
and as you mention, the ND stuff is layer-3-routable.
The issue isn't that ND is layer 3 routable - it isn't unless a router implementation is broken. The problem is that somebody on the Internet could send 1000s of UDP packets (i.e. an offlink traffic source) towards destinations that don't exist on the target subnet. The upstream router will then perform address resolution, sending ND NSes for those unknown destinations, and while waiting to see if ND NAs are returned, will maintain state for each outstanding ND NS. This state is what is exploitable remotely, unless the state table size is large enough to hold all possible outstanding ND NA entries for all possible addresses on the target subnet.
I think this problem can be avoided by getting rid of this offlink traffic triggered address resolution state. The purpose of the state (from the Neighbor Discovery RFC) is two fold -
- to allow the ND NS to be resent up to two times if no ND NA response is received to ND NSes. A number of triggering packets (e.g. UDP ones or TCP SYNs) are queued as well so that they can be resent if and when ND NS/NA completes.
- to allow ICMP destination unreachables to be generated if the ND NS/NA transaction fails, even after resending.
I think it is acceptable to compromise on these requirements.
I'm inclined to agree with you, but... I think it might also make sense to eliminate the ND NS/NA transaction altogether for addresses that do not begin with xxxx:xxxx:xxxx:xxxx:000x. In other words, for non-SLAAC addresses, we need the ND NS/NA process (even if we do it stateless, which isn't an entirely bad idea), but, for SLAAC addresses, the MAC is embedded in the IP address, so, why not just use that? Owen
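Owen's observation is mechanical to implement: for a modified-EUI-64 (SLAAC) address, the MAC comes back out by stripping the ff:fe filler bytes and flipping the universal/local bit. A sketch using Python's ipaddress module (addresses below are documentation examples, not from the thread):

```python
import ipaddress

def mac_from_slaac(addr):
    """Recover the MAC from a modified-EUI-64 (SLAAC) IPv6 address.
    Returns None if the interface ID doesn't carry the ff:fe marker."""
    iid = ipaddress.IPv6Address(addr).packed[8:]    # low 64 bits
    if iid[3:5] != b"\xff\xfe":
        return None                                 # not EUI-64 derived
    first = iid[0] ^ 0x02                           # undo the U/L bit flip
    mac = bytes([first]) + iid[1:3] + iid[5:]
    return ":".join(f"{b:02x}" for b in mac)

print(mac_from_slaac("2001:db8::211:22ff:fe33:4455"))  # -> 00:11:22:33:44:55
print(mac_from_slaac("2001:db8::1"))                   # -> None (not SLAAC)
```

The None case is exactly why this can't be the whole answer: privacy addresses, DHCPv6, and static addresses carry no embedded MAC, which is the gap Mark raises in his reply.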
On Fri, 7 Jan 2011 14:53:02 -0800 Owen DeLong <owen@delong.com> wrote:
On Jan 7, 2011, at 1:28 PM, Mark Smith wrote:
On Fri, 7 Jan 2011 09:38:32 +0000 "Dobbins, Roland" <rdobbins@arbor.net> wrote:
On Jan 7, 2011, at 4:14 PM, Mark Smith wrote:
Doesn't this risk already exist in IPv4?
There are various vendor knobs/features to ameliorate ARP-level issues in switching gear. Those same knobs aren't viable in IPv6 due to the way ND/NS work,
I was commenting on the mentality the OP seemed to be expressing, about *both* onlink and off link sources triggering address resolution for lots of non-existent destinations, and that this was a new and unacceptable threat. I think the offlink risk is a far more significant one. I think the onlink risk pretty much no worse than any of the other things that LAN attached devices can do if they choose to.
and as you mention, the ND stuff is layer-3-routable.
The issue isn't that ND is layer 3 routable - it isn't unless a router implementation is broken. The problem is that somebody on the Internet could send 1000s of UDP packets (i.e. an offlink traffic source) towards destinations that don't exist on the target subnet. The upstream router will then perform address resolution, sending ND NSes for those unknown destinations, and while waiting to see if ND NAs are returned, will maintain state for each outstanding ND NS. This state is what is exploitable remotely, unless the state table size is large enough to hold all possible outstanding ND NA entries for all possible addresses on the target subnet.
I think this problem can be avoided by getting rid of this offlink traffic triggered address resolution state. The purpose of the state (from the Neighbor Discovery RFC) is two fold -
- to allow the ND NS to be resent up to two times if no ND NA response is received to ND NSes. A number of triggering packets (e.g. UDP ones or TCP SYNs) are queued as well so that they can be resent if and when ND NS/NA completes.
- to allow ICMP destination unreachables to be generated if the ND NS/NA transaction fails, even after resending.
I think it is acceptable to compromise on these requirements.
I'm inclined to agree with you, but...
I think it might also make sense to eliminate the ND NS/NA transaction altogether for addresses that do not begin with xxxx:xxxx:xxxx:xxxx:000x. In other words, for non SLAAC addresses, we need the ND NS/NA process (even if we do it stateless which isn't an entirely bad idea), but, for SLAAC addresses, the MAC is embedded in the IP address, so, why not just use that?
I think then you'd only be able to avoid the issue by maintaining "MAC/SLAAC" only segments, with no stateful DHCPv6, privacy address or static addressed nodes, so that (stateful or stateless) ND NS/NA could be disabled. While it would work for those types of nodes, it wouldn't restore the properties we'd have to give up for the stateless ND NS/NA idea, as they need state to be performed, regardless of the destination address type.

I think there are a variety of valid and legitimate reasons to preserve the ability to have different layer 3 and layer 2 node addresses, rather than 1-to-1 mapping them. Therefore I think an ideal solution to this problem solves it for all cases, rather than having different solutions or mitigations for different situations.

With your suggestion, in theory a solution would now exist for point-to-point links via /127 configured prefix lengths and for "MAC/SLAAC" only LAN segments. Neither of those solutions can be used for stateful DHCPv6 segments, or segments where privacy addresses are useful and could be used - so we'd only be at 2 out of (at least) 4 situations to mitigate it.

I think when you start going down the path of creating a number of different special cases with fairly different methods of addressing them, it is worth sitting back and saying is there a single general solution that will cover all cases. That is what has been done in IPv6 with ND address resolution, rather than having link specific methods of configuring and/or discovering IPv6 addresses (e.g. in IPv4 IPCP, ARP etc.), so if there is a solution that can be applied within ND address resolution, it automatically applies to all existing and future link types that IPv6 operates over.

Regards, Mark.
On Jan 8, 2011, at 4:28 AM, Mark Smith wrote:
The problem is that somebody on the Internet could send 1000s of UDP packets (i.e. an offlink traffic source) towards destinations that don't exist on the target subnet.
I meant to type 'ND-triggering stuff', concur 100%. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
On Thu, Jan 6, 2011 at 21:13, Jeff Wheeler <jsw@inconcepts.biz> wrote:
On Thu, Jan 6, 2011 at 8:47 PM, Owen DeLong <owen@delong.com> wrote:
1. Block packets destined for your point-to-point links at your borders. There's no legitimate reason someone should be
Most networks do not do this today. Whether or not that is wise is questionable, but I don't think those networks want NDP to be the reason they choose to make this change.
Today (IPv4) they may not, but many recommendations for tomorrow (IPv6) are to use discrete network allocations for your infrastructure (loopbacks and PtP links, specifically) and to filter traffic destined to those at your edges ...
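The edge filtering TJ describes might look something like this, sketched as a hypothetical Cisco IOS-style ACL (the 2001:db8:ffff::/48 infrastructure block and the interface name are placeholders; the Packet Too Big permit follows RFC 4890 so PMTUD keeps working, though whether even that is needed for pure point-to-point space is debated later in this thread):

```
ipv6 access-list EDGE-INGRESS
 remark let Path MTU Discovery reach infrastructure, per RFC 4890
 permit icmp any 2001:db8:ffff::/48 packet-too-big
 remark drop everything else aimed at loopbacks and point-to-point links
 deny ipv6 any 2001:db8:ffff::/48
 remark all other destinations are unaffected
 permit ipv6 any any
!
interface TenGigE0/0/0
 ipv6 traffic-filter EDGE-INGRESS in
```

The key prerequisite, as TJ notes, is having numbered all infrastructure out of one discrete block so the filter stays short.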
3. Police the ND issue rate on the router. Yes, it means that an ND attack could prevent some legitimate ND requests from getting through, but, at least it prevents ND overflow and the working hosts with existing ND entries continue to function. In most cases, this will be virtually all of the active hosts on the network.
You must understand that policing will not stop the NDCache from becoming full almost instantly under an attack. Since the largest existing routers have about 100k entries at most, an attack can fill that up in *one second.*
On some platforms, existing entries must be periodically refreshed by actual ARP/NDP exchange. If they are not refreshed, the entries go away, and traffic stops flowing. This is extremely bad because it can break even hosts with constant traffic flows, such as a server or infrastructure link to a neighboring router. Depending on the attack PPS and policer configuration, such hosts may remain broken for the duration of the attack.
Implementations differ greatly in this regard. All of them break under an attack. Every single current implementation breaks in some fashion. It's just a question of how bad.
And I am not saying there isn't a concern here that we should get vendors to allow us to mitigate; I think we just disagree on the severity of the issue at hand and the complexity of the solution. /TJ
On Jan 7, 2011, at 9:30 PM, TJ wrote:
Today (IPv4) they may not, but many recommendations for tomorrow (IPv6) are to use discrete network allocations for your infrastructure (loopbacks and PtP links, specifically) and to filter traffic destined to those at your edges ...
Actually, this has been an IPv4 BCP for the last decade or so, in order to allow for scalable use of iACLs, CoPP, et al. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
On Fri, Jan 7, 2011 at 09:56, Dobbins, Roland <rdobbins@arbor.net> wrote:
On Jan 7, 2011, at 9:30 PM, TJ wrote:
Today (IPv4) they may not, but many recommendations for tomorrow (IPv6) are to use discrete network allocations for your infrastructure (loopbacks and PtP links, specifically) and to filter traffic destined to those at your edges ...
Actually, this has been an IPv4 BCP for the last decade or so, in order to allow for scalable use of iACLs, CoPP, et. al.
True - but some places don't have enough IPv4 address space or willingness to renumber, nor did some have the forethought to accomplish this ... /TJ
On Thu, 6 Jan 2011, Jeff Wheeler wrote:
On Thu, Jan 6, 2011 at 8:47 PM, Owen DeLong <owen@delong.com> wrote:
1. Block packets destined for your point-to-point links at your borders. There's no legitimate reason someone should be
Most networks do not do this today. Whether or not that is wise is questionable, but I don't think those networks want NDP to be the reason they choose to make this change.
Correct me if I'm wrong, but wouldn't blocking all traffic destined for your infrastructure at the borders also play havoc with PMTUD? Limiting the traffic allowed to just the necessary types would seem like a reasonable alternative. jms
On Jan 7, 2011, at 7:12 AM, Justin M. Streiner wrote:
On Thu, 6 Jan 2011, Jeff Wheeler wrote:
On Thu, Jan 6, 2011 at 8:47 PM, Owen DeLong <owen@delong.com> wrote:
1. Block packets destined for your point-to-point links at your borders. There's no legitimate reason someone should be
Most networks do not do this today. Whether or not that is wise is questionable, but I don't think those networks want NDP to be the reason they choose to make this change.
Correct me if I'm wrong, but wouldn't blocking all traffic destined for your infrastructure at the borders also play havoc with PMTUD? Limiting the traffic allowed to just the necessary types would seem like a reasonable alternative.
jms
It would only play havoc if your infrastructure is originating packets destined to the outside world from its link addresses. Generally this shouldn't happen. Remember, I'm only blocking traffic TO the point-to-point LINK networks. Not to the servers, loopbacks, etc. Owen
On 7 Jan 2011, at 15:12, Justin M. Streiner wrote:
On Thu, 6 Jan 2011, Jeff Wheeler wrote:
On Thu, Jan 6, 2011 at 8:47 PM, Owen DeLong <owen@delong.com> wrote:
1. Block packets destined for your point-to-point links at your borders. There's no legitimate reason someone should be
Most networks do not do this today. Whether or not that is wise is questionable, but I don't think those networks want NDP to be the reason they choose to make this change.
Correct me if I'm wrong, but wouldn't blocking all traffic destined for your infrastructure at the borders also play havoc with PMTUD? Limiting the traffic allowed to just the necessary types would seem like a reasonable alternative.
Recommendations for PMTUD-friendly filtering are described in RFC 4890. Tim
On Jan 10, 2011, at 5:56 AM, Tim Chown wrote:
On 7 Jan 2011, at 15:12, Justin M. Streiner wrote:
On Thu, 6 Jan 2011, Jeff Wheeler wrote:
On Thu, Jan 6, 2011 at 8:47 PM, Owen DeLong <owen@delong.com> wrote:
1. Block packets destined for your point-to-point links at your borders. There's no legitimate reason someone should be
Most networks do not do this today. Whether or not that is wise is questionable, but I don't think those networks want NDP to be the reason they choose to make this change.
Correct me if I'm wrong, but wouldn't blocking all traffic destined for your infrastructure at the borders also play havoc with PMTUD? Limiting the traffic allowed to just the necessary types would seem like a reasonable alternative.
Recommendations for PMTUD-friendly filtering are described in RFC 4890.
Tim
Unless my point-to-point links are originating packets to the outside world (they should not be, in general), then I should not expect any PMTU-D responses directed at them. As such, blocking even those packets TO my point-to-point interfaces should not be problematic. Owen
On 1/10/2011 5:04 PM, Owen DeLong wrote:
Unless my point-to-point links are originating packets to the outside world (they should not be, in general), then I should not expect any PMTU-D responses directed at them.
As such, blocking even those packets TO my point-to-point interfaces should not be problematic.
Unless the router responds to traceroutes with its loopback, I'd expect ping traffic to p2p addresses. I'd also expect to see customer traceroutes originating from the p2p address (useful in troubleshooting as it uses an address outside the customer's BGP addresses). Jack
On 5 jan 2011, at 13:21, Jeff Wheeler wrote:
customers may be driven to expect a /64, or even believe it is necessary for proper functioning.
RFC 3513 says: For all unicast addresses, except those that start with binary value 000, Interface IDs are required to be 64 bits long and to be constructed in Modified EUI-64 format. Nobody has been able or willing to tell why that's in there, though. All the same, beware of the anycast addresses if you want to use a smaller block for point-to-point links. And for LANs: you break stateless autoconfig, and very likely terminally confuse DHCPv6, if your prefix length isn't /64.
This is one that a lot of smart people agree is a serious design flaw in any IPv6 network where /64 LANs are used
It's not a design flaw, it's an implementation flaw. The same one that's in ARP (or maybe RFC 894 wasn't published on April first by accident after all). And the internet managed to survive. A (relatively) easy way to avoid this problem is to either use a stateful firewall that only allows internally initiated sessions, or a filter that lists only addresses that are known to be in use.
and yet, vendors are not doing anything about it.
Then don't give them your business. And maybe a nice demonstration on stage at a NANOG meeting will help a bit?
In addition, if you design your network around /64 LANs, and especially if you take misguided security-by-obscurity advice and randomize your host addresses so they can't be found in a practical time by scanning, you may have a very difficult time if the ultimate solution to this must be to change the typical subnet size from /64 to something that can fit within a practical NDP/ARP table.
Sparse subnets in IPv6 are a feature, not a bug. They're not going to go away.
On Wed, Jan 5, 2011 at 9:39 AM, Iljitsch van Beijnum <iljitsch@muada.com> wrote:
that a lot of smart people agree is a serious design flaw in any IPv6 network where /64 LANs are used
It's not a design flaw, it's an implementation flaw. The same one that's in ARP (or maybe RFC 894 wasn't published on april first by accident after all). And the internet managed to survive.
It appears you want to have a semantic argument. I could grant that, and every point in my message would still stand. However, given that the necessary knobs to protect a network with /64 LANs do not exist on any platform today, that vendors are not talking about whether or not they may in the future, and that no implementation with /64 LANs connected to the Internet, or to any other routed network which may have malicious or compromised hosts, is safe, "design flaw" is correct.

This is a much smaller issue with IPv4 ARP, because routers generally have very generous hardware ARP tables in comparison to the typical size of an IPv4 subnet.

You seem to think the issue is generating NDP NS. While that is a part of the problem, even if a router can generate NS at an unlimited rate (say, by implementing it in hardware) it cannot store an unlimited number of entries. The failure modes of routers that have a full ARP or NDP table obviously vary, but it is never a good thing. In addition, the high-rate NS inquiries will be received by some or all of the hosts on the LAN, consuming their resources and potentially congesting the LAN. Further, if the router's NDP implementation depends on tracking the status of "incomplete" on-going inquiries, the available resource for this can very easily be used up, preventing the router from learning about new neighbors (or worse.) If it does not depend on that, and blindly learns any entry heard from the LAN, then its NDP table can be totally filled by any compromised / malicious host on the LAN, again, breaking the router. Either way is bad.

This is a fundamentally different and much larger problem than those experienced with ARP precisely because the typical subnet size is now, quite literally, seventy-quadrillion times as large as the typical IPv4 subnet.

On Wed, Jan 5, 2011 at 9:39 AM, Iljitsch van Beijnum <iljitsch@muada.com> wrote:
A (relatively) easy way to avoid this problem is to either use a stateful firewall that only allows internally initiated sessions, or a filter that lists only addresses that are known to be in use.
It would certainly be nice to have a stateful firewall on every single LAN connection. Were there high-speed, stateful firewalls in 1994? Perhaps the IPng folks had this solution in mind, but left it out of the standards process. No doubt they all own stock in SonicWall and are eagerly awaiting the day when "Anonymous" takes down a major ISP every day with a simple attack that has been known to exist, but not addressed, for many years.

You must also realize that the stateful firewall has the same problems as the router. It must include a list of allocated IPv6 addresses on each subnet in order to be able to ignore other traffic. While this can certainly be accomplished, it would be much easier to simply list those addresses in the router, which would avoid the expense of any product typically called a "stateful firewall." In either case, you are now maintaining a list of valid addresses for every subnet on the router, and disabling NDP for any other addresses. I agree with you, this knob should be offered by vendors in addition to my list of possible vendor solutions.

On Wed, Jan 5, 2011 at 9:39 AM, Iljitsch van Beijnum <iljitsch@muada.com> wrote:
Sparse subnets in IPv6 are a feature, not a bug. They're not going to go away.
I do not conceptually disagree with sparse subnets. With the equipment limitations of today, they are a plan to fail. Let's hope that all vendors catch up to this before malicious people/groups. -- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
On 1/5/11 8:49 AM, Jeff Wheeler wrote:
On Wed, Jan 5, 2011 at 9:39 AM, Iljitsch van Beijnum <iljitsch@muada.com> wrote:
that a lot of smart people agree is a serious design flaw in any IPv6 network where /64 LANs are used
It's not a design flaw, it's an implementation flaw. The same one that's in ARP (or maybe RFC 894 wasn't published on april first by accident after all). And the internet managed to survive.
It appears you want to have a semantic argument. I could grant that, and every point in my message would still stand. However, given that the necessary knobs to protect a network with /64 LANs do not exist on any platform today, that vendors are not talking about whether or not they may in the future, and that no implementation with /64 LANs connected to the Internet, or to any other routed network which may have malicious or compromised hosts, is safe, "design flaw" is correct.
This is a much smaller issue with IPv4 ARP, because routers generally have very generous hardware ARP tables in comparison to the typical size of an IPv4 subnet.
No, it isn't. If you've ever had your Juniper router become unavailable because the ARP policer caused it to start ignoring updates, or seen systems become unavailable due to an ARP storm, you'd know that you can abuse ARP on a rather small subnet.
You seem to think the issue is generating NDP NS. While that is a part of the problem, even if a router can generate NS at an unlimited rate (say, by implementing it in hardware) it cannot store an unlimited number of entries. The failure modes of routers that have a full ARP or NDP table obviously vary, but it is never a good thing. In addition, the high-rate NS inquiries will be received by some or all of the hosts on the LAN, consuming their resources and potentially congesting the LAN. Further, if the router's NDP implementation depends on tracking the status of "incomplete" on-going inquiries, the available resource for this can very easily be used up, preventing the router from learning about new neighbors (or worse.) If it does not depend on that, and blindly learns any entry heard from the LAN, then its NDP table can be totally filled by any compromised / malicious host on the LAN, again, breaking the router. Either way is bad.
This is a fundamentally different and much larger problem than those experienced with ARP precisely because the typical subnet size is now, quite literally, seventy-quadrillion times as large as the typical IPv4 subnet.
On Wed, Jan 5, 2011 at 9:39 AM, Iljitsch van Beijnum <iljitsch@muada.com> wrote:
A (relatively) easy way to avoid this problem is to either use a stateful firewall that only allows internally initiated sessions, or a filter that lists only addresses that are known to be in use.
It would certainly be nice to have a stateful firewall on every single LAN connection. Were there high-speed, stateful firewalls in 1994? Perhaps the IPng folks had this solution in mind, but left it out of the standards process. No doubt they all own stock in SonicWall and are eagerly awaiting the day when "Anonymous" takes down a major ISP every day with a simple attack that has been known to exist, but not addressed, for many years.
You must also realize that the stateful firewall has the same problems as the router. It must include a list of allocated IPv6 addresses on each subnet in order to be able to ignore other traffic. While this can certainly be accomplished, it would be much easier to simply list those addresses in the router, which would avoid the expense of any product typically called a "stateful firewall." In either case, you are now maintaining a list of valid addresses for every subnet on the router, and disabling NDP for any other addresses. I agree with you, this knob should be offered by vendors in addition to my list of possible vendor solutions.
On Wed, Jan 5, 2011 at 9:39 AM, Iljitsch van Beijnum <iljitsch@muada.com> wrote:
Sparse subnets in IPv6 are a feature, not a bug. They're not going to go away.
I do not conceptually disagree with sparse subnets. With the equipment limitations of today, they are a plan to fail. Let's hope that all vendors catch up to this before malicious people/groups.
On Wed, Jan 5, 2011 at 12:04 PM, Joel Jaeggli <joelja@bogus.com> wrote:
no it isn't, if you've ever had your juniper router become unavailable because the arp policer caused it to start ignoring updates, or seen systems become unavailable due to an arp storm you'd know that you can abuse arp on a rather small subnet.
These conditions can only be triggered by malicious hosts on the LAN. With IPv6, it can be triggered by scanning attacks originated from "the Internet." No misconfiguration or compromised machine on your network is necessary. This is why it is a fundamentally different, and much larger, problem. Since you seem confused about the basic nature of this issue, I will explain it for you in more detail:

IPv4) I can scan your v4 subnet, let's say it's a /24, and your router might send 250 ARP requests and may even add 250 "incomplete" entries to its ARP table. This is not a disaster for that LAN, or any others. No big deal. I can also intentionally send a large amount of traffic to unused v4 IPs on the LAN, which will be handled as unknown-unicast and sent to all hosts on the LAN via broadcasting, but many boxes already have knobs for this, as do many switches. Not good, but also does not affect any other interfaces on the router.

IPv6) I can scan your v6 /64 subnet, and your router will have to send out NDP NS for every host I scan. If it requires "incomplete" entries in its table, I will use them all up, and NDP learning will be broken. Typically, this breaks not just on that interface, but on the entire router. This is much worse than the v4/ARP situation.

I trust you will understand the depth of this problem once you realize that no device has enough memory to prevent these attacks without knobs that make various compromises available via configuration. -- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
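The "seventy-quadrillion" claim earlier in the thread checks out, taking a /24 as the typical IPv4 subnet per the message above, and the one-second table-fill figure follows directly from the assumed numbers:

```python
# Checking the thread's arithmetic (all figures taken from the posts above).
v64 = 2**64            # addresses in one IPv6 /64
v24 = 2**8             # addresses in one IPv4 /24
ratio = v64 // v24
print(ratio)           # 72057594037927936, i.e. ~7.2 * 10**16 ("seventy quadrillion")

table_entries = 100_000    # largest NDP tables cited in the thread
scan_pps = 100_000         # one new "incomplete" entry per scan packet
print(table_entries / scan_pps)   # 1.0 -- seconds to exhaust the table
```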
Jeff Wheeler (jsw) writes:
IPv4)
[...]
Not good, but also does not affect any other interfaces on the router.
You're assuming that all routing devices have per-interface ARP tables.
IPv6) Typically, this breaks not just on that interface, but on the entire router. This is much worse than the v4/ARP situation.
Inverse assumption here. It doesn't change much about the scenario you've put forward as the cause of the problem, but I still wanted to point it out. Cheers, Phil
On Wed, Jan 5, 2011 at 12:26 PM, Phil Regnauld <regnauld@nsrc.org> wrote:
Jeff Wheeler (jsw) writes:
Not good, but also does not affect any other interfaces on the router. You're assuming that all routing devices have per-interface ARP tables.
No, Phil, I am assuming that the routing device has a larger ARP table than 250 entries. To be more correct, I am assuming that the routing device has a large enough ARP table that any one subnet could go from 0 ARP entries to 100% ARP entries without using up all the remaining ARP resources on the box. This is usually true. Further, routing devices usually have enough ARP table space that every subnet attached to them could be 100% full of active ARP entries without using up all the ARP resources. This is also often true.

To give some figures, a Cisco 3750 "pizza box" layer-3 switch has room for several thousand ARP entries, and I have several with 3000 - 5000 active ARPs. Most people probably do not have more than a /20 worth of subnets for ARPing to a pizza box switch like this, but it does basically work.

As we all know, a /64 is a lot more than a few thousand possible addresses. It is more addresses than anyone can store in memory, so state information for "incomplete" can't be tracked in software without creating problems there. Being fully stateless for new neighbor learning is possible and desirable, but a malicious host on the LAN can badly impact the router. This is why per-interface knobs are badly needed. The largest current routing devices have room for about 100,000 ARP/NDP entries, which can be used up in a fraction of a second with a gigabit of malicious traffic flow. What happens after that is the problem, and we need to tell our vendors what knobs we want so we can "choose our own failure mode" and limit damage to one interface/LAN.

-- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
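The "fraction of a second" figure is easy to sanity-check. A back-of-the-envelope sketch, assuming minimum-size frames and the 100,000-entry table mentioned above (both rough figures, not vendor specs):

```python
# Back-of-the-envelope: how fast a 1 Gb/s scan flow can exhaust a
# 100,000-entry neighbor table. Figures are rough assumptions.

LINK_BPS = 1_000_000_000      # 1 gigabit/s of attack traffic
PKT_BITS = 84 * 8             # minimal Ethernet frame incl. gap/preamble
TABLE = 100_000               # approx. largest current ARP/NDP tables

pps = LINK_BPS / PKT_BITS     # packets per second the flow can carry
seconds_to_fill = TABLE / pps # one new INCOMPLETE entry per packet

print(f"{pps:,.0f} pps -> table full in {seconds_to_fill:.3f} s")
```

At roughly 1.5 million packets per second, the table is gone in well under a tenth of a second, consistent with the claim above.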
Jeff Wheeler (jsw) writes:
are badly needed. The largest current routing devices have room for about 100,000 ARP/NDP entries, which can be used up in a fraction of a second with a gigabit of malicious traffic flow. What happens after that is the problem, and we need to tell our vendors what knobs we want so we can "choose our own failure mode" and limit damage to one interface/LAN.
Well there are *some* knobs: http://www.cisco.com/en/US/docs/ios/ipv6/configuration/guide/ip6-addrg_bsc_c... Not very smart, as it just controls how fast you run out of entries. I haven't read all entries in this thread yet, but I wonder if http://tools.ietf.org/html/draft-jiang-v6ops-nc-protection-01 has been mentioned ? Seems also that this topic has been brought up here a year ago give or take a couple of weeks: http://www.mail-archive.com/nanog@nanog.org/msg18841.html Cheers, Phil
On Wed, 5 Jan 2011 18:57:50 +0100 Phil Regnauld <regnauld@nsrc.org> wrote:
Jeff Wheeler (jsw) writes:
are badly needed. The largest current routing devices have room for about 100,000 ARP/NDP entries, which can be used up in a fraction of a second with a gigabit of malicious traffic flow. What happens after that is the problem, and we need to tell our vendors what knobs we want so we can "choose our own failure mode" and limit damage to one interface/LAN.
Well there are *some* knobs:
http://www.cisco.com/en/US/docs/ios/ipv6/configuration/guide/ip6-addrg_bsc_c...
Not very smart, as it just controls how fast you run out of entries.
I haven't read all entries in this thread yet, but I wonder if http://tools.ietf.org/html/draft-jiang-v6ops-nc-protection-01 has been mentioned ?
The problem fundamentally is the outstanding state while the NS/NA transaction takes place. IPX had big subnets (i.e. /32s out of 80-bit addresses), but as it didn't use a layer 3 to layer 2 address resolution protocol requiring transaction state (layer 2 addresses were the layer 3 node addresses), it didn't have (or wouldn't have had) this issue.

I think the answer is to go stateless for the NS/NA transaction: either blindly trusting the received NAs (initially compatible with current NS/NA mechanisms), which reduces the set of nodes that can exploit neighbor cache tables to those on-link, and then eventually moving towards a nonce-based verification of received NAs, which in effect carries the NS/NA transaction state within the packet, rather than storing it within the NS'ing node's memory.

Going stateless means losing ICMPv6 destination unreachables for non-existent neighbors; however, (a) vendors aren't implementing those on P2P links already because they switch off ND address resolution, (b) the /127 P2P proposal switches them off because it proposes switching off ND address resolution, and (c) firewalls commonly drop them inbound from the Internet anyway. Other possible options - http://www.ietf.org/mail-archive/web/ipv6/current/msg12400.html
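The nonce idea can be sketched in a few lines: derive the nonce from a local secret and the target address, so the NS'ing node stores nothing per transaction and can still verify the returning NA. Everything here (names, nonce size, key handling) is illustrative, not a real ND implementation:

```python
# Sketch of carrying NS/NA transaction state in the packet instead of
# in the router's memory. All names and sizes are invented for
# illustration; this is not an actual neighbor-discovery stack.
import hmac, hashlib, os

SECRET = os.urandom(16)  # would be rotated periodically in a real design

def make_nonce(target_addr: str) -> bytes:
    """Nonce to embed in the outgoing NS; derived, so nothing is stored."""
    return hmac.new(SECRET, target_addr.encode(), hashlib.sha256).digest()[:8]

def verify_na(target_addr: str, echoed_nonce: bytes) -> bool:
    """Accept an NA only if it echoes the nonce we would have sent.
    Constant-time compare avoids a timing side channel."""
    return hmac.compare_digest(make_nonce(target_addr), echoed_nonce)

nonce = make_nonce("2001:db8::1")
assert verify_na("2001:db8::1", nonce)       # genuine responder
assert not verify_na("2001:db8::2", nonce)   # forged/replayed NA rejected
```

The router only allocates a cache entry when a verifiable NA arrives, so a remote scan of unused addresses creates no state at all.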
Seems also that this topic has been brought up here a year ago give or take a couple of weeks:
http://www.mail-archive.com/nanog@nanog.org/msg18841.html
Cheers, Phil
On Jan 5, 2011, at 9:44 AM, Jeff Wheeler wrote:
On Wed, Jan 5, 2011 at 12:26 PM, Phil Regnauld <regnauld@nsrc.org> wrote:
Jeff Wheeler (jsw) writes:
Not good, but also does not affect any other interfaces on the router. You're assuming that all routing devices have per-interface ARP tables.
No, Phil, I am assuming that the routing device has a larger ARP table than 250 entries. To be more correct, I am assuming that the routing device has a large enough ARP table that any one subnet could go from 0 ARP entries to 100% ARP entries without using up all the remaining ARP resources on the box. This is usually true. Further, routing devices usually have enough ARP table space that every subnet attached to them could be 100% full of active ARP entries without using up all the ARP resources. This is also often true.
But, Jeff, if the router has a bunch of /24s attached to it and you scan them all, the problem is much larger than 250 arp entries. I think that's what Phil was getting at. Owen
Owen DeLong (owen) writes:
But, Jeff, if the router has a bunch of /24s attached to it and you scan them all, the problem is much larger than 250 arp entries.
I think that's what Phil was getting at.
And so did Joel. If you've got a crapload of VLANs attached to a box, and you're routing these for customers (say, virtual firewall instances), you'll see this easily. I do understand the argument that sweeping a /64 will mean more L3->L2 lookups for directly connected subnets than in v4, but the problem domain remains the same, and I think it's already been explained here that there are various strategies to mitigate this. Additionally, I believe the size of typical recommended IPv6 space will probably discourage idle scanning, though this may change as the resources available increase, as Joe G. pointed out. If it does not, we'll have to address it if the existing mitigation techniques aren't sufficient. Phil
On 1/5/2011 11:19 AM, Jeff Wheeler wrote:
IPv6) I can scan your v6 /64 subnet, and your router will have to send out NDP NS for every host I scan. If it requires "incomplete" entries in its table, I will use them all up, and NDP learning will be broken. Typically, this breaks not just on that interface, but on the entire router. This is much worse than the v4/ARP situation.
I haven't checked of late for v6, but I'd expect the same NDP security we have for ARP these days, which reduces the need to even send unsolicited ND requests. In this day and age, sending unsolicited neighbor requests from a router seems terribly broken. Even with SLAAC, one could quickly design a model that doesn't require unsolicited ND from the router to find the remote computer. This could possibly utilize DAD checks or even await the first packet from the node (similar to how we fill our MAC forwarding tables in switches, and not all switches will broadcast when a MAC is unknown). Jack
IPv6) I can scan your v6 /64 subnet, and your router will have to send out NDP NS for every host I scan. If it requires "incomplete" entries in its table, I will use them all up, and NDP learning will be broken. Typically, this breaks not just on that interface, but on the entire router. This is much worse than the v4/ARP situation.
I'm guessing you're referring to this paragraph of RFC 4861: " When a node has a unicast packet to send to a neighbor, but does not know the neighbor's link-layer address, it performs address resolution. For multicast-capable interfaces, this entails creating a Neighbor Cache entry in the INCOMPLETE state and transmitting a Neighbor Solicitation message targeted at the neighbor. The solicitation is sent to the solicited-node multicast address corresponding to the target address. " <http://tools.ietf.org/html/rfc4861#section-7.2.2> It's worth noting that nothing in this paragraph is normative (there's no RFC 2119 language), so implementations are free to ignore it. I haven't read the NIST document, but it wouldn't conflict with the RFC if they recommended ignoring this paragraph and just relying on the ND cache they already have when a packet arrives. --Richard
IPv4) I can scan your v4 subnet, let's say it's a /24, and your router might send 250 ARP requests and may even add 250 "incomplete" entries to its ARP table. This is not a disaster for that LAN, or any others. No big deal. I can also intentionally send a large amount of traffic to unused v4 IPs on the LAN, which will be handled as unknown-unicast and sent to all hosts on the LAN via broadcasting, but many boxes already have knobs for this, as do many switches. Not good, but also does not affect any other interfaces on the router.
IPv6) I can scan your v6 /64 subnet, and your router will have to send out NDP NS for every host I scan. If it requires "incomplete" entries in its table, I will use them all up, and NDP learning will be broken. Typically, this breaks not just on that interface, but on the entire router. This is much worse than the v4/ARP situation.
Many would argue that the version of IP is irrelevant, if you are permitting external hosts the ability to scan your internal network in an unrestricted fashion (no stateful filtering or rate limiting) you have already lost, you just might not know it yet. Even granting that, for the sake of argument - it seems like it would not be hard for $vendor to have some sort of "emergency garbage collection" routines within their NDP implementations ... ? /TJ
On Wed, Jan 5, 2011 at 1:02 PM, TJ <trejrco@gmail.com> wrote:
Many would argue that the version of IP is irrelevant, if you are permitting external hosts the ability to scan your internal network in an unrestricted fashion (no stateful filtering or rate limiting) you have already lost, you
How do you propose to rate-limit this scanning traffic? More router knobs are needed. This also does not solve problems with malicious hosts on the LAN. A stateful firewall on every router interface has been suggested already on this thread. It is unrealistic.
Even granting that, for the sake of argument - it seems like it would not be hard for $vendor to have some sort of "emergency garbage collection" routines within their NDP implementations ... ?
How do you propose the router know what entries are "garbage" and which are needed? Eliminating active, "good" entries to allow for more churn would make the problem much worse, not better. -- Jeff S Wheeler <jsw@inconcepts.biz> +1-212-981-0607 Sr Network Operator / Innovative Network Concepts
On Jan 6, 2011, at 1:14 AM, Jeff Wheeler wrote:
A stateful firewall on every router interface has been suggested already on this thread. It is unrealistic.
It isn't just unrealistic, it's highly undesirable, since it represents a huge DoS state vector. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Wed, Jan 5, 2011 at 13:14, Jeff Wheeler <jsw@inconcepts.biz> wrote:
Many would argue that the version of IP is irrelevant, if you are
On Wed, Jan 5, 2011 at 1:02 PM, TJ <trejrco@gmail.com> wrote: permitting
external hosts the ability to scan your internal network in an unrestricted fashion (no stateful filtering or rate limiting) you have already lost, you
How do you propose to rate-limit this scanning traffic? More router knobs are needed. This also does not solve problems with malicious hosts on the LAN.
Off the top of my head, maybe just slow down the generation of new NS attempts when under attack (without impacting the NUD-based NS).
A stateful firewall on every router interface has been suggested already on this thread. It is unrealistic.
Even granting that, for the sake of argument - it seems like it would not be hard for $vendor to have some sort of "emergency garbage collection" routines within their NDP implementations ... ?
How do you propose the router know what entries are "garbage" and which are needed? Eliminating active, "good" entries to allow for more churn would make the problem much worse, not better.
Again, off the top of my head, maybe - when under duress - age out the incomplete ND table entries faster. /TJ
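TJ's suggestion can be sketched as a cache policy that shortens the INCOMPLETE lifetime once the table passes an occupancy threshold. The threshold and timers below are invented for illustration; a real implementation would tune them per platform:

```python
# Sketch of "age out incomplete entries faster when under duress".
# Thresholds and TTLs are illustrative, not from any vendor.

class NDCache:
    def __init__(self, capacity, normal_ttl=3.0, duress_ttl=0.2):
        self.capacity = capacity
        self.normal_ttl = normal_ttl
        self.duress_ttl = duress_ttl
        self.incomplete = {}  # target address -> creation timestamp

    def ttl(self):
        # Under duress (table >80% full), expire INCOMPLETEs much sooner.
        duress = len(self.incomplete) > 0.8 * self.capacity
        return self.duress_ttl if duress else self.normal_ttl

    def add_incomplete(self, target, now):
        # Evict expired INCOMPLETE entries first, then admit if room remains.
        ttl = self.ttl()
        self.incomplete = {t: ts for t, ts in self.incomplete.items()
                           if now - ts < ttl}
        if len(self.incomplete) < self.capacity:
            self.incomplete[target] = now
            return True
        return False

cache = NDCache(capacity=4)
for i in range(4):
    cache.add_incomplete(f"fe80::{i}", now=0.0)
assert not cache.add_incomplete("fe80::bad", now=0.0)  # full, nothing aged yet
assert cache.add_incomplete("fe80::bad", now=0.3)      # duress TTL expired them
```

Note this only bounds how long the table stays jammed; as TJ says above, it would complement, not replace, rate-limiting the generation of new NS.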
On 1/6/2011 2:17 PM, TJ wrote:
Again, off the top of my head, maybe - when under duress - age out the incomplete ND table entries faster.
Given that the incomplete age is to protect the L2 network from excessive broadcast/multicast, I agree that aging them out fast would be a wiser solution, if you must have it to begin with. It is better to increase traffic loads than to exhaust the table. I'm still a proponent of removing as-needed requests like this, though. It would have been better to send a global "everyone update me" request periodically, even if triggered by an unknown entry, yet limited to broadcasting only once every 10-30 seconds. Given that every request for an unknown ARP/ND entry results in all hosts on the network checking, it only makes sense for all hosts to respond. There may be other concerns, but I'm actually not against all hosts responding via multicast to all other hosts, so that a full mesh can be established ahead of time. The idea of minimizing the table to an as-needed basis should not have continued into IPv6. Special provisions could be handled when dealing with proxy-ND, but I'm not sure that is needed either. Jack
On 1/5/2011 10:02, TJ wrote:
Many would argue that the version of IP is irrelevant, if you are permitting external hosts the ability to scan your internal network in an unrestricted fashion (no stateful filtering or rate limiting) you have already lost, you just might not know it yet.
Stateful filtering introduces its own set of scaling issues. ~Seth
On Jan 6, 2011, at 1:02 AM, TJ wrote:
if you are permitting external hosts the ability to scan your internal network in an unrestricted fashion
DCN aside, how precisely does one define 'internal network' in, say, the context of the production network of a broadband access SP, or hosting/colocation/VPS/IaaS SP? Surely you aren't advocating wedging stateful firewalls into broadband access networks or in front of servers, with all the DoS chokepoint breakage that implies? ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
On Wed, Jan 5, 2011 at 17:44, Dobbins, Roland <rdobbins@arbor.net> wrote:
On Jan 6, 2011, at 1:02 AM, TJ wrote:
if you are permitting external hosts the ability to scan your internal network in an unrestricted fashion
DCN aside, how precisely does one define 'internal network' in, say, the context of the production network of a broadband access SP, or hosting/colocation/VPS/IaaS SP?
In that scenario, "internal" would be the SP's network itself - traffic actually directed to their infrastructure.
This is a much smaller issue with IPv4 ARP, because routers generally have very generous hardware ARP tables in comparison to the typical size of an IPv4 subnet.
no it isn't, if you've ever had your juniper router become unavailable because the arp policer caused it to start ignoring updates, or seen systems become unavailable due to an arp storm you'd know that you can abuse arp on a rather small subnet.
It may also be worth noting that "typical size of an IPv4 subnet" is a bit of a red herring; a v4 router that's responsible for /16 of directly attached /24's is still able to run into some serious issues. What's more important is the rate at which scanning can occur, which is largely a function of (for a remote attacker) speed of connection to an upstream; this problem is getting worse.

A practical lesson is the so-called "Kaminsky DNS vulnerability" (which Kaminsky didn't actually discover - this issue was known back around 2000, at least, but at the time was deemed impractical to exploit due to bandwidth and processing limitations). We do need to be aware that continued increases in the available resources will change the viability of attacks in the future.

The switch from IPv4 to IPv6 itself is such a change; it renders random trolling through IP space much less productive. We should not lose sight of the fact that this is generally a very positive feature; calls for packing IPv6 space more tightly serve merely to marginalize that win. We should be figuring out ways to make /64's work optimally, because in ten years everyone's going to have gigabit Internet links and we're going to need all the tricks we can muster to make an attacker's job harder.

... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
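To quantify just how unproductive random trolling through a /64 is, a quick sketch (the probe rates are arbitrary round numbers, not measurements):

```python
# How long a brute-force sweep of one /64 takes at various probe rates.
# Illustrates why random scanning of IPv6 space is unproductive even
# as attacker bandwidth keeps growing.

SECONDS_PER_YEAR = 365 * 24 * 3600

for pps in (1e6, 1e9, 1e12):  # probes per second
    years = 2**64 / pps / SECONDS_PER_YEAR
    print(f"{pps:>8.0e} pps -> {years:,.1f} years to sweep a /64")
```

Even at a trillion probes per second, a single /64 takes months; at rates achievable today, it takes hundreds of thousands of years. This is exactly why attackers shift to hinted scanning, as discussed below.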
On Jan 6, 2011, at 8:57 AM, Joe Greco wrote:
The switch from IPv4 to IPv6 itself is such a change; it renders random trolling through IP space much less productive.
And renders hinted trolling far more productive/necessary, invariably leading to increased strain on already-brittle/-overloaded DNS, whois, route servers, et al., not to mention ND/multicast abuse.
We should not lose sight of the fact that this is generally a very positive feature; calls for packing IPv6 space more tightly serve merely to marginalize that win.
Far from being a 'win', I believe it's either neutral or a net negative, due to the above implications. If we're looking at a near-future world filled with spimes, where every molecule in every nanomanufactured soda can has its own IPv6 address it uses to communicate via NFC or ZigBee or whatever during the assembly/recycling process, unnecessarily wasting IPv6 space isn't an optimal strategy. The alleged security benefits of sparse IPv6 addressing plans are a canard, IMHO.
We should be figuring out ways to make /64's work optimally, because in ten years everyone's going to have gigabit Internet links and we're going to need all the tricks we can muster to make an attacker's job harder.
These are diametrically-opposed, mutually-exclusive goals, IMHO. All in all, IPv6 is a net security negative. It has all the same problems of IPv4, plus new, IPv6-specific problems - *in hex*. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
The switch from IPv4 to IPv6 itself is such a change; it renders random trolling through IP space much less productive.
And renders hinted trolling far more productive/necessary, invariably leading to increased strain on already-brittle/-overloaded DNS, whois, route servers, et al., not to mention ND/multicast abuse.
Of course, you *want* them attacking the lower layers, you don't want them attacking the more easily defended higher layers... got an investment in Cisco stock there? :-) But seriously, if your solution is to eliminate sparseness, then you've also just made attacking networks a whole lot easier.
We should not lose sight of the fact that this is generally a very positive feature; calls for packing IPv6 space more tightly serve merely to marginalize that win.
Far from being a 'win', I believe it's either neutral or a net negative, due to the above implications.
Then you need to re-evaluate; I'd much prefer having to protect resources like DNS servers. With a DNS server, I can monitor access trends, or set off excessive query alarms, and I can even write the code to do all that without having to create custom silicon to implement it. One can only imagine how frustrating a $GENERATE must be to a PTR-scanner. Why do we have to repeat all the mistakes of IPv4 in v6? Packing everything densely is an obvious problem with IPv4; we learned early on that having a 48-bit (32 address, 16 port) space to scan made port-scanning easy, attractive, productive, and commonplace. If there are operational problems with IPv6, now's a great time to figure them out and figure out how to make it work well. Re-engineering the protocol at this late hour is unlikely to be productive; it took many years to get IPv6 into the state it is, and if we are going to go and change it all because you don't like sparseness, will it be ready to deploy before 2020? ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
On Jan 6, 2011, at 10:08 AM, Joe Greco wrote:
Packing everything densely is an obvious problem with IPv4; we learned early on that having a 48-bit (32 address, 16 port) space to scan made port-scanning easy, attractive, productive, and commonplace.
I don't believe that host-/port-scanning is as serious a problem as you seem to think it is, nor do I think that trying to somehow prevent hosts from being host-/port-scanned has any material benefit in terms of security posture; that's our fundamental disagreement.

If I've done what's necessary to secure my hosts/applications, host-/port-scanning isn't going to find anything to exploit (overly-aggressive scanning can be a DoS vector, but there are ways to ameliorate that, too). If I haven't done what's necessary to secure my hosts/applications, one way or another, they *will* end up being exploited - and the faux security-by-obscurity offered by sparse addressing won't matter a bit.

This whole focus on sparse addressing is just another way to tout security-by-obscurity. We already know that security-by-obscurity is a fundamentally-flawed concept, so it doesn't make sense to try and keep rationalizing it in various domain-specific instantiations.

------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
From: Dobbins, Roland Sent: Wednesday, January 05, 2011 7:19 PM To: Nanog Operators' Group Subject: Re: NIST IPv6 document
On Jan 6, 2011, at 10:08 AM, Joe Greco wrote:
I don't believe that host-/port-scanning is as serious a problem as you seem to think it is, nor do I think that trying to somehow prevent hosts from being host-/port-scanned has any material benefit in terms of security posture, that's our fundamental disagreement.
It will be a problem if people learn they can DoS routers by maxing out the neighbor table.
If I've done what's necessary to secure my hosts/applications, host- /port-scanning isn't going to find anything to exploit (overly- aggressive scanning can be a DoS vector, but there are ways to ameliorate that, too).
I don't think you are understanding the problem. The problem comes from addressing hosts that don't even exist. This causes the router to attempt to find that host via ND, the v6 equivalent of ARP. At some point that table becomes full of entries for hosts that don't exist so there isn't room for hosts that do exist.
This whole focus on sparse addressing is just another way to tout security-by-obscurity. We already know that security-by-obscurity is a fundamentally-flawed concept, so it doesn't make sense to try and keep rationalizing it in various domain-specific instantiations.
No, it was designed to accommodate EUI-64 addresses which are to replace MAC-48 addresses at layer2. We currently create an EUI-64 address out of a MAC-48 in many cases during SLAAC but at some point the interfaces will be shipping with EUI-64 addresses. The world is running out of MAC-48 addresses. So at some point the "MAC" address will be the host address and it will be 64-bits long. It has nothing to do with "security by obscurity".
On Jan 6, 2011, at 10:42 AM, George Bonser wrote:
It will be a problem if people learn they can DoS routers by doing it by maxing out the neighbor table.
I understand this - that's a completely separate issue from the supposed benefits of sparse addressing for endpoint host security.
I don't think you are understanding the problem.
I've understood the problem for years, thanks, and have commented on it in other portions of this thread, as well as in many earlier threads around this general set of issues - and it's completely orthogonal to this particular discussion. Or are you saying that you think that the miscreants will simply and contritely abandon host-/port-scanning as a) a host-discovery mechanism and b) as a DoS mechanism if everyone magically adopts sparse addressing? Somehow, I don't think that's very likely. ;> Also, see my previous comments in re the negative implications of hinted scanning.
It has nothing to do with "security by obscurity".
You may wish to re-read what Joe was saying - he was positing sparse addressing as a positive good because it will supposedly make it more difficult for attackers to locate endpoints in the first place, i.e., security through obscurity. I think that's an invalid argument. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
I've understood the problem for years, thanks, and have commented on it in other portions of this thread, as well as in many earlier threads around this general set of issues - and it's completely orthogonal to this particular discussion.
I suppose what confused me was this: "I don't believe that host-/port-scanning is as serious a problem as you seem to think it is, nor do I think that trying to somehow prevent hosts from being host-/port-scanned has any material benefit in terms of security posture, that's our fundamental disagreement. If I've done what's necessary to secure my hosts/applications, host-/port-scanning isn't going to find anything to exploit (overly-aggressive scanning can be a DoS vector, but there are ways to ameliorate that, too)." I thought the entire notion of actually getting to a host was orthogonal to the discussion as that wasn't the point. It wasn't about exploitation of anything on the host, the discussion was about the act of scanning a network itself being the problem. If network devices can be degraded simply by scanning the network, it is going to become *very* commonplace. But the sets of problems are different for an end user network vs. a service provider network. For a transit link you might disable ND and configure static neighbors which would inoculate that link from such a neighbor table exhaustion attack. For an end network, the problems are different.
On Jan 6, 2011, at 11:16 AM, George Bonser wrote:
I thought the entire notion of actually getting to a host was orthogonal to the discussion as that wasn't the point. It wasn't about exploitation of anything on the host, the discussion was about the act of scanning a network itself being the problem.
That's a separate sub-thread. Joe was specifically talking about sparse addressing as a way to keep the attackers from finding end-hosts. My view is that a) nothing will keep the attackers from finding the end-hosts, b) they'll scan, anyways, c) they'd do hinted scanning (DNS/whois/routing tables) which will have its own negative second-order effects, and therefore d) the scanning issue in terms of endpoint security is a red herring.
If network devices can be degraded simply by scanning the network, it is going to become *very* commonplace.
They already can be, and it's going to become more commonplace as a DoS attack vector, concur w/you 100%.
But the sets of problems are different for an end user network vs. a service provider network. For a transit link you might disable ND and configure static neighbors which would inoculate that link from such a neighbor table exhaustion attack.
If you're using /64s for your p2p links, the router's still been turned into a sinkhole, though.
For an end network, the problems are different.
Concur again. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
It has nothing to do with "security by obscurity".
You may wish to re-read what Joe was saying - he was positing sparse addressing as a positive good because it will supposedly make it more difficult for attackers to locate endpoints in the first place, i.e., security through obscurity. I think that's an invalid argument.
That's not necessarily security through obscurity. A client that just picks a random(*) address in the /64 and sits on it forever could be reasonably argued to be doing a form of security through obscurity. However, that's not the only potential use! A client that initiates each new outbound connection from a different IP address is doing something Really Good.

It may help to think of your Internet address plus port number as being just a single quantity in some senses. As it stands with IPv4, when you "see" packets from 12.34.56.78, you pretty much know there's a host or something interesting probably living there. You can then try to probe one of ~64K ports, or better yet, all of them, and you have a good chance of finding something of interest. If you have potentially 80 bits of space to probe (16 bits of ports on each of 64 bits of address), you're making a hell of a jump.

If you don't understand the value of such an increase in magnitude, I invite you to switch all your ssh keys to 56 bit.

... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net
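The address-plus-port arithmetic above, spelled out (nothing platform-specific, just the bit counts from the paragraph):

```python
# Search-space comparison for a scanner probing (address, port) pairs:
# one IPv4 host space vs. the host bits of a single IPv6 /64.
import math

v4_space = 2**32 * 2**16   # every v4 address x every port: 48 bits
v6_space = 2**64 * 2**16   # host bits of one /64 x every port: 80 bits

assert math.log2(v4_space) == 48
assert math.log2(v6_space) == 80

# 32 extra bits: each blind probe is ~4 billion times less likely to hit.
print(f"ratio: 2**{int(math.log2(v6_space // v4_space))}")
```

The 32-bit gap is the "hell of a jump" in question: the same scanner that could enumerate a v4 host's ports in minutes faces a search space four billion times larger per /64.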
On Jan 6, 2011, at 12:17 PM, Joe Greco wrote:
If you don't understand the value of such an increase in magnitude,
I can count as well as you can, I assure you.
I invite you to switch all your ssh keys to 56 bit.
The difference is that if someone compromises/brute-forces one of my ssh keys, he has something of value. OTOH, if he can find my host and send some packets to it, since I've done all the host OS/app/service BCPs, plus I'm enforcing policy via stateless ACLs in hardware-based routers/switches and tcpwrappers on my host, so what? I could care less. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
Generally speaking, security professionals prefer for there to be more roadblocks rather than fewer. That's why they call it layers of security; occasionally your belt may snap and you may find yourself reliant on the suspenders. The fact that you're confident that your belt is great is only relevant to yourself and any others who are similarly confident in their choice of belt. You start off with the assumption that the knowledge of the host address is not something of value; while I agree that it *shouldn't* be of value, the sad fact of the matter is that we've seen numerous examples of where it *is* of value. I'm starting off with the assumption that knowledge of the host address *might* be something of value. If it isn't, no harm done. If it is, and the address becomes virtually impossible to find, then we've just defeated an attack, and it's hard to see that as anything but positive. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Jan 6, 2011, at 12:54 PM, Joe Greco wrote:
Generally speaking, security professionals prefer for there to be more roadblocks rather than fewer.
The soi-disant security 'professionals' who espouse layering unnecessary multiple, inefficient, illogical, and iatrogenic roadblocks in preference to expending the time and effort to learn enough about *actual* security (in contrast to security theater) to Do Things Right The First Time, aren't worthy of the title and ought to be ignored, IMHO.
If it is, and the address becomes virtually impossible to find, then we've just defeated an attack, and it's hard to see that as anything but positive.
If we had some cheese, we could make a ham-and-cheese sandwich, if we had some ham. ;> We must face up to the reality that the endpoint *will be found*, irrespective of the relative sparseness or density of the addressing plan. It will be found via DNS, via narrowing the search scope via examining routing advertisements, via narrowing the search scope via perusing whois, via the attackers simply throwing more of their near-infinite scanning resources (i.e., bots) at these dramatically-reduced search scopes. So, the endpoint will be found, no attack will be prevented, and we end up a) wasting wide swathes of address space for no good reason whilst b) making the routing/switching infrastructure elements far more vulnerable to DoS by turning them into sinkholes. No positive benefits, two negative drawbacks. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
That's, simply put, a poor argument. And here's why. There are numerous parallels between physical and electronic security. Let's just concede that for a moment. You put up a screen door. I've got a knife. You put up a wood door. I've got steel-toed boots. You put up a metal door. I've got a crowbar. You put up a bank vault door. I (can find someone who can get) explosives. The thing is, it may not make a whole heck of a lot of sense to put a screen door on a bank's vault, or a vault door on your front screen porch. Even so, while you can increase the strength of a particular countermeasure, maybe it isn't smart to rely entirely on that one countermeasure, or even two or three countermeasures. A bank may have an armed guard, a silent alarm, video surveillance, bulletproof glass, dye packs in the tills, cash in a timelocked vault, and all sorts of other countermeasures to address specific areas of threat. Not all countermeasures are going to be effective against every threat, and there is no requirement that only one countermeasure be applied towards a given threat. Further, there's no guarantee that the countermeasures are going to be properly installed or appropriate to the task - which seems to be your objection to "soi-disant security 'professionals'" - but on the other hand, in many cases, they *are* properly installed and well considered. To say that "the endpoint *will be found*" is a truism, in the same way that a bank *will* be robbed. You're not trying to guarantee that it will never happen. You're trying to *deter* the bad guys. You want the bad guy to go across the street to the less-well-defended bank. You can't be sure that they'll do that. Someone who has it out for you and your bank will rob your bank (or end up in jail or dead or whatever). But you can scare off the guy who's just looking to score a few thousand in easy cash. Making it harder to scan a network *can* and *does* deter certain classes of attacks.
That it doesn't prevent every attack isn't a compelling proof that it doesn't prevent some, and I have to call what you said a poor argument for that reason. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Wed, Jan 5, 2011 at 10:51 PM, Joe Greco <jgreco@ns.sol.net> wrote:
On Jan 6, 2011, at 12:54 PM, Joe Greco wrote: ... To say that "the endpoint *will be found*" is a truism, in the same way that a bank *will* be robbed. You're not trying to guarantee that it will never happen. You're trying to *deter* the bad guys. You want the bad guy to go across the street to the less-well-defended bank. You can't be sure that they'll do that. Someone who has it out for you and your bank will rob your bank (or end up in jail or dead or whatever). But you can scare off the guy who's just looking to score a few thousand in easy cash.
Making it harder to scan a network *can* and *does* deter certain classes of attacks. That it doesn't prevent every attack isn't a compelling proof that it doesn't prevent some, and I have to call what you said a poor argument for that reason.
Hi Joe, I think what people are trying to say is that it doesn't matter whether or not your host is easily findable, if I can trivially take out your upstream router. With your upstream router out of commission, the findability of your host on the subnet really doesn't matter. Once the router is gone, so is your host, no matter how well hidden on the subnet it was. So, the push here is to prevent the trivial ability to take out the upstream routers, so that the host-level issues will still matter, and be worth discussing. Hope this helps clarify the reason for the overarching concern about the /64 subnet size. Thanks!! Matt
... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Jan 6, 2011, at 2:03 PM, Matthew Petach wrote:
I think what people are trying to say is that it doesn't matter whether or not your host is easily findable, if I can trivially take out your upstream router.
That's part of it - the other part is that the host will be found, irrespective of attempts to 'hide' it. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Jan 6, 2011, at 2:03 PM, Matthew Petach wrote:
I think what people are trying to say is that it doesn't matter whether or not your host is easily findable, if I can trivially take out your upstream router.
A good reason to see if there's a way to solve that (which there is, I'm sure).
That's part of it - the other part is that the host will be found, irrespective of attempts to 'hide' it.
Sorry, but I see this as not grasping a fundamental security concept. You're not trying for DHS/TSA-style all-threats-always-prevented threat elimination. How many times do we have to learn that this isn't a practical goal? You want to make things more difficult for an attacker while providing usability for authorized users. Making a host harder to find (or more specifically to address from remote) is a worthwhile goal. Learn from history. Ten years ago, we *knew* DNS was "weak" due to the ultimate reliance on the transaction ID. There weren't enough bits there to protect a DNS server against certain types of attacks, but those attacks were deemed impractical at the time; time passes, CPUs got faster, upstream connections got faster, and suddenly some guy "discovers" that he can get a DNS server to do bad things if he floods it. So our best current practices now have us using more bits, in the form of random source ports, to help out there. Even that's not a comprehensive fix - definitely won't be in another 20 years, when bandwidth, CPU, and pps rates have all seen a factor of 10000 increase again - but it's helpful for the time being. Things like RFC 4941 take that a lot further, and provide enough bits to make both range scanning and scanning via learned addresses less useful techniques. The fact that you might be able to find a host somehow anyways doesn't lessen the value of making it harder for an attacker to find that host to begin with. This is basic security, whether or not you approve of it. You're trying to make it harder for bad guys. There are lots of security techniques that I don't like, too, or may disapprove of for one reason or another. NAT anyone? :-) ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." 
- Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
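The RFC 4941 mechanism Joe refers to can be illustrated with a simplified sketch. The real algorithm derives each temporary interface identifier from an MD5 history chain; this toy version just draws random bits and clears the universal/local bit, as the RFC requires for locally generated identifiers:

```python
import secrets

def temporary_iid() -> bytes:
    """Simplified RFC 4941-style temporary interface identifier:
    64 random bits with the universal/local bit (0x02 in the first
    octet) forced to zero, marking the identifier as local."""
    iid = bytearray(secrets.token_bytes(8))
    iid[0] &= 0xFD  # clear the u/l bit
    return bytes(iid)

def temporary_address(prefix: str) -> str:
    """Join a /64 prefix (e.g. '2001:db8:0:1::') with a fresh
    temporary interface identifier."""
    iid = temporary_iid()
    groups = [f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2)]
    return prefix.rstrip(":") + ":" + ":".join(groups)

print(temporary_address("2001:db8:0:1::"))
```

A host rotating through such addresses defeats both brute-force range scanning and the reuse of previously learned addresses, which is exactly the property being debated; it is also what complicates the traceback Roland objects to below.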
On Jan 6, 2011, at 9:29 PM, Joe Greco wrote:
Sorry, but I see this as not grasping a fundamental security concept.
I see it as avoiding a common security misconception.
Making a host harder to find (or more specifically to address from remote) is a worthwhile goal.
As I've stated repeatedly, I don't think that sparse addressing makes hosts harder to find, because hinted scanning will reveal them.
Things like 4941 take that a lot further, and provide enough bits to make both range scanning and scanning via learned addresses less useful techniques.
I believe RFC 4941 to be positively evil; the harm it will do in terms of complicating traceback and attribution far outweighs any supposed benefits (which are questionable, anyway, IMHO).
This is basic security, whether or not you approve of it. You're trying to make it harder for bad guys.
My view is that it's basic security theater, which a) makes nothing harder for the bad guys, and b) has unpleasant side-effects which have the net effect of degrading one's overall security posture. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On 1/6/2011 6:23 PM, Dobbins, Roland wrote:
On Jan 6, 2011, at 9:29 PM, Joe Greco wrote:
Sorry, but I see this as not grasping a fundamental security concept.
I see it as avoiding a common security misconception.
I find that the security "Layers" advocates tend not to look at the differing value of each of those layers. Going back to the physical door analogy, it's like saying that a bank vault protected by a bank vault door is less secure than a vault with the bank vault door AND a screen door. -- Dave
On Friday, January 07, 2011 09:25:59 am David Sparro wrote:
I find that the security "Layers" advocates tend not to look at the differing value of each of those layers.
Different layers very much have different values, and, yes, this is often glossed over.
Going back to the physical door analogy, it's like saying that a bank vault protected by a bank vault door is less secure than a vault with the bank vault door AND a screen door.
More analogous would be the safe with glass relockers and a vial of tear gas behind the ideal drill point. Yes, those do exist, and, should you want to see a photo of such a vial, I can either provide one (have to take the photo with the safe door open next time I'm on that site, which may be a while with all this snow and ice on the ground) or you can find pics through google. Even physical locks have layered security principles. Think Medeco locks with chisel-pointed pins and the associated sidebar in the center, or ASSA's Twin double-stack pin technology, or the use of spool pins in locks, or Schlage's Primus system (also sidebar driven) or anti-drill armor in front of the pin stack (to prevent drilling the shear line), etc. The use of layers in the physical security realm is a proven concept, and the synergy of the layers has been shown effective over time. Not totally secure, of course, but as the number of layers increases the security becomes better and better.
On Mon, Jan 10, 2011 at 02:52:56PM -0500, Lamar Owen wrote:
On Friday, January 07, 2011 09:25:59 am David Sparro wrote:
I find that the security "Layers" advocates tend not to look at the differing value of each of those layers.
Different layers very much have different values, and, yes, this is often glossed over.
Going back to the physical door analogy, it's like saying that a bank vault protected by a bank vault door is less secure than a vault with the bank vault door AND a screen door.
More analogous would be the safe with glass relockers and a vial of tear gas behind the ideal drill point. Yes, those do exist, and, should you want to see a photo of such a vial, I can either provide one (have to take the photo with the safe door open next time I'm on that site, which may be a while with all this snow and ice on the ground) or you can find pics through google.
Even physical locks have layered security principles. Think Medeco locks with chisel-pointed pins and the associated sidebar in the center, or ASSA's Twin double-stack pin technology, or the use of spool pins in locks, or Schlage's Primus system (also sidebar driven) or anti-drill armor in front of the pin stack (to prevent drilling the shear line), etc. The use of layers in the physical security realm is a proven concept, and the synergy of the layers has been shown effective over time. Not totally secure, of course, but as the number of layers increases the security becomes better and better.
My father used to tell me that "Locks keep the honest people out." He was right; the clever non-honest are the ones we have to deal with at that level. Computers are so great a force multiplier that we are having to do the same sorts of things to defend against assaults from them. -- Mike Andrews, W5EGO mikea@mikea.ath.cx Tired old sysadmin
On Jan 10, 2011, at 11:52 AM, Lamar Owen wrote:
On Friday, January 07, 2011 09:25:59 am David Sparro wrote:
I find that the security "Layers" advocates tend not to look at the differing value of each of those layers.
Different layers very much have different values, and, yes, this is often glossed over.
Going back to the physical door analogy, it's like saying that a bank vault protected by a bank vault door is less secure than a vault with the bank vault door AND a screen door.
More analogous would be the safe with glass relockers and a vial of tear gas behind the ideal drill point. Yes, those do exist, and, should you want to see a photo of such a vial, I can either provide one (have to take the photo with the safe door open next time I'm on that site, which may be a while with all this snow and ice on the ground) or you can find pics through google.
Even physical locks have layered security principles. Think Medeco locks with chisel-pointed pins and the associated sidebar in the center, or ASSA's Twin double-stack pin technology, or the use of spool pins in locks, or Schlage's Primus system (also sidebar driven) or anti-drill armor in front of the pin stack (to prevent drilling the shear line), etc. The use of layers in the physical security realm is a proven concept, and the synergy of the layers has been shown effective over time. Not totally secure, of course, but as the number of layers increases the security becomes better and better.
Nonetheless, NAT remains an opaque screen door at best. If the bad guy is behind the door, it helps hide him. If the bad guy is outside the door, the time it takes for his knife to cut through it is so small as to be meaningless. Owen
On 1/10/2011 6:55 PM, Owen DeLong wrote:
Nonetheless, NAT remains an opaque screen door at best.
If the bad guy is behind the door, it helps hide him.
If the bad guy is outside the door, the time it takes for his knife to cut through it is so small as to be meaningless.
For a "server" expected to be open to anyone, anywhere, anytime... yes. Otherwise no. NAT overload (many to 1), and 1-to-1 NAT with some timeout value both serve to disconnect the potential targets from the network, absent any static NAT or port mapping (for "servers"). RFC-1918 behind NAT insures this (notwithstanding pivot attacks). It is a decreasing risk, given the typical user initiated compromise of today (click here to infect your computer), but a non-zero one. The whole IPv6 / no-NAT philosophy of "always connected and always directly addressable" eliminates this layer. Jeff
On Mon, 10 Jan 2011 19:22:46 EST, Jeff Kell said:
It is a decreasing risk, given the typical user initiated compromise of today (click here to infect your computer), but a non-zero one.
The whole IPv6 / no-NAT philosophy of "always connected and always directly addressable" eliminates this layer.
I'd say on the whole, it's a net gain - the added ease of tracking down the click-here-to-infect machines that are no longer behind a NAT outweighs the little added security the NAT adds (above and beyond the statefulness that both NAT and a good firewall add).
On 1/10/2011 6:33 PM, Valdis.Kletnieks@vt.edu wrote:
I'd say on the whole, it's a net gain - the added ease of tracking down the click-here-to-infect machines that are no longer behind a NAT outweighs the little added security the NAT adds (above and beyond the statefulness that both NAT and a good firewall add).
Really? Which machine was using the privacy extension address on the /64? I don't see how it's made it any easier to track. In some ways, on provider edges that don't support DHCPv6 IA_TA and rely on SLAAC, it's one extra nightmare. Jack
On Jan 10, 2011, at 8:22 PM, Jack Bates wrote:
On 1/10/2011 6:33 PM, Valdis.Kletnieks@vt.edu wrote:
I'd say on the whole, it's a net gain - the added ease of tracking down the click-here-to-infect machines that are no longer behind a NAT outweighs the little added security the NAT adds (above and beyond the statefulness that both NAT and a good firewall add).
Really? Which machine was using the privacy extension address on the /64? I don't see how it's made it any easier to track. In some ways, on provider edges that don't support DHCPv6 IA_TA and rely on SLAAC, it's one extra nightmare.
Jack
At least I can tell which segment the pwn3d machine is on. As it currently stands, I'm lucky if I can tell which state the pwn3d machine inside $ENTERPRISE is located in. Sometimes, you can't even tell which country. Owen
On Mon, 10 Jan 2011 22:22:32 CST, Jack Bates said:
Really? Which machine was using the privacy extension address on the /64? I don't see how it's made it any easier to track. In some ways, on provider edges that don't support DHCPv6 IA_TA and rely on SLAAC, it's one extra nightmare.
The same exact way you currently track down an IP address that some machine has started using without bothering to ask your DHCP server for an allocation, of course. Remember - the privacy extension was so that somebody far away on the Internet couldn't easily correlate "all these hits on websites were from the same box". It gives a user approximately *zero* protection against their own ISP dumping the ARP tables off every switch every 5 minutes and keeping the data handy in case they have to track a specific MAC or IP address down. And if you know how to do that sort of thing for rogue/unexpected stuff on IPv4, doing it for IPv6 is trivial.
On 1/11/2011 10:57 AM, Valdis.Kletnieks@vt.edu wrote:
The same exact way you currently track down an IP address that some machine has started using without bothering to ask your DHCP server for an allocation, of course.
But it's no easier. Especially when you hit the customer equipment. NAT may be gone there, but knowing which computer it is will likely be impossible (as it won't be standard policy for the customer to grab arp tables).
Remember - the privacy extension was so that somebody far away on the Internet couldn't easily correlate "all these hits on websites were from the same box". It gives a user approximately *zero* protection against their own ISP dumping the ARP tables off every switch every 5 minutes and keeping the data handy in case they have to track a specific MAC or IP address down.
I dislike this method, though. It works, but I much prefer to correlate with radius accounting logs backended on a DHCP server. Sadly, even in v4, implementations are not always available. Of course, I don't run NAT at the provider edge, but customers often do, and while I will be able to track the customer, knowing which machine will be just as impossible as it is with NAT. Jack
On Jan 10, 2011, at 4:22 PM, Jeff Kell wrote:
On 1/10/2011 6:55 PM, Owen DeLong wrote:
Nonetheless, NAT remains an opaque screen door at best.
If the bad guy is behind the door, it helps hide him.
If the bad guy is outside the door, the time it takes for his knife to cut through it is so small as to be meaningless.
For a "server" expected to be open to anyone, anywhere, anytime... yes. Otherwise no.
Uh, yes. For a server, it's a transparent hole in the wall.
NAT overload (many to 1), and 1-to-1 NAT with some timeout value both serve to disconnect the potential targets from the network, absent any static NAT or port mapping (for "servers").
No, they don't, really. Once the host becomes compromised via other means, it readily opens whatever holes in the NAT are necessary to permit the undesirable traffic in. Additionally, even an un-compromised host may open the needed holes in the NAT through processes like 6to4 and Teredo.
RFC-1918 behind NAT insures this (notwithstanding pivot attacks).
Stateful inspection without address mangling does just as much to insure this as NAT. You, like so many others, are confusing the security benefits of stateful inspection with the misapplication of the term NAT.
It is a decreasing risk, given the typical user initiated compromise of today (click here to infect your computer), but a non-zero one.
The whole IPv6 / no-NAT philosophy of "always connected and always directly addressable" eliminates this layer.
No, it doesn't. A good stateful firewall in front of an IPv6 host without NAT does every bit as much to protect it as the NAT box in your RFC-1918 scenario can. The problem is that everyone assumes directly addressable means directly reachable because they've become so ingrained in this world of NAT that they forget that it is possible to implement effective stateful security without it. The big difference between stateful inspection without NAT and with overloaded NAT is that in the overloaded NAT case, it will help hide the bad guy from the audit trails whereas the non-NAT approach does not do so. Owen
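Owen's distinction between stateful inspection and address mangling can be made concrete with a minimal sketch (the flow-tuple model below is a deliberate simplification; real firewalls also track TCP state, timeouts, and related flows):

```python
from typing import NamedTuple

class Flow(NamedTuple):
    src: str      # inside host address - globally routable, no rewriting
    sport: int
    dst: str
    dport: int

class StatefulFilter:
    """Stateful inspection without NAT: outbound flows create state,
    and inbound packets are admitted only if they match the reverse
    of an existing flow. Addresses pass through unmodified, so audit
    trails still show the real inside host."""

    def __init__(self) -> None:
        self.state: set[Flow] = set()

    def outbound(self, f: Flow) -> None:
        self.state.add(f)  # record the flow; nothing is translated

    def inbound_allowed(self, f: Flow) -> bool:
        # A reply matches if the inside host initiated the mirror flow.
        return Flow(f.dst, f.dport, f.src, f.sport) in self.state
```

The "directly addressable but not directly reachable" property falls out naturally: unsolicited inbound packets find no matching state and are dropped, exactly as with overloaded NAT, minus the address mangling.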
On 1/5/11 11:03 PM, Matthew Petach wrote:
On Wed, Jan 5, 2011 at 10:51 PM, Joe Greco <jgreco@ns.sol.net> wrote: Hi Joe,
I think what people are trying to say is that it doesn't matter whether or not your host is easily findable, if I can trivially take out your upstream router. With your upstream router out of commission, the findability of your host on the subnet really doesn't matter. Once the router is gone, so is your host, no matter how well hidden on the subnet it was.
So, the push here is to prevent the trivial ability to take out the upstream routers, so that the host-level issues will still matter, and be worth discussing.
icmp6 rate limiting both receipt and origination is not rocket science. The attack that's being described wasn't exactly dreamed up last week, is as observed not unique to ipv6, and can be mitigated. I'd encourage you to go look at RFC 3756 and RFC 4443, and probably elsewhere, including the manual for your router OS of choice and possibly your account rep if you don't get the kind of satisfaction you expect. We can probably do better still...
Hope this helps clarify the reason for the overarching concern about the /64 subnet size.
You can still have this problem when you assign a bunch of /112s; how many neighbor unreachable entries per interface can your FIB hold?
Thanks!!
Matt
... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Jan 6, 2011, at 2:42 PM, Joel Jaeggli wrote:
icmp6 rate limiting both receipt and origination is not rocket science.
But it's *considerably* more complex and has far more potential implications than ICMP rate-limiting in IPv4 (which in and of itself is more complex and has more implications than a lot of folks realize, present company excepted). ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Thu, Jan 6, 2011 at 2:42 AM, Joel Jaeggli <joelja@bogus.com> wrote:
icmp6 rate limiting both receipt and origination is not rocket science. The attack that's being described wasn't exactly dreamed up last week, is as observed not unique to ipv6, and can be mitigated.
That does not solve the problem. Your response, like most on this thread, clearly indicates that you do not understand the underlying issue, or how traffic is actually forwarded. Neither IPv6 nor IPv4 packets are simply forwarded onto the Ethernet, which is why the ARP/NDP table resource is required; a mapping from layer-3 to layer-2 address is needed. If the table resource for these entries is exhausted, no new mappings can be learned, and bad things happen. Either hosts on the specific interface, or the entire box, can no longer exchange traffic through the router. If an artificial rate-limit on discovery traffic is reached, discovery of mappings will also be impeded, meaning the denial-of-service condition exists and persists until the attack ceases. This may also affect either just that interface, or all interfaces on the router, depending on its failure mode. Rate-limiting discovery traffic still breaks the attached LANs. How badly it breaks them is implementation-specific. It does avoid using up all the router's CPU, but that doesn't help the hosts which can't exchange traffic. Again, depending on the router implementation, the fraction of hosts which cannot exchange traffic may reach 100%, and in effect, the router might as well be down.
You can still have this problem when you assign a bunch of /112s; how many neighbor unreachable entries per interface can your FIB hold?
You are correct, but the device can hold a significant number of entries compared to the size of a /112 subnet, just like it can hold a significant number of v4 ARP entries compared to a v4 /22 subnet. The difference is, ARP/NDP mappings for one /64 subnet can fill all the TCAM resources of any router that will ever exist. This is why more knobs are needed, and until we have that, the /64 approach is fundamentally broken. Again, until this problem is better-understood, it will not be solved. Right now, there are many vulnerable networks; and in some platforms running a dual-stack configuration, filling up the v6 NDP table will also impact v4 ARP. This means the problem is not confined to a cute beta-test that your customers are just starting to ask about; it will also take out your production v4 network. If you are running a dual-stack network with /64 LANs, you had better start planning. It's not just a problem on the horizon, it's a problem right now for many early-adopters. -- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
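The scale mismatch Jeff describes is easy to quantify. The neighbor-table capacity below is a made-up round number for illustration, not a figure for any particular platform:

```python
# Illustrative neighbor-table capacity vs. subnet sizes. A scanner
# only has to send one packet per address to plant an incomplete
# neighbor-discovery entry on the router.

ndp_table_capacity = 128_000            # hypothetical ND/ARP table size

hosts_in_v4_slash22 = 2**(32 - 22)      # 1,024 addresses
hosts_in_v6_slash112 = 2**(128 - 112)   # 65,536 addresses
hosts_in_v6_slash64 = 2**64             # ~1.8e19 addresses

for name, n in [("v4 /22", hosts_in_v4_slash22),
                ("v6 /112", hosts_in_v6_slash112),
                ("v6 /64", hosts_in_v6_slash64)]:
    print(f"{name:8} {n:,} addresses "
          f"({n / ndp_table_capacity:,.2f}x table capacity)")
```

A /22 or even a /112 fits within (or near) the table; a /64 exceeds any table that will ever be built by fourteen orders of magnitude, which is why no amount of hardware growth fixes this without policing knobs.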
On 1/6/11 12:24 AM, Jeff Wheeler wrote:
On Thu, Jan 6, 2011 at 2:42 AM, Joel Jaeggli <joelja@bogus.com> wrote:
icmp6 rate limiting, both receipt and origination, is not rocket science. The attack that's being described wasn't exactly dreamed up last week, is, as observed, not unique to ipv6, and can be mitigated.
That does not solve the problem. Your response, like most on this thread, clearly indicates that you do not understand the underlying issue, or how traffic is actually forwarded. Neither IPv6 nor IPv4 packets are simply forwarded onto the Ethernet, which is why the ARP/NDP table resource is required; a mapping from layer-3 to layer-2 address is needed.
You seem hell bent on telling me what I do and don't know. I know how they're created.
If the table resource for these entries is exhausted, no new mappings can be learned,
Which at a minimum is why you want to police the number of nd messages that the device sends, so that unreachable entries do not simply fill up the nd cache and new mappings can in fact be learned because there are free entries. You need to do this on a per-subnet basis rather than globally; as in ipv4 policing, global limits will kill the box's ability to learn new entries everywhere.
and bad things happen. Either hosts on the specific interface, or the entire box, can no longer exchange traffic through the router. If an artificial rate-limit on discovery traffic is reached, discovery of mappings will also be impeded, meaning the denial-of-service condition exists and persists until the attack ceases. This may also affect either just that interface, or all interfaces on the router, depending on its failure mode.
Rate-limiting discovery traffic still breaks the attached LANs. How badly it breaks them is implementation-specific. It does avoid using up all the router's CPU, but that doesn't help the hosts which can't exchange traffic. Again, depending on the router implementation, the fraction of hosts which cannot exchange traffic may reach 100%, and in effect, the router might as well be down.
You can still have this problem when you assign a bunch of /112s. How many neighbor unreachable entries per interface can your FIB hold?
You are correct, but the device can hold a significant number of entries compared to the size of a /112 subnet,
a /112 is 16 bits. just like it can hold a
significant number of v4 ARP entries compared to a v4 /22 subnet.
I've got this lovely ex8208 here; a quick glance at the specs says 100k arp entries per linecard. That requires some sensible configuration from the outset, otherwise shooting yourself in the foot with ipv4 is possible. I've done it both deliberately and by accident, and I have better rules now.
The difference is, ARP/NDP mappings for one /64 subnet can fill all the TCAM resources of any router that will ever exist.
the arp mappings for an ipv4 /15 will fill up the device today.
This is why more knobs are needed, and until we have that, the /64 approach is fundamentally broken.
As with anything sent into the control plane, it needs to be policed. There are sensible ways to do that today, and they can improve.
Again, until this problem is better-understood, it will not be solved. Right now, there are many vulnerable networks; and in some platforms running a dual-stack configuration, filling up the v6 NDP table will also impact v4 ARP. This means the problem is not confined to a cute beta-test that your customers are just starting to ask about; it will also take out your production v4 network. If you are running a dual-stack network with /64 LANs, you had better start planning. It's not just a problem on the horizon, it's a problem right now for many early-adopters.
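The figures traded above can be checked with quick arithmetic. The 100k-entries-per-linecard capacity is Joel's cited spec; the rest is simple math comparing it against IPv4 and IPv6 subnet sizes:

```python
# Quick arithmetic on the figures quoted above: ~100k ARP/ND entries per
# linecard versus the sizes of various subnets.

table = 100_000                # per-linecard ARP capacity cited above

v4_15 = 2 ** (32 - 15)         # 131,072 addresses: one IPv4 /15
v4_24 = 2 ** (32 - 24)         # 256 addresses: one IPv4 /24
v6_64 = 2 ** (128 - 64)        # one IPv6 /64

print(f"a fully-ARPed /15 overflows the table: {v4_15 > table}")  # True
print(f"full /24s that fit: {table // v4_24}")                    # 390
print(f"share of one /64 that fits: {table / v6_64:.1e}")
```

So both sides of the argument are numerically right: an IPv4 /15 can indeed fill the device today, but hundreds of fully-populated /24s fit comfortably, while the same table holds an immeasurably small fraction of a single /64.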
On Thu, Jan 6, 2011 at 4:32 AM, Joel Jaeggli <joelja@bogus.com> wrote:
Which at a minimum is why you want to police the number of nd messages that the device sends, so that unreachable entries do not simply fill up the nd cache, such that new mappings in fact can be learned because there
Your solution is to break the router (or subnet) with a policer, instead of breaking it with a full table. That is not better; both result in a broken subnet or router. If NDP requires an NDCache with "incomplete" entries to learn new adjacencies, then preventing it from filling up will ... prevent it from learning new adjacencies. Do you see how this is not a solution?
are free entries. You need to do this on a per-subnet basis rather than globally; as in ipv4 policing, global limits will kill the box's ability to learn new entries everywhere.
Yes, per-subnet/interface limits are absolutely needed, to prevent the entire box from being killed when one subnet/interface becomes impaired. That won't stop attackers from breaking your network, both because one subnet that presumably has production services on it is broken, and because, presumably, the attacker can identify other subnets on the router. Even if not, the problem remains that ... some subnets/interfaces are broken. On Thu, Jan 6, 2011 at 5:20 AM, Owen DeLong <owen@delong.com> wrote:
You must also realize that the stateful firewall has the same problems Uh, not exactly...
Of course it does. The stateful firewall must either 1) be vulnerable to the same form of NDP attack; or 2) have a list of allocated v6 addresses on the LAN. The reason is simple; a "stateful firewall" is no more able to store a 2**64 table than is a "router." Calling it something different doesn't change the math. If you choose to solve the problem by disabling NDP or allowing NS only for a list of "valid" addresses on the subnet, this can be done by a stateless router just like on a stateful firewall.
Uh, no it doesn't. It just needs a list of the hosts which are permitted to receive inbound connections from the outside. That's the whole
This solution falls apart as soon as there is a compromised host on the LAN, in which case the firewall (or router) NDP table can again be filled completely by that compromised/malicious host. In addition, the "stateful firewall," by virtue of having connection state, does not solve the inbound NDP attack issue. The list of hosts which can result in an NDP NS is what causes this, and such a list may be present in a stateless router; but in both cases, it needs to be configured. "Stateful firewall" is unfortunately not magic dust that you can sprinkle on any network problem.
Except that routers don't (usually) have the ability to do dynamic outbound filtration which means that you have the scaling problem you've described of having to list every host on the net. If the router does have this ability, then, the router is, by definition, a stateful firewall.
As I state above, this capability does not "fix" the problem because the "stateful firewall" can just as easily be subject to DoS. You must have a list of valid v6 hosts on the subnet for your solution to work.
There are risks with sparse subnets that have been inadequately addressed for some of their failure modes at this time. I wouldn't go so far as saying they are a plan to fail. In most cases, most networks shouldn't be susceptible to an externally initiated ND attack in the first place because those should mostly be blocked at your border except for hosts that provide services to the larger internet.
As you have pointed out, without state information, you can't block this external attack. Even if you have a "stateful firewall," that firewall must itself have a solution for the internally-originated NDP attack. While the problems are slightly different and the internally-originated attack is less likely, both must be solved to ensure a reliable v6 network. The "stateful firewall" only solves half the problem and remains entirely vulnerable to the other half. On Thu, Jan 6, 2011 at 5:29 AM, Owen DeLong <owen@delong.com> wrote:
But, Jeff, if the router has a bunch of /24s attached to it and you scan them all, the problem is much larger than 250 arp entries.
That depends on how many is "a bunch" and how large is the ARP table on the device. A pizza box layer-3 switch, as I have mentioned, easily holds several thousand entries, modern units upwards of 10,000. That's enough room for several v4 /24 networks. No device has enough for one v6 /64 network. In addition, most of the IPs on your typical /24 networks will be in use routinely (in a hosting environment, anyway) which means that a scan really doesn't generate many new WHO-HAS and incomplete entries, because most layer-2 mappings are already known. Those that aren't fit within the cheap router's resource limitations somewhat easily. On Thu, Jan 6, 2011 at 5:41 AM, Phil Regnauld <regnauld@nsrc.org> wrote:
Additionally, I believe the size of typical recommended IPv6 space will probably discourage idle scanning, though this may change as the resources available increase, as Joe G. pointed out. If it does not, we'll have to address it if the existing mitigation techniques aren't sufficient.
It may indeed reduce "idle" or random scanning by agents such as worms. However, it will make intentional, DoS-oriented scanning quite popular. -- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
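The "no more able to store a 2**64 table" point above reduces to simple memory arithmetic. As a rough illustration (the per-entry size is an assumption; real neighbor/state entries vary by platform), no device of any description can hold per-address state for a /64:

```python
# Back-of-envelope memory math for the 2**64 argument above.
# The per-entry size is an assumption; real entries vary by platform.

ENTRY_BYTES = 48                    # assumed bytes per neighbor/state entry
addresses = 2 ** 64                 # one /64 subnet

total = addresses * ENTRY_BYTES
print(f"{total / 10**18:,.0f} exabytes of state for one full /64")
```

Calling the box a "stateful firewall" instead of a "router" changes nothing about this number; only a configured list of valid hosts avoids it.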
On Thu, Jan 6, 2011 at 5:54 AM, Jeff Wheeler <jsw@inconcepts.biz> wrote:
On Thu, Jan 6, 2011 at 5:20 AM, Owen DeLong <owen@delong.com> wrote:
You must also realize that the stateful firewall has the same problems Uh, not exactly...
Of course it does. The stateful firewall must either 1) be vulnerable to the same form of NDP attack; or 2) have a list of allocated v6 addresses on the LAN. The reason is simple; a "stateful firewall" is no more able to store a 2**64 table than is a "router." Calling it something different doesn't change the math. If you choose to solve the problem by disabling NDP or allowing NS only for a list of "valid" addresses on the subnet, this can be done by a stateless router just like on a stateful firewall.
Uh, no it doesn't. It just needs a list of the hosts which are permitted to receive inbound connections from the outside. That's the whole
This solution falls apart as soon as there is a compromised host on the LAN, in which case the firewall (or router) NDP table can again be filled completely by that compromised/malicious host. In addition, the "stateful firewall," by virtue of having connection state, does not solve the inbound NDP attack issue. The list of hosts which can result in an NDP NS is what causes this, and such a list may be present in a stateless router; but in both cases, it needs to be configured.
Err, almost everything falls apart once you allow a compromised/malicious host on the local LAN. If you have circumstances where this may happen on anything like a regular basis, you really need all kinds of control/monitoring of traffic that go far beyond any local NDP overflow issues. Bill Bogstad
In article <AANLkTin10qow6Tt+YMfX8OienxixCqH57movhRj3uvSZ@mail.gmail.com> you write:
On Thu, Jan 6, 2011 at 4:32 AM, Joel Jaeggli <joelja@bogus.com> wrote:
Which at a minimum is why you want to police the number of nd messages that the device sends, so that unreachable entries do not simply fill up the nd cache, such that new mappings in fact can be learned because there
Your solution is to break the router (or subnet) with a policer, instead of breaking it with a full table. That is not better; both result in a broken subnet or router. If NDP requires an NDCache with "incomplete" entries to learn new adjacencies, then preventing it from filling up will ... prevent it from learning new adjacencies. Do you see how this is not a solution?
If all nodes implemented RFC4620 (IPv6 Node Information Queries), then you could ratelimit ND queries and, when ratelimiting, just regularly (say every few seconds) refresh the neighbor list with a multicast NI Node Addresses Query. In fact a router can still do this; it's just the nodes that do not implement RFC4620 that suffer the most, and perhaps that will serve as an incentive to get RFC4620 implemented on those nodes. Mike.
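The policer trade-off being argued in this exchange can be put in rough numbers. Under a simplified model (an illustration, not any vendor's actual admission behavior), attack and legitimate solicitations compete for the same per-interface budget, so the legitimate success rate collapses as the attack rate grows:

```python
# Simplified model of a per-interface ND resolution policer under attack.
# Assumes requests are admitted uniformly at random up to the limit; this
# is an illustration, not a description of any vendor's behavior.

def legit_success_rate(limit_pps, attack_pps, legit_pps):
    total = attack_pps + legit_pps
    if total <= limit_pps:
        return 1.0                      # no contention: everyone resolves
    return limit_pps / total            # legit traffic gets its pro-rata share

# 100 NS/sec budget, 50,000 pps attack, 20 legitimate resolutions/sec:
print(f"{legit_success_rate(100, 50_000, 20):.2%}")   # ~0.20%
```

This is the crux of Jeff's objection: the policer protects the router's CPU, but under attack the attached LAN still cannot reliably learn new mappings.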
On Wed, Jan 5, 2011 at 10:51 PM, Joe Greco <jgreco@ns.sol.net> wrote:
On Jan 6, 2011, at 12:54 PM, Joe Greco wrote: ... To say that "the endpoint *will be found*" is a truism, in the same way that a bank *will* be robbed. You're not trying to guarantee that it will never happen. You're trying to *deter* the bad guys. You want the bad guy to go to the less-well-defended bank across the street. You can't be sure that they'll do that. Someone who has it out for you and your bank will rob your bank (or end up in jail or dead or whatever). But you can scare off the guy who's just looking to score a few thousand in easy cash.
Making it harder to scan a network *can* and *does* deter certain classes of attacks. That it doesn't prevent every attack isn't a compelling proof that it doesn't prevent some, and I have to call what you said a poor argument for that reason.
Hi Joe,
I think what people are trying to say is that it doesn't matter whether or not your host is easily findable, if I can trivially take out your upstream router. With your upstream router out of commission, the findability of your host on the subnet really doesn't matter. Once the router is gone, so is your host, no matter how well hidden on the subnet it was.
So, the push here is to prevent the trivial ability to take out the upstream routers, so that the host-level issues will still matter, and be worth discussing.
Hope this helps clarify the reason for the overarching concern about the /64 subnet size.
I completely understand that issue. However, I feel that this is not a hopeless quagmire. We've frequently run into major problems in IP engineering in the past, and have coped with them with varying degrees of success (smurf and CIDR pop to mind). We've also shown that the underlying protocols and technology are open to tinkering where necessary; consider 802.3az.

I basically agree that there may be some issues with the current IPv6 NDP (etc) system. We actually have issues with the current IPv4 ARP system, too (think spoofing etc). However, I think we might be looking at this the wrong way. Instead of vendor knobs to change the IPv6 protocol, which may interfere with both ethernet *and* v6, it seems to me that maybe the problem can be solved just for the ethernet portions of the problem.

For example, while there might be a /64's worth of address space on an interface, I'm *pretty* sure that the design goal isn't actually to support all that. Further, a router's need to talk to that network will be isolated to that interface. I wouldn't be too shocked to see the NDP protocol morph a little to allow routers to more easily maintain a neighbor list in local (per-ifc) silicon, rather than in expensive router TCAM, since the best use of TCAM isn't ARP^WNDP anyways.

The fact of the matter is that silicon is cheap and getting cheaper. We will see ethernet switches allowing the learning of MAC info for NDP purposes, just as we currently have ethernet switches policing MACs for IPv4 sanity purposes. Routers will probably need to do some policing of NDP rates, but also we may see better silicon to help with ethernet.

There may be ways to fix the *ethernet* /64 issues in IPv6. We could be talking constructively about that. Instead, I'm seeing what seems to me to be dislike of /64s in general. I don't really care to get into arguments about that; that ship sailed long ago, and those who missed the boat fighting it at the time aren't going to get any joy here. ...
JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Jan 6, 2011, at 1:51 PM, Joe Greco wrote:
There are numerous parallels between physical and electronic security. Let's just concede that for a moment.
I can't, and here's why:

1. In the physical world, attackers run a substantial risk of being caught, and of tangible, severe penalties if that eventuality comes to pass; in the online world, the risk of being caught is nil.

2. In the physical world, attackers have a limited number and variety of resources they can bring to bear; in the online world, the attackers have near-infinite resources, for all practical purposes.

3. In the physical world, the attackers generally possess neither the ability nor the desire to bring the whole neighborhood crashing down around the ears of the defenders; in the online world, they almost always have the ability, and often the desire, to do just that.
Making it harder to scan a network *can* and *does* deter certain classes of attacks.
But as I've tried to make clear, a) I don't believe that sparse addressing does in fact make it harder to scan the network, due to hinted scanning via DNS/routing/whois/ND/multicast, b) I believe that pushing the attackers towards hinted scanning will have severe second-order deleterious effects on DNS/network infrastructure/whois, resulting in an overall loss in terms of security posture, and c) I don't believe that attackers will cease pseudo-randomized scanning, and d) I believe that in fact they will throw vastly more resources at both hinted and pseudo-randomized scanning, that they have near-infinite resources at their disposal (with an ever-expanding pool of potential resources to harness), and that the resultant increase in scanning activity will also have severely deleterious second-order effects on the security posture of the Internet as a whole. In short, I'm starting from a substantially different, far more pessimistic set of base premises, and therefore draw a far more negative set of resulting inferences. I don't believe the sky is falling; I believe it's already fallen, and that we're just now starting to come to grips with some of the ramifications of its fall. In my view, an IPv6 Internet is considerably less secure, and inherently less securable, than the present horribly insecure and barely securable IPv4 Internet; furthermore, I believe that many of the supposed 'security' measures being touted for IPv6 are at best placebos, and at worst are iatrogenic in nature. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Thu, 06 Jan 2011 07:50:17 GMT, "Dobbins, Roland" said:
In my view, an IPv6 Internet is considerably less secure, and inherently less securable, than the present horribly insecure and barely securable IPv4 Internet;
Playing devil's advocate for a moment... Even if an IPv6 network is 10 times as insecure as a similarly configured IPv4 network, they are both as dust motes in a tornado given the incredibly insecure state of most endpoints on the network. Last I looked, there's a lot less scanning of subnets looking for probably-firewalled-by-default-anyhow systems, because it's just so much easier to whack the systems in a drive-by attack when the system visits a compromised web page... And the "ZOMG they can overflow the ARP/ND/whatever table" is a total red herring - you know damned well that if a script kiddie with a 10K node botnet wants to hose down your network, you're going to be looking at a DDoS, and it really doesn't matter whether it's SYN packets, or ND traffic, or forged ICMP echo-reply mobygrams.
On 1/6/2011 10:28 AM, Valdis.Kletnieks@vt.edu wrote:
And the "ZOMG they can overflow the ARP/ND/whatever table" is a total red herring - you know damned well that if a script kiddie with a 10K node botnet wants to hose down your network, you're going to be looking at a DDoS, and it really doesn't matter whether it's SYN packets, or ND traffic, or forged ICMP echo-reply mobygrams.
My personal concern is not the intentional DDoS, but the side effects of unintentional idiocy. Nachi was nicer than Blaster to the host, but it unintentionally DDoS'd many networks that couldn't handle the load. How many morons will scan a /64 out of curiosity? Even if they get bored after 1-2 hours, the effects of such a scan on the ND table could be catastrophic given the protocol's default behavior. How many virus writers will utilize a hinted scan technique, which could still end up scanning thousands of v6 addresses per /64 and following consecutive /64s, which likely are handled by the same router? It is not the intentional that we should fear, but the unintentional. Jack
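The "bored after 1-2 hours" scenario above can be quantified with assumed, deliberately modest numbers (both the scan rate and the ND cache capacity below are illustrative assumptions):

```python
# Rough numbers for the casual-scan scenario above.  Scan rate and ND
# cache capacity are assumptions chosen to be conservative.

scan_pps   = 1_000          # a modest scan rate
duration_s = 2 * 3600       # attacker gets bored after two hours
cache_size = 16_000         # assumed ND cache capacity

probes = scan_pps * duration_s
print(f"{probes:,} probes vs. a {cache_size:,}-entry cache "
      f"({probes // cache_size}x overflow)")
```

Even a curiosity scan at a trickle of traffic sends orders of magnitude more probes than the cache can hold, which is why the unintentional case is worth fearing.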
On Jan 6, 2011, at 11:48 PM, Jack Bates wrote:
It is not the intentional that we should fear, but the unintentional.
This is the single largest issue with IPv6 and the whole ND mess in a nutshell - unintentional DoS becomes much more likely. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Jan 6, 2011, at 11:28 PM, <Valdis.Kletnieks@vt.edu> wrote:
Playing devil's advocate for a moment...
I don't see this as devil's advocacy, since I've said a) we're already hosed (i.e., what you said) and b), we're going to get even more hosed with IPv6. ;> ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Jan 6, 2011, at 1:51 PM, Joe Greco wrote:
There are numerous parallels between physical and electronic security. Let's just concede that for a moment.
I can't, and here's why:
1. In the physical world, attackers run a substantial risk of being caught, and of tangible, severe penalties if that eventuality comes to pass; in the online world, the risk of being caught is nil.
That's not true, and we see examples of it happening periodically.
2. In the physical world, attackers have a limited number and variety of resources they can bring to bear; in the online world, the attackers have near-infinite resources, for all practical purposes.
No, they don't. They have a different set of resources. They may be able to fill your transit connections, but they probably cannot cause your line cards to start on fire, or your switches to come unscrewed from the rack, things that real-world attackers can do. In the physical world, attackers have a near-infinite selection of attacks. If I really want into your house, for example, I can get there. It might be by breaking through a door, or by smashing a window, or (my favorite) by taking my Sawzall and a demolition bit and putting a hole through your wall. I can convince your kids that I'm a policeman and there's a bad man in the house. I can sleep with your wife and gain access that way. We see parallels in the online world, different, but vulnerable as well.
3. In the physical world, the attackers generally possess neither the ability nor the desire to bring the whole neighborhood crashing down around the ears of the defenders; in the online world, they almost always have the ability, and often the desire, to do just that.
So? That's a matter of what the goal of the attack is. In the physical world, we do indeed have some attackers who possess the ability and desire to bring whole neighborhoods crashing down; we lost some great real estate about ten years ago in lower Manhattan due to such nutjobs, and suicide bombers are a fact of life in some areas of the world. Electronic attacks are more likely to result in electronic "crashing down" for a variety of reasons, one of which is that overwhelming things electronically is fairly easy and effective, but the flip side to that is that the resulting damage is often just a short-term outage (PayPal, Mastercard, etc., all seem to be back online after recent attacks). The fact that there are some differences between physical and electronic security doesn't mean that there aren't also many parallels. It's probably hard to permanently destroy electronic infrastructure. Certain attacks, such as on the facility (kill the cooling, rapidly toggle the power, etc) might be effective in that sense. It's easier to destroy stuff during a physical attack. So that's different, fine. However, the point of security is still to try to convince a bad guy to go elsewhere, to find an easier target. When he has it out for you, though, it's basically a matter of whether or not he's willing to do what is necessary. That concept works for both the real world and for the online world.
Making it harder to scan a network *can* and *does* deter certain classes of attacks.
But as I've tried to make clear, a) I don't believe that sparse addressing does in fact make it harder to scan the network, due to hinted scanning via DNS/routing/whois/ND/multicast,
You don't have to believe it. It certainly doesn't make it harder in all cases, either. No amount of randomization will make "www.foobar.com" less readily identifiable with an AAAA pointing at it. But there are other use cases. Consider, for example, /56 allocations to end users on a service provider's network. There'll be no DNS/routing/whois vectors there; there might be ND/multicast vectors of some sort. The point is, though, that the guy with a /56 at the end of a cablemodem will be effectively unscannable if he's using randomly-selected 4941 IP addresses. And getting all righteous about firewall configurations and how he should have a transparent proxying firewall is fine, I agree, but the *real* world is that when his buddy tells him that he's having problems running WoW because of the firewall and he can do ${FOO} to turn it off, he's going to do that, because users are results-oriented in a way that makes all of us groan. So what I am looking for now is for you to explain to me how an end-user with a /56 (or even a /64!) on a cablemodem is not "harder to scan".
b) I believe that pushing the attackers towards hinted scanning will have severe second-order deleterious effects on DNS/network infrastructure/whois, resulting in an overall loss in terms of security posture,
I don't buy that. I believe that things like DNS and whois are natural candidates for additional layers of application level protection, and that application level protection scales more readily than things done closer to the wire. We're already seeing whois services protected by query-rate limits, and there's no reason DNS cannot be protected similarly.
and c) I don't believe that attackers will cease pseudo-randomized scanning, and d) I believe that in fact they will throw vastly more resources at both hinted and pseudo-randomized scanning, that they have near-infinite resources at their disposal (with an ever-expanding pool of potential resources to harness), and that the resultant increase in scanning activity will also have severely deleterious second-order effects on the security posture of the Internet as a whole.
Fair enough. I see where you're coming from and why you believe that, and it might even become true. On the flip side, however, I would point out that attackers have had vastly more resources made available to them in part *because* IPv4 has been so easily scanned and abused. To be sure, a lot of viruses have spread via e-mail spam and drive-by downloads, and sparse addressing will not prevent script kiddies from banging away on ssh brute force attacks against www.yoursite.com. But there's been a lot of spread through stupidity as well. Further, the sheer magnitude of the task of random scanning means that any actual random scanning of /64 networks will be ineffective; this leaves us to discuss ways to minimize the "pseudo" in pseudo-random scanning, and to see what can be done about hinted scanning. I think there's room for some constructive discussion there.
In short, I'm starting from a substantially different, far more pessimistic set of base premises, and therefore draw a far more negative set of resulting inferences.
I hope you'll understand that I'm trying to get a feel for all of that.
I don't believe the sky is falling; I believe it's already fallen, and that we're just now starting to come to grips with some of the ramifications of its fall.
In my view, an IPv6 Internet is considerably less secure, and inherently less securable, than the present horribly insecure and barely securable IPv4 Internet; furthermore, I believe that many of the supposed 'security' measures being touted for IPv6 are at best placebos, and at worst are iatrogenic in nature.
I don't see that. I see potential issues with ND, for example, but I don't see the potential for things like 4941 as "considerably less secure." Unless you're one of the people who are in favor of running everything through NAT as a form of "firewall", or things like that. I understand the desire there, too, though I think it's horribly broken... ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
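Joe's "sheer magnitude" point about brute-force scanning a /64 is easy to check. At an assumed (and generous) probe rate, sweeping a single /64 whose hosts use randomized RFC 4941 interface IDs takes geological time:

```python
# Time to brute-force scan one /64.  The probe rate is an assumed,
# deliberately generous figure.

probes_per_second = 1_000_000
addresses = 2 ** 64

seconds = addresses / probes_per_second
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:,.0f} years to sweep one /64")
```

This is why the argument in this thread shifts from random scanning to hinted scanning (DNS, whois, routing, ND/multicast) on one side, and to router-state exhaustion on the other: the raw address space itself is not enumerable.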
On 1/6/2011 10:44 AM, Joe Greco wrote:
On the flip side, however, I would point out that attackers have had vastly more resources made available to them in part *because* IPv4 has been so easily scanned and abused. To be sure, a lot of viruses have spread via e-mail spam and drive-by downloads, and sparse addressing will not prevent script kiddies from banging away on ssh brute force attacks against www.yoursite.com. But there's been a lot of spread through stupidity as well.
A randomly set up ssh server without DNS will find itself brute-force attacked. Darknets are set up specifically for detection of scans. One side effect of v6 is determining how best to deploy darknets, as we can't just take one or two blocks to do it anymore. We'll need to interweave the darknets with the production blocks. I wish it were possible via DHCPv6-PD to assign a block minus a sub-block (hey, don't use this /64 in the /48 I gave you). It could be that darknets will have to go, and flow analysis is all we'll be left with. Jack
On 6 Jan 2011, at 17:17, Jack Bates wrote:
A randomly set up ssh server without DNS will find itself brute-force attacked. Darknets are set up specifically for detection of scans. One side effect of v6 is determining how best to deploy darknets, as we can't just take one or two blocks to do it anymore. We'll need to interweave the darknets with the production blocks. I wish it were possible via DHCPv6-PD to assign a block minus a sub-block (hey, don't use this /64 in the /48 I gave you). It could be that darknets will have to go, and flow analysis is all we'll be left with.
As RFC6018 suggests, this could be done dynamically on any given active subnet. Tim
On 1/7/2011 8:17 AM, Tim Chown wrote:
As RFC6018 suggests, this could be done dynamically on any given active subnet.
Unfortunately, I don't see support for it in major router vendors for service providers. Currently, flow + ARP/ND/routing tables are utilized to determine a variety of situations, but even then, flow collection is limited at higher speeds. I considered a 1-in-200 approach, but the iBGP tables will go through the roof for a single DHCPv6 pool in a single pop. I have a worse problem with darknets than those scanning have with scanning a /64, especially since their scans are likely to be more targeted and not as random. Jack
On Thu, Jan 6, 2011 at 12:54 AM, Joe Greco <jgreco@ns.sol.net> wrote:
I'm starting off with the assumption that knowledge of the host address *might* be something of value. If it isn't, no harm done. If it is, and the address becomes virtually impossible to find, then we've just defeated an attack, and it's hard to see that as anything but positive.
I'm starting off with the assumption that the layer-3 network needs to function for the host machines to be useful. Your position is to just hand any attacker an "off switch" and let them disable any (LAN | router, depending on router failure mode) for which they know the subnet exists, whether or not they know any of its host addresses. This is a little like spending money on man-traps and security guards, but running all your fiber through obvious ducts in a public parking garage. It may be hard to compromise the hosts, but taking them offline is trivial. On Thu, Jan 6, 2011 at 1:01 AM, Kevin Oberman <oberman@es.net> wrote:
I am amazed at the number of folks who seem to think that there is time to change IPv6 in ANY significant way. Indeed, that ship has sailed. If your network is not well along in getting ready for IPv6, you are probably well on your way to being out of business.
There are many things that can change very easily. Vendors can add knobs, subnet size can get smaller (it works just fine today, it just isn't "standard"), and so on. A TCP session today looks a lot different than it did in the mid-90s. Now we have things like SYN cookies and window scaling; we even went through the "hurry up and configure TCP MD5 on your BGP just in case." Fixing this problem by deploying subnets as a /120 instead of a /64 is a lot easier than any of those changes to TCP, which all required operating system modifications on one or both sides.

How many networks honor IP record-route or source routing, or make productive use of redirects (if they have not outright disabled them)? How many networks decided to block all ICMP traffic because some clueless employee told them it was smart? CIDR routing? Do you recall that the TTL field in IP headers was originally not a remaining-hops count but actually a value in seconds (hence "Time To Live")? IPv4, and the things built on top of it, have evolved tremendously, some of it all the way down to the host network stack. A lot of this evolution took place before it was common to conduct a credit card transaction over the Internet, at a time when it really was not mission-critical for most operators.

IPv6 is still not there, but I agree, we are rapidly approaching that time, and much more than 90% of IPv4 networks have a lot of work to do. It would be good to see LANs smaller than /64 accepted sometime before IPv6 does become widely deployed to end users. Or some other practical solution to the problems of huge subnet sizes, whatever those solutions may be. My guess is there may be other, very significant, challenges to having huge LAN subnets. This is one we actually know about, but are choosing not to solve. -- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
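To put numbers on the /64-versus-/120 comparison, a quick back-of-the-envelope sketch (the 16K neighbor-table size below is an assumed illustrative figure, not any specific platform's limit):

```python
# Host bits available in each proposed subnet size.
slash64_hosts = 2 ** (128 - 64)    # standard IPv6 LAN: ~1.8e19 addresses
slash120_hosts = 2 ** (128 - 120)  # 256 addresses, akin to an IPv4 /24

# Assumed neighbor-table capacity, for illustration only.
nd_table_size = 16 * 1024

# In a /64, the pool of never-resolvable addresses an attacker can probe
# dwarfs the table by ~15 orders of magnitude.
print(slash64_hosts // nd_table_size)   # 2**50 distinct table-fills available

# A /120 can be exhaustively resolved and still fit comfortably in the table.
print(slash120_hosts <= nd_table_size)  # True
```

The point of the sketch is only that the mitigation is arithmetic, not protocol surgery: with a /120, the worst case an attacker can induce is bounded by the subnet itself.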
On Jan 5, 2011, at 9:17 PM, Joe Greco wrote:
It has nothing to do with "security by obscurity".
You may wish to re-read what Joe was saying - he was positing sparse addressing as a positive good because it will supposedly make it more difficult for attackers to locate endpoints in the first place, i.e., security through obscurity. I think that's an invalid argument.
That's not necessarily security through obscurity. A client that just picks a random(*) address in the /64 and sits on it forever could be reasonably argued to be doing a form of security through obscurity. However, that's not the only potential use! A client that initiates each new outbound connection from a different IP address is doing something Really Good.
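As a hypothetical illustration of the address cycling Joe describes (a sketch only; `random_address` is an invented helper, and real deployments would use something like RFC 4941 privacy addresses rather than this):

```python
import random
import ipaddress

def random_address(prefix: str) -> ipaddress.IPv6Address:
    """Pick a uniformly random host address inside a given /64 (invented helper)."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64
    # 64 random bits of interface identifier: 2**64 possible addresses.
    return net[random.getrandbits(64)]

# Each new outbound connection could bind a fresh source address like this.
addr = random_address("2001:db8:0:1::/64")
print(addr)
```

Whether an OS should do this per-connection, and what it costs the first-hop router, is exactly the dispute in the replies that follow.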
If hosts start cycling their addresses that frequently, don't you run the risk of that becoming a form of DOS on your router's ND tables? Owen
On Thu, Jan 6, 2011 at 6:46 PM, Owen DeLong <owen@delong.com> wrote:
On Jan 5, 2011, at 9:17 PM, Joe Greco wrote:
However, that's not the only potential use! A client that initiates each new outbound connection from a different IP address is doing something Really Good. If hosts start cycling their addresses that frequently, don't you run the risk of that becoming a form of DOS on your router's ND tables?
Of course, Owen. I replied to that specific point in Joe's post earlier, although I have written so much on this thread, I have tried to condense my replies, so anyone reading in thread mode may have missed it.

The fact that Joe even makes that suggestion signals how little understanding he has of the problem. His idea would DoS his own router. There are many posts on this thread from folks who think of themselves as experts, at least enough to try and tell me that I'm wrong, when they lack basic understanding of how the forwarding process works in operation. That is what everyone should be afraid of -- most of the "experts" aren't, and almost no one has practical experience with a mission-critical IPv6 network, so conditions like this remain unanalyzed. It took a long time to discover a lot of vulnerabilities as the Internet grew from academia to everyday necessity. We are all now making some obvious, unnecessary mistakes with IPv6 deployments.

It is also crucial to understand that some platforms use the same resources (in control plane or data plane) for ARP and NDP tables and resolution, and this means that some dual-stack networks will see their IPv4 networks melt down due to problems with their IPv6 network design and implementation. If you are dual-stack, this is probably not a problem confined to v6 traffic flowing through your network; it may also take out your mission-critical v4 services. If you don't know, then you need to admit you don't know and find out what the failure mode of your routers is, before your network blows up in your face. -- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
On Thu, Jan 6, 2011 at 6:46 PM, Owen DeLong <owen@delong.com> wrote:
On Jan 5, 2011, at 9:17 PM, Joe Greco wrote:
However, that's not the only potential use! A client that initiates each new outbound connection from a different IP address is doing something Really Good. If hosts start cycling their addresses that frequently, don't you run the risk of that becoming a form of DOS on your router's ND tables?
Of course, Owen. I replied to that specific point in Joe's post earlier, although I have written so much on this thread, I have tried to condense my replies, so anyone reading in thread mode may have missed it.
The fact that Joe even makes that suggestion signals how little understanding he has of the problem. His idea would DoS his own router.
With today's implementations of things? Perhaps. However, you show yourself equally incapable of grasping the real problem by looking at the broader picture, and recognizing that problematic issues such as finding hosts on a network are very solvable problems, and that we are at an early enough phase of IPv6 that we can even expect some experiments will be tried. Look beyond what _is_ today and see if you can figure out what it _could_ be. There's no need for what I suggest to DoS a router; that's just accepting a naive implementation and saying "well this can't be done because this one way of doing it breaks things." It is better to look for a way to fix the problem. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Thu, Jan 6, 2011 at 9:24 PM, Joe Greco <jgreco@ns.sol.net> wrote:
With today's implementations of things? Perhaps. However, you show yourself equally incapable of grasping the real problem by looking at the broader picture, and recognizing that problematic issues such as finding hosts on a network are very solvable problems, and that we are at an early enough phase of IPv6 that we can even expect some experiments will be tried.
Look beyond what _is_ today and see if you can figure out what it _could_ be. There's no need for what I suggest to DoS a router; that's just accepting a naive implementation and saying "well this can't be done because this one way of doing it breaks things." It is better to look for a way to fix the problem.
Actually, unlike most posters on this subject, I have a very good understanding of how everything works "under the hood." For this reason, I also understand what is possible given the size of a /64 subnet and the knowledge that we will never have adjacency tables approaching this size. If you are someone who thinks, oh, those Cisco and Juniper developers will figure this out, they just haven't thought about it hard enough yet, I can understand why you believe that a simple fix like "no ip directed-broadcast" is on the horizon. Unfortunately, it is not. The only thing they can do is give more mitigation knobs to allow operators to choose our failure modes and thresholds. To really fix it, you need a smaller subnet or a radical protocol change that will introduce a different set of problems. -- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
On Jan 6, 2011, at 7:13 PM, Jeff Wheeler wrote:
On Thu, Jan 6, 2011 at 9:24 PM, Joe Greco <jgreco@ns.sol.net> wrote:
With today's implementations of things? Perhaps. However, you show yourself equally incapable of grasping the real problem by looking at the broader picture, and recognizing that problematic issues such as finding hosts on a network are very solvable problems, and that we are at an early enough phase of IPv6 that we can even expect some experiments will be tried.
Look beyond what _is_ today and see if you can figure out what it _could_ be. There's no need for what I suggest to DoS a router; that's just accepting a naive implementation and saying "well this can't be done because this one way of doing it breaks things." It is better to look for a way to fix the problem.
Actually, unlike most posters on this subject, I have a very good understanding of how everything works "under the hood." For this reason, I also understand what is possible given the size of a /64 subnet and the knowledge that we will never have adjacency tables approaching this size.
If you are someone who thinks, oh, those Cisco and Juniper developers will figure this out, they just haven't thought about it hard enough yet, I can understand why you believe that a simple fix like "no ip directed-broadcast" is on the horizon. Unfortunately, it is not. The only thing they can do is give more mitigation knobs to allow operators to choose our failure modes and thresholds. To really fix it, you need a smaller subnet or a radical protocol change that will introduce a different set of problems.
I think I have a pretty good understanding of what happens under the hood, too. The reality is that what you say is theoretically possible, but not terribly practical from an attacker perspective. It's pretty trivial to block these attacks from threats outside your network, or at least severely limit the number of attackable addresses within the individual network. Smaller network segments are not necessary to reduce the attackable profile of the network segment.

Yes, a determined host within your network segment can DoS the network segment this way. Guess what... If you've got a determined attacker on your network segment, you've already lost on multiple other levels, so this might even be a feature. As such, while the issue you bring up can be a problem for a poorly administered network, I think you overstate its viability as an attack vector in most real-world instances. Owen
On Jan 5, 2011, at 9:17 PM, Joe Greco wrote:
It has nothing to do with "security by obscurity".

You may wish to re-read what Joe was saying - he was positing sparse addressing as a positive good because it will supposedly make it more difficult for attackers to locate endpoints in the first place, i.e., security through obscurity. I think that's an invalid argument.

That's not necessarily security through obscurity. A client that just picks a random(*) address in the /64 and sits on it forever could be reasonably argued to be doing a form of security through obscurity. However, that's not the only potential use! A client that initiates each new outbound connection from a different IP address is doing something Really Good.

If hosts start cycling their addresses that frequently, don't you run the risk of that becoming a form of DOS on your router's ND tables?
It could, but given the changes we've seen in the last twenty years, I have no reason to expect that this won't become practical and commonplace in IPv6. I think it is a matter of finding the right enabling technologies, and as others have noted, what currently exists for IPv6 isn't necessarily the best-of-breed. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam(CNN) With 24 million small businesses in the US alone, that's way too many apples.
On Wednesday, January 05, 2011 10:42:25 pm George Bonser wrote:
I don't think you are understanding the problem. The problem comes from addressing hosts that don't even exist. This causes the router to attempt to find that host. The v6 equivalent of ARP. At some point that table becomes full of entries for hosts that don't exist so there isn't room for hosts that do exist.
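George's scenario can be sketched as a toy simulation (the four-entry table and the state names are deliberately tiny and illustrative; real neighbor tables hold thousands of entries, but a /64 offers 2^64 targets):

```python
from collections import OrderedDict

ND_TABLE_MAX = 4          # tiny illustrative limit; real tables hold thousands
nd_table = OrderedDict()  # destination address -> "REACHABLE" or "INCOMPLETE"

def packet_arrives(dst, host_exists):
    """Router receives a packet for dst on a directly attached /64."""
    if dst in nd_table:
        return nd_table[dst]
    if len(nd_table) >= ND_TABLE_MAX:
        return "DROPPED"  # no room left to even begin resolution
    # The router must solicit the neighbor; the entry occupies a slot
    # as INCOMPLETE in the meantime, whether or not the host exists.
    nd_table[dst] = "REACHABLE" if host_exists else "INCOMPLETE"
    return nd_table[dst]

# An attacker sweeps nonexistent addresses and fills the table...
for i in range(10):
    packet_arrives(f"2001:db8::dead:{i}", host_exists=False)

# ...so a packet for a real, previously unseen host is collateral damage.
print(packet_arrives("2001:db8::1", host_exists=True))  # DROPPED
```

Everything after the table fills depends on the implementation's eviction policy, which is exactly the "choose your failure mode" knob discussed later in the thread.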
Ok, perhaps I'm dense, but why is the router going to try to find a host that it already doesn't know based on an unsolicited outside packet? Why is the router trusting the outside's idea of what addresses are active, and why isn't the router dropping packets on the floor destined to hosts on one of its interfaces' local subnets that it doesn't already know about?

If the packet is a response to a request from the host, then the router should have seen the outgoing packet (or, in the case of HSRP-teamed routers, all the routers in the standby group should be keeping track of all hosts, etc) and it should already be in the neighbor table. Sounds a bit too much like ATM SVC addressing and the old LANE business for my liking.

Like I said, perhaps I'm dense and ignorant and just simply misunderstanding the issue, but I still find it hard to believe that a router would blindly trust an outside address to know about an inside address that is not already in the router's neighbor table. In the case of a server (the only case I can see for such an unsolicited packet), I would think that it would be in the router's neighbor table already, or at least the server's OS should take pains to make sure it's in the neighbor table already!
On 6 Jan 2011, at 15:10, Lamar Owen wrote:
Ok, perhaps I'm dense, but why is the router going to try to find a host that it already doesn't know based on an unsolicited outside packet? Why is the router trusting the outside's idea of what addresses are active, and why isn't the router dropping packets on the floor destined to hosts on one of its interfaces' local subnets that it doesn't already know about?
If the packet is a response to a request from the host, then the router should have seen the outgoing packet (or, in the case of HSRP-teamed routers, all the routers in the standby group should be keeping track of all hosts, etc) and it should already be in the neighbor table.
There's some interesting discussion around this point in RFC6018, which discusses the use of greynet monitoring in sparsely populated IPv6 subnets. This approach may be one method to help detect and or mitigate such attacks. Tim
On Thu, 6 Jan 2011, Lamar Owen wrote:
Ok, perhaps I'm dense, but why is the router going to try to find a host that it already doesn't know based on an unsolicited outside packet? Why is the router trusting the outside's idea of what addresses are active, and why isn't the router dropping packets on the floor destined to hosts on one of its interfaces' local subnets that it doesn't already know about?
Because the standard says it should do that.
If the packet is a response to a request from the host, then the router should have seen the outgoing packet (or, in the case of HSRP-teamed routers, all the routers in the standby group should be keeping track of all hosts, etc) and it should already be in the neighbor table.
Are you trying to abolish the end to end principle of the Internet by implementing stateful firewalls in all routers?
Like I said, perhaps I'm dense and ignorant and just simply misunderstanding the issue, but I still find it hard to believe that a router would blindly trust an outside address to know about an inside address that is not already in the router's neighbor table.
That's how it's always worked, both for v4 and v6. -- Mikael Abrahamsson email: swmike@swm.pp.se
On 1/6/2011 9:27 AM, Mikael Abrahamsson wrote:
On Thu, 6 Jan 2011, Lamar Owen wrote:
Ok, perhaps I'm dense, but why is the router going to try to find a host that it already doesn't know based on an unsolicited outside packet? Why is the router trusting the outside's idea of what addresses are active, and why isn't the router dropping packets on the floor destined to hosts on one of its interfaces' local subnets that it doesn't already know about?
Because the standard says it should do that.
The standard was broken with arp, and continues to be broken with NDP. Routers should not handle things the same as normal hosts.
If the packet is a response to a request from the host, then the router should have seen the outgoing packet (or, in the case of HSRP-teamed routers, all the routers in the standby group should be keeping track of all hosts, etc) and it should already be in the neighbor table.
Are you trying to abolish the end to end principle of the Internet by implementing stateful firewalls in all routers?
Not stateful firewalls. He's referring to neighbor learning based on incoming traffic to the router from the trusted side. ie, I received a packet from the server, so I will add his MAC to my neighbor table. There are many methods for learning MAC addresses, though. DHCP/MAC security with static ARP and other viable options have properly killed this problem in v4 by routers not looking for unknown neighbors.
Like I said, perhaps I'm dense and ignorant and just simply misunderstanding the issue, but I still find it hard to believe that a router would blindly trust an outside address to know about an inside address that is not already in the router's neighbor table.
That's how it's always worked, both for v4 and v6.
It's how it works, but not how it should work. In recent years, v4 has seen some nice implementations that are specifically designed (especially for eyeball networks who have vast pools of space) to keep routers from sending unsolicited ARP requests and maintaining only a valid pool of mappings. That is how the protocols should have been designed in the first place.

Host-to-host communications are one thing. Router-to-host communications should be designed with the idea that the host needs to tell the router who it is, not the router asking. This keeps packets from unknown hosts from causing these table issues. There are also security measures (some of the above are designed for this) dealing with local abuse and hijacking, but that is a separate issue. This is about resource exhaustion, and policing/ACLs aren't the proper fix. Having hosts (in a secure or insecure manner) notify the router of their mapping is the appropriate fix. Protocol-wise, insecure is fine, wrapped with an extra layer of security (as security can have multiple implementations). Jack
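Jack's "host tells the router who it is" model amounts to gleaning mappings only from inside-originated traffic, never soliciting for unknowns. A minimal sketch of that idea (function names invented for illustration):

```python
neighbor_table = {}  # ip -> mac, populated only from the trusted (LAN) side

def packet_from_lan(src_ip, src_mac):
    """Host traffic arriving on the inside interface registers its own mapping."""
    neighbor_table[src_ip] = src_mac

def packet_from_wan(dst_ip):
    """Outside packets never trigger solicitation for unknown inside hosts."""
    if dst_ip in neighbor_table:
        return "FORWARD"
    return "DROP"  # unknown inside address: drop instead of soliciting

packet_from_lan("2001:db8::10", "00:11:22:33:44:55")
print(packet_from_wan("2001:db8::10"))    # FORWARD
print(packet_from_wan("2001:db8::beef"))  # DROP
```

The table can only grow as fast as real hosts speak, so an outside scan of the /64 has nothing to exhaust; the open question, raised by Mikael below, is how hosts re-register after the router loses state.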
On Thu, 6 Jan 2011, Jack Bates wrote:
Not stateful firewalls. He's referring to neighbor learning based on incoming traffic to the router from the trusted side. ie, I received a packet from the server, so I will add his MAC to my neighbor table. There are many methods for learning MAC addresses, though. DHCP/MAC security with static ARP and other viable options have properly killed this problem in v4 by routers not looking for unknown neighbors.
When people start to talk about "trusted side" etc, I immediately think firewalls and not plain routing. I don't trust anyone, neither my customers, nor Internet. I guess it might make sense to have the host register address usage (in the SLAAC case) with the router, and the router having a mechanism to broadcast/multicast to everybody that "I lost my state mac/ip table, please re-register" so they can do it again.
It's how it works, but not how it should work. In recent years, v4 has seen some nice implementations that are specifically designed (especially for eyeball networks who have vast pools of space) to keep routers from sending unsolicited ARP requests and maintaining only a valid pool of mappings.
In the DHCP case this is easy, yes. I prefer to have only LL on the link towards the customer-operated CPE, thus I don't really need to keep lots of ND state per customer. -- Mikael Abrahamsson email: swmike@swm.pp.se
On 1/6/2011 9:52 AM, Mikael Abrahamsson wrote:
In the DHCP case this is easy, yes.
I prefer to have only LL on the link towards the customer-operated CPE, thus I don't really need to keep lots of ND state per customer.
I use RBE and unnumbered vlans in most areas, which keeps some state, but effectively prohibits the problem, as well as other problems. I have vendors curse me for wanting the router to handle the security instead of their DSLAMs, but then their DSLAMs often broke IPv6 with their so called security. Jack
On Thursday, January 06, 2011 10:27:54 am you wrote:
On Thu, 6 Jan 2011, Lamar Owen wrote:
Ok, perhaps I'm dense, but why is the router going to try to find a host that it already doesn't know based on an unsolicited outside packet?
Because the standard says it should do that.
Since when have standards been blindly followed by vendors? If I were an IPv6 router vendor, I'd code up a 'drop the packet if it's destined for an address in a directly attached subnet but that doesn't already have a neighbor table entry' knob and sell it as a high-priced security add-on to my already bloated product line....

Actually, thinking like a coder, it would be removing the code that punts to neighbor discovery on receipt of an outside-the-destination-subnet packet destined to an address that's not in the neighbor table (and is an address within one of the router's directly attached subnets), and wouldn't require any additional CPU (or hardware punt to neighbor discovery) to implement. Could even be sold as a forwarding performance improvement (for incoming-to-the-subnet packets only, obviously). And then allow an 'icmp-host-unreachable' to either be returned or not, according to the policy of the subnet in question.

Standards are written by people, of course, and most paragraphs have reasons to be there; I would find it interesting to hear the rationale for a router filling a slot in its neighbor table for a host that doesn't exist. For that matter, I'd like to see a pointer to which standard says this so I can read the verbiage myself, as that may have enough explanation to satisfy my curiosity.
If the packet is a response to a request from the host, then the router should have seen the outgoing packet (or, in the case of HSRP-teamed routers, all the routers in the standby group should be keeping track of all hosts, etc) and it should already be in the neighbor table.
Are you trying to abolish the end to end principle of the Internet by implementing stateful firewalls in all routers?
Not at all; end to end is fine, but if there is no end to send a packet to, that packet should be dropped and not blindly trusted (since it will be abused for sure) by the router serving the destination subnet, which is the only router that is in a position to know if the endpoint exists or not. Dropping in this case means 'don't punt to discovery for this packet' and isn't blocking, it's just not taking the extra effort to look up something it already doesn't know. Not what I consider a stateful firewall. This reminds me somewhat of some IPv4 routers doing Proxy ARP by default.
Like I said, perhaps I'm dense and ignorant and just simply misunderstanding the issue, but I still find it hard to believe that a router would blindly trust an outside address to know about an inside address that is not already in the router's neighbor table.
That's how it's always worked, both for v4 and v6.
Sounds like I need to study it in more depth, but I'm still having a hard time seeing why such behavior is a good idea. Time to break out the wireshark laptop and do some SPANning.... and to see if I can find the reference in the RFC's somewhere. Thanks for the info.
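Lamar's hypothetical knob boils down to a one-line change in the ingress decision. A sketch (the knob name, policy flag, and return values are all invented for illustration, not any vendor's CLI):

```python
def ingress_decision(dst, neighbor_table,
                     drop_unknown=True, send_unreachable=False):
    """Decision for a packet arriving from outside one of our attached subnets."""
    if dst in neighbor_table:
        return "FORWARD"
    if drop_unknown:
        # The proposed knob: never punt to neighbor discovery for
        # outside-originated packets; optionally reveal the host's absence.
        return "ICMP_UNREACHABLE" if send_unreachable else "SILENT_DROP"
    return "PUNT_TO_ND"  # today's RFC-mandated behavior

print(ingress_decision("2001:db8::1", {}))                      # SILENT_DROP
print(ingress_decision("2001:db8::1", {}, drop_unknown=False))  # PUNT_TO_ND
```

As Mikael's reply notes, the cost of this policy is that legitimate first-contact traffic to a quiet host also gets dropped until the host speaks, which is why it resembles (but is not) a stateful firewall.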
Hey Lamar, long time no talk. On 1/6/2011 10:16 AM, Lamar Owen wrote:
Standards are written by people, of course, and most paragraphs have reasons to be there; I would find it interesting to hear the rationale for a router filling a slot in its neighbor table for a host that doesn't exist. For that matter, I'd like to see a pointer to which standard that says this so I can read the verbiage myself, as that may have enough explanation to satisfy my curiosity.
This actually came up last week in freenode/#ipv6; someone was puzzled why there were FAIL entries showing up in their neighbor table, so I dug into the RFC I found for ND (2461). Turns out, it specifically says entries for failed solicitations SHOULD be deleted. http://tools.ietf.org/html/rfc2461#section-7.3.3 It's the seventh paragraph into that section, including the indented Note. ("Upon entering the PROBE state...") Pardon me if that's the wrong RFC. Jima
On 1/5/2011 10:18 PM, Dobbins, Roland wrote:
This whole focus on sparse addressing is just another way to tout security-by-obscurity. We already know that security-by-obscurity is a fundamentally-flawed concept, so it doesn't make sense to try and keep rationalizing it in various domain-specific instantiations.
I agree. It's not the hosts I'm worried about protecting, it's the potential noise directed at the IPv6 space: intentional or irrational scans, or otherwise-generated traffic.

Still, the idea that "nobody will scan a /64" reminds me of the days when 640K ought to be enough for anybody, 56-bit DES ought to be good enough to never be cracked, 10 megabits was astoundingly fast, a T1 was more than enough commodity, and a 300-baud acoustic coupler was a modern marvel. I hesitate to write anything off to impossibility, having witnessed the 8 to 16 to 32 to 64-bit processor progression :) But perhaps it's time for Moore to rest and we can make assumptions about that impossibility.

Scanned or not, IPv6 still presents a "very large" route target. Given the transient / spoofed / backscatter / garbage / scan / script kiddie noise that accidentally lands in my IPv4 space, I shudder to think of the noise level of the many-orders-of-magnitude-greater IPv6 space. And the "depth" of infrastructure at which you can decide the traffic is bogus is much greater with IPv6. Most will end up on the target network anyway, no? Jeff
On Jan 6, 2011, at 11:21 AM, Jeff Kell wrote:
I hesitate to write anything off to impossibility, having witnessed the 8 to 16 to 32 to 64-bit processor progression :)
Indeed; how quickly we forget, eh? ;>
And the "depth" of infrastructure at which you can decide the traffic is bogus is much greater with IPv6.
/s/can/have the ability to
Most will end up on the target network anyway, no?
And that's the real point. ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
Still, the idea that "nobody will scan a /64" reminds me of the days when 640K ought to be enough for anybody, ...
We really need to wrap our heads around the orders of magnitude involved here. If you could scan an address every nanosecond, which I think is a reasonable upper bound what with the speed of light and all, it would still take 500 years to scan a /64. Enumerating all the addresses will never be practical. But there's plenty of damage one can do with a much less than thorough enumeration.
And the "depth" of infrastructure at which you can decide the traffic is bogus is much greater with IPv6. Most will end up on the target network anyway, no?
I get the impression that we're just beginning to figure out all the ways that bad things can happen when friends or foes start using all those addresses. For example, over in the IRTF ASRG list we're arguing about what to do with IP based blacklists and whitelists, since spammers could easily use a unique IP address for every message they ever send. (Please don't argue about that particular issue here, but feel free to do so in the ASRG.) Regards, John Levine, johnl@iecc.com, Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail. http://jl.ly
On 06/01/11 16:01, John Levine wrote:
Still, the idea that "nobody will scan a /64" reminds me of the days when 640K ought to be enough for anybody, ...
We really need to wrap our heads around the orders of magnitude involved here. If you could scan an address every nanosecond, which I think is a reasonable upper bound what with the speed of light and all, it would still take 500 years to scan a /64. Enumerating all the addresses will never be practical. But there's plenty of damage one can do with a much less than thorough enumeration.
I'm probably ruining an interview question from $COMPANYTHATDIDN'THIREME, but think just of a 64-bit counter: *if* you had the ability to iterate through 32 bits every second[1], it still takes ~136 years to go all the way through 64 bits. I don't know about you, but that doesn't worry me. At that point it's a straight bandwidth DoS. What makes much more sense is mapping the first /112 or so of a subnet and the last /112 or so, which will catch most static hosts and routers; then, if you really want, just iterate through the 2^46 valid assigned MACs[2], much less if you make some assumptions about which OUIs are likely to exist on a subnet[3]. Julien 1: i.e., think of a 4.3-ish GHz CPU that can do "i++ and jump to 0" in a single instruction 2: One bit lost for broadcast, one bit for local/global addresses 3: Skipping all unassigned is obvious, but there's a huge amount that will match systems you'll never care about; 2^36 is probably not far off.
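The arithmetic in these last two posts holds up; a quick check in Python (John's 500-year figure is the right ballpark: exactly one address per nanosecond works out to roughly 585 years):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# John's figure: one address per nanosecond across a full /64.
years_at_1ns = 2**64 * 1e-9 / SECONDS_PER_YEAR
print(round(years_at_1ns))   # ~585 years

# Julien's figure: 2^32 addresses per second across the same /64.
years_at_2_32 = 2**64 / 2**32 / SECONDS_PER_YEAR
print(round(years_at_2_32))  # ~136 years

# The EUI-64 shortcut: only 2^46 globally assigned MACs to derive IIDs from.
seconds_for_macs = 2**46 / 2**32
print(seconds_for_macs)      # 16384 seconds, about 4.5 hours at that rate
```

This is why the thread keeps distinguishing exhaustive enumeration (hopeless) from hinted or structured scanning (entirely practical).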
On Jan 5, 2011, at 7:18 PM, Dobbins, Roland wrote:
On Jan 6, 2011, at 10:08 AM, Joe Greco wrote:
Packing everything densely is an obvious problem with IPv4; we learned early on that having a 48-bit (32 address, 16 port) space to scan made port-scanning easy, attractive, productive, and commonplace.
I don't believe that host-/port-scanning is as serious a problem as you seem to think it is, nor do I think that trying to somehow prevent host from being host-/port-scanned has any material benefit in terms of security posture, that's our fundamental disagreement.
You are mistaken... Host scanning followed by port sweeps is a very common threat and still widely practiced in IPv4.
If I've done what's necessary to secure my hosts/applications, host-/port-scanning isn't going to find anything to exploit (overly-aggressive scanning can be a DoS vector, but there are ways to ameliorate that, too).
And there are ways to mitigate ND attacks as well.
If I haven't done what's necessary to secure my hosts/applications, one way or another, they *will* end up being exploited - and the faux security-by-obscurity offered by sparse addressing won't matter a bit.
Sparse addressing is a win for much more than just rendering scanning useless, but, making scanning useless is still a win. Owen
On Jan 7, 2011, at 1:20 AM, Owen DeLong wrote:
You are mistaken... Host scanning followed by port sweeps is a very common threat and still widely practiced in IPv4.
I know it's common and widely-practiced. My point is that if the host is secured properly, this doesn't matter; and that if it isn't secured properly, it's going to be found via hinted scanning and exploited anyway.
And there are ways to mitigate ND attacks as well.
As has been pointed out elsewhere in this thread, not to the degree of control and certainty needed in production environments.
Sparse addressing is a win for much more than just rendering scanning useless, but, making scanning useless is still a win.
Since it doesn't make scanning useless (again, hinted scanning), that 'win' is gone. How else is it supposedly a win? ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com> Most software today is very much like an Egyptian pyramid, with millions of bricks piled on top of each other, with no structural integrity, but just done by brute force and thousands of slaves. -- Alan Kay
On Jan 6, 2011, at 3:32 PM, Dobbins, Roland wrote:
On Jan 7, 2011, at 1:20 AM, Owen DeLong wrote:
You are mistaken... Host scanning followed by port sweeps is a very common threat and still widely practiced in IPv4.
I know it's common and widely-practiced. My point is that if the host is secured properly, this doesn't matter; and that if it isn't secured properly, it's going to be found via hinted scanning and exploited anyway.
True, but, that doesn't really matter. Sparse addressing still provides other useful benefits.
And there are ways to mitigate ND attacks as well.
As has been pointed out elsewhere in this thread, not to the degree of control and certainty needed in production environments.
We can agree to disagree here until I see a production environment get taken down by a scan. So far, we've not had a problem with any of the IPv6 scans through our network. All have given up in <8 hours without having caused any sort of ND table overflow issues.
Sparse addressing is a win for much more than just rendering scanning useless, but, making scanning useless is still a win.
Since it doesn't make scanning useless (again, hinted scanning), that 'win' is gone. How else is it supposedly a win?
Not having to worry about room to grow without renumbering is a good thing; I've posted other advantages in an earlier message. It does make sequential scanning useless, and it makes even hinted scanning a bit more difficult or less effective.

Think of the difference between playing Battleship as it is traditionally played, on a simple X,Y grid, vs. playing it on a field where the ships have 180 possible orientations (one per degree, instead of only 0º and 90º). On the traditional board, once you get a hit you need a maximum of 4 additional attempts to identify the orientation of the ship, and 50%+ of the time you can get it in ≤2 additional attempts. On the 180-orientation board, this becomes quite a bit more difficult. Sparse addressing does this even against hinted scanning.

Owen
On 6 Jan 2011, at 18:20, Owen DeLong wrote:
On Jan 5, 2011, at 7:18 PM, Dobbins, Roland wrote:
On Jan 6, 2011, at 10:08 AM, Joe Greco wrote:
Packing everything densely is an obvious problem with IPv4; we learned early on that having a 48-bit (32 address, 16 port) space to scan made port-scanning easy, attractive, productive, and commonplace.
I don't believe that host-/port-scanning is as serious a problem as you seem to think it is, nor do I think that trying to somehow prevent hosts from being host-/port-scanned has any material benefit in terms of security posture; that's our fundamental disagreement.
You are mistaken... Host scanning followed by port sweeps is a very common threat and still widely practiced in IPv4.
In our IPv6 enterprise we have not seen any 'traditional' port scans (across IP space); rather, we see port sweeps on IPv6 addresses that we expose publicly (DNS servers, web servers, MX servers, etc). This is discussed a bit in RFC5157.

We have yet to see any of the ND problems discussed in this thread, mainly, I believe, because our perimeter firewall blocks any such sweeps before they hit the edge router serving the 'attacked' subnet. The main operational problem we see is denial of service caused by unintentional IPv6 RAs from hosts.

I think this is an interesting thread, though, and we'll run some tests internally to see how the issue might (or might not) affect our network.

Tim
On Jan 7, 2011, at 9:23 PM, Tim Chown wrote:
The main operational problem we see is denial of service caused by unintentional IPv6 RAs from hosts.
Which is a whole other can of IPv6 worms, heh. ;> ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
On Fri, Jan 7, 2011 at 09:57, Dobbins, Roland <rdobbins@arbor.net> wrote:
On Jan 7, 2011, at 9:23 PM, Tim Chown wrote:
The main operational problem we see is denial of service caused by unintentional IPv6 RAs from hosts.
Which is a whole other can of IPv6 worms, heh.
But at least we are finally getting the solutions for those - RA-Guard, port ACLs, etc. /TJ (PS - Keep pushing vendors for more widespread support for those!)
On Jan 7, 2011, at 6:23 AM, Tim Chown wrote:
On 6 Jan 2011, at 18:20, Owen DeLong wrote:
On Jan 5, 2011, at 7:18 PM, Dobbins, Roland wrote:
On Jan 6, 2011, at 10:08 AM, Joe Greco wrote:
Packing everything densely is an obvious problem with IPv4; we learned early on that having a 48-bit (32 address, 16 port) space to scan made port-scanning easy, attractive, productive, and commonplace.
I don't believe that host-/port-scanning is as serious a problem as you seem to think it is, nor do I think that trying to somehow prevent hosts from being host-/port-scanned has any material benefit in terms of security posture; that's our fundamental disagreement.
You are mistaken... Host scanning followed by port sweeps is a very common threat and still widely practiced in IPv4.
In our IPv6 enterprise we have not seen any 'traditional' port scans (across IP space), rather we see port sweeps on IPv6 addresses that we expose publicly (DNS servers, web servers, MX servers etc). This is discussed a bit in RFC5157.
Good for you. We have seen actual host-scanning. It hasn't been particularly successful (firing blind into a very large ocean hoping to hit a whale rarely is), but, nonetheless, we've seen scans go at it for up to 8 hours before they were terminated by the originator. (Very little of a /64 gets scanned in 8 hours, however).
We have yet to see any of the ND problems discussed in this thread, mainly I believe because our perimeter firewall blocks any such sweeps before they hit the edge router serving the 'attacked' subnet.
Likewise, we haven't seen them. Not even with the active scanning that has been touted as the likely cause thereof.
The main operational problem we see is denial of service caused by unintentional IPv6 RAs from hosts.
Yep... Push your switch vendors for RA-Guard. This is a very real problem. Right up there with unintentional 6to4 gateways that don't lead anywhere. Owen
On Wed, Jan 5, 2011 at 8:57 PM, Joe Greco <jgreco@ns.sol.net> wrote:
This is a much smaller issue with IPv4 ARP, because routers generally have very generous hardware ARP tables in comparison to the typical size of an IPv4 subnet.
No, it isn't. If you've ever had your Juniper router become unavailable because the ARP policer caused it to start ignoring updates, or seen systems become unavailable due to an ARP storm, you'd know that you can abuse ARP on a rather small subnet.
It may also be worth noting that "typical size of an IPv4 subnet" is a bit of a red herring; a v4 router that's responsible for /16 of directly attached /24's is still able to run into some serious issues.
It is uncommon for publicly-addressed LANs to be this large. The reason is simple: relatively few sites still have such an excess of IPv4 addresses that they can use them in such a sparsely-populated manner. Those that do have had twenty years of operational experience with generation after generation of hardware and software, and they have had every opportunity to fully understand the problem (or redesign the relevant portion of their network.)

In addition, there is not (any longer) a "standard," and a group of mindless zealots, telling the world that a /16 on your LAN is the only right way to do it. This is, in fact, the case with IPv6 deployments, and will drive what customers demand. To understand the problem, you must first realize that myopic standards-bodies have created it, and either the standards must change, operators must explain to their customers why they are not following the standards, or equipment vendors must provide additional knobs to provide a mitigation mechanism that is an acceptable compromise. Do the advantages of sparse subnets outweigh the known security detriments, even if good compromise-mechanisms are provided by equipment vendors?

"Security by obscurity" is an oft-touted advantage of IPv6 sparse subnets. We all know that anyone with a PayPal account can buy a list of a few hundred million email addresses for next to nothing. How long until that is the case with lists of recently-active IPv6 hosts? What portion of attack vectors really depend on scanning hosts that aren't easily found in the DNS, as opposed to vectors depending on a browser click, email attachment, or simply hammering away at "www.*.com" with common PHP script vulnerabilities?

How many people think that massively-sparse subnets are going to save them money? Where will these cost-efficiencies come from? Why can't you gain that advantage by provisioning, say, 10 times as large a subnet as you think you need, instead of seventy-quadrillion times as large?
Is anyone really going to put their Windows Updates off and save money because they are comfortable that their hosts can't be found by random scanning? Is stateless auto-configuration that big a win vs DHCP? Yes, I should have participated in the process in the 1990s. However, just because the bed is already made doesn't mean I am willing to lay my customers in it. These problems can still be fixed before IPv6 is ubiquitous and mission-critical. The easiest fix is to reset the /64 mentality which standards-zealots are clinging to. -- Jeff S Wheeler <jsw@inconcepts.biz> Sr Network Operator / Innovative Network Concepts
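The failure mode Jeff is warning about throughout this thread can be sketched as a toy model (illustrative only; the table size and function names are hypothetical, and real routers use timers and per-state limits rather than this simplified logic): packets destined to random addresses in a /64 each force the router to create an "incomplete" neighbor-cache entry, and once the table fills, resolution for legitimate hosts fails.

```python
# Toy model of NDP neighbor-table exhaustion on a /64 (not real router logic).
import random

random.seed(1234)             # deterministic for the example
TABLE_LIMIT = 4096            # hypothetical hardware neighbor-table size
neighbor_cache = {}

def route_to_subnet(dst_iid):
    """Router must resolve dst via ND, consuming a neighbor-cache slot."""
    if dst_iid in neighbor_cache:
        return True
    if len(neighbor_cache) >= TABLE_LIMIT:
        return False          # table full: new ND resolutions are dropped
    neighbor_cache[dst_iid] = "INCOMPLETE"
    return True

# Attacker sweeps random interface IDs inside the target /64:
for _ in range(10_000):
    route_to_subnet(random.getrandbits(64))

# A legitimate, previously-unseen host can no longer be resolved:
print(route_to_subnet(0x1))   # False
```

The vendor "knobs" the thread asks for amount to smarter policies at the `TABLE_LIMIT` boundary (per-interface caps, eviction preferences, policers) rather than the blunt drop shown here.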
On Wed, Jan 5, 2011 at 8:57 PM, Joe Greco <jgreco@ns.sol.net> wrote:
This is a much smaller issue with IPv4 ARP, because routers generally have very generous hardware ARP tables in comparison to the typical size of an IPv4 subnet.
No, it isn't. If you've ever had your Juniper router become unavailable because the ARP policer caused it to start ignoring updates, or seen systems become unavailable due to an ARP storm, you'd know that you can abuse ARP on a rather small subnet.
It may also be worth noting that "typical size of an IPv4 subnet" is a bit of a red herring; a v4 router that's responsible for /16 of directly attached /24's is still able to run into some serious issues.
It is uncommon for publicly-addressed LANs to be this large. The reason is simple: relatively few sites still have such an excess of IPv4 addresses that they can use them in such a sparsely-populated manner.
Who said anything about sparsely populated? A typical hosting provider might well fit such a general picture.
Those that do have had twenty years of operational experience with generation after generation of hardware and software, and they have had every opportunity to fully understand the problem (or redesign the relevant portion of their network.)
No they haven't. I can think of relatively few networks that have survived twenty years, and the ones that I can think of are mostly .edu. Those of us who have been operating IP networks for that length of time probably see both the flaws in IPv4 and IPv6.
In addition, there is not (any longer) a "standard," and a group of mindless zealots, telling the world that a /16 on your LAN is the only right way to do it. This is, in fact, the case with IPv6 deployments, and will drive what customers demand.
The concepts behind IPv4 classful addressing were flawed, but not unrealistic given the history. Various pressures existed to force the development of CIDR. It's not clear that those same pressures will force IPv6 to develop smaller networks - but other pressures *might*. I've yet to hear convincing reasons as to why they *should*.
To understand the problem, you must first realize that myopic standards-bodies have created it, and either the standards must change, operators must explain to their customers why they are not following the standards, or equipment vendors must provide additional knobs to provide a mitigation mechanism that is an acceptable compromise. Do the advantages of sparse subnets outweigh the known security detriments, even if good compromise-mechanisms are provided by equipment vendors?
Quite frankly, as an interested party, I've been following all this for many years, and I am having a little trouble figuring out what you mean by the "known security detriments" in this context.
"Security by obscurity" is an oft-touted advantage of IPv6 sparse subnets. We all know that anyone with a paypal account can buy a list of a few hundred million email addresses for next to nothing. How long until that is the case with lists of recently-active IPv6 hosts?
Personally, I expect to see IPv6 privacy extensions become commonly used; it's a fairly comprehensive answer to that issue.
What portion of attack vectors really depend on scanning hosts that aren't easily found in the DNS, as opposed to vectors depending on a browser click, email attachment, or by simply hammering away at "www.*.com" with common PHP script vulnerabilities?
I see people scanning our IP space *all* *the* *time*.
How many people think that massively-sparse-subnets are going to save them money?
If it saves me from creeps trawling through our IP space, that's a savings.
Where will these cost-efficiencies come from? Why can't you gain that advantage by provisioning, say, 10 times as large a subnet as you think you need, instead of seventy-quadrillion times as large?
Because at ten times as large, they can still trawl.
Is anyone really going to put their Windows Updates off and save money because they are comfortable that their hosts can't be found by random scanning? Is stateless auto-configuration that big a win vs DHCP?
Yes, I should have participated in the process in the 1990s. However, just because the bed is already made doesn't mean I am willing to lay my customers in it. These problems can still be fixed before IPv6 is ubiquitous and mission-critical. The easiest fix is to reset the /64 mentality which standards-zealots are clinging to.
Think you missed that particular boat a long time ago. The next ship will be departing in a hundred years or so; advance registrations for the IPv7 design committee are available over there. ... JG -- Joe Greco - sol.net Network Services - Milwaukee, WI - http://www.sol.net "We call it the 'one bite at the apple' rule. Give me one chance [and] then I won't contact you again." - Direct Marketing Ass'n position on e-mail spam (CNN) With 24 million small businesses in the US alone, that's way too many apples.
From: Joe Greco <jgreco@ns.sol.net> Date: Wed, 5 Jan 2011 21:27:14 -0600 (CST)
Think you missed that particular boat a long time ago.
The next ship will be departing in a hundred years or so, advance registration for the IPv7 design committee are available over there.
Sorry, but IPv7 has come and gone. It was assigned to the TUBA proposal, basically replacing IP with CLNP. IPv8 has also been assigned. (Don't ask, as it involved he who must not be named.)

I am amazed at the number of folks who seem to think that there is time to change IPv6 in ANY significant way. Indeed, the ship has sailed. If your network is not well along in getting ready for IPv6, you are probably well on your way to being out of business.

-- R. Kevin Oberman, Network Engineer Energy Sciences Network (ESnet) Ernest O. Lawrence Berkeley National Laboratory (Berkeley Lab) E-mail: oberman@es.net Phone: +1 510 486-8634 Key fingerprint:059B 2DDF 031C 9BA3 14A4 EADA 927D EBB3 987B 3751
"Kevin Oberman" <oberman@es.net> writes:
The next ship will be departing in a hundred years or so, advance registration for the IPv7 design committee are available over there.
Sorry, but IPv7 has come and gone. It was assigned to the TUBA proposal, basically replacing IP with CLNP. IPv8 has also been assigned. (Don't ask as it involved he who must not be named.)
In the grand tradition of list pedantry, I must correct both of these statements. :-)

IPv7 was TP/IX, which I never really learned anything about (at least nothing that I can remember) at the time.

IPv8 was PIP, which got merged with SIP to form SIPP, which as I recall evolved into IPv6. It had nothing to do with he who must not be named, but you can't figure this out by googling IPv8, as all it returns is a series of links to flights of fancy.

IPv9 was TUBA. It went down for political reasons, but in retrospect perhaps wouldn't have been such a bad thing compared to the "second system syndrome" design that we find ourselves with today (I know I'm gonna take it on the chin for making such a comment, but whatever).

10-14 are unassigned; guess we'd better get crackin', eh?

-r
On Fri, 07 Jan 2011 07:11:42 -0500 "Robert E. Seastrom" <rs@seastrom.com> wrote:
"Kevin Oberman" <oberman@es.net> writes:
The next ship will be departing in a hundred years or so, advance registration for the IPv7 design committee are available over there.
Sorry, but IPv7 has come and gone. It was assigned to the TUBA proposal, basically replacing IP with CLNP. IPv8 has also been assigned. (Don't ask as it involved he who must not be named.)
In the grand tradition of list pedantry, I must correct both of these statements. :-)
IPv7 was TP/IX, which I never really learned anything about (at least nothing that I can remember) at the time.
IPv8 was PIP, which got merged with SIP to form SIPP which as I recall evolved into IPv6. It had nothing to do with he who must not be named, but you can't figure this out by googling IPv8 as all it returns is a series of links to flights of fancy.
IPv9 was TUBA. It went down for political reasons, but in retrospect perhaps wouldn't have been such a bad thing compared to the "second system syndrome" design that we find ourselves with today (I know I'm gonna take it on the chin for making such a comment, but whatever).
10-14 are unassigned, guess we'd better get crackin, eh?
If you define a new protocol version as one where devices with older protocol generations of firmware/software may not interoperate reliably with devices with newer protocol generations, then IPv4 as we know it today is probably at least "IPv7" - address classes were a generational change requiring software/firmware updates (compare addressing in RFC 760 versus RFC 791), as was classful+subnets, and then CIDR. Regards, Mark.
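The generational change Mark mentions is concrete: under RFC 791 classful addressing, every implementation had to infer the network/host split from the leading bits of the first octet, a rule that did not exist in the flat 8-bit network numbers of RFC 760. A minimal sketch of that rule (the function name is mine, for illustration):

```python
# Classful parsing per RFC 791: leading bits of the first octet
# select the implicit network prefix length.
def classful_prefix_len(first_octet: int) -> int:
    if first_octet < 128:   # Class A: 0xxxxxxx -> /8 network
        return 8
    if first_octet < 192:   # Class B: 10xxxxxx -> /16 network
        return 16
    if first_octet < 224:   # Class C: 110xxxxx -> /24 network
        return 24
    raise ValueError("Class D/E: not a unicast network")

print(classful_prefix_len(10))    # 8   (10.0.0.0 was a Class A)
print(classful_prefix_len(172))   # 16  (172.16.0.0 was a Class B)
print(classful_prefix_len(192))   # 24  (192.168.0.0 was a Class C)
```

Subnetting and then CIDR each invalidated this hard-coded rule, which is exactly the kind of flag-day software change Mark is counting as a "version".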
If you define a new protocol version as one where devices with older protocol generations of firmware/software may not interoperate reliably with devices with newer protocol generations, then IPv4 as we know it today is probably at least "IPv7" - address classes were a generational change requiring software/firmware updates (compare addressing in RFC 760 versus RFC 791), as was classful+subnets, and then CIDR.
Regards, Mark.
I think it's defined, instead, in terms of incompatible changes to the content of the packet header. Owen
On Wed, Jan 5, 2011 at 9:39 AM, Iljitsch van Beijnum <iljitsch@muada.com> wrote:
A (relatively) easy way to avoid this problem is to either use a stateful firewall that only allows internally initiated sessions, or a filter that lists only addresses that are known to be in use.
It would certainly be nice to have a stateful firewall on every single LAN connection. Were there high-speed, stateful firewalls in 1994? Perhaps the IPng folks had this solution in mind, but left it out of the standards process. No doubt they all own stock in SonicWall and are eagerly awaiting the day when "Anonymous" takes down a major ISP every day with a simple attack that has been known to exist, but not addressed, for many years.
You must also realize that the stateful firewall has the same problems
Uh, not exactly...
as the router. It must include a list of allocated IPv6 addresses on each subnet in order to be able to ignore other traffic. While this
Uh, no it doesn't. It just needs a list of the hosts which are permitted to receive inbound connections from the outside. That's the whole point of the stateful in stateful firewall... It can dynamically allow outbound sessions and only needs to be open for hosts that are supposed to receive external session initiations. Since that list is relatively small and you probably need to maintain it anyway, I'm not really seeing a problem here.
can certainly be accomplished, it would be much easier to simply list those addresses in the router, which would avoid the expense of any product typically called a "stateful firewall." In either case, you are now maintaining a list of valid addresses for every subnet on the router, and disabling NDP for any other addresses. I agree with you, this knob should be offered by vendors in addition to my list of possible vendor solutions.
Except that routers don't (usually) have the ability to do dynamic outbound filtration which means that you have the scaling problem you've described of having to list every host on the net. If the router does have this ability, then, the router is, by definition, a stateful firewall.
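Owen's distinction can be sketched as a toy model (illustrative only; the class and addresses are hypothetical, using documentation-prefix examples): outbound traffic creates state dynamically, so only the short list of public-facing servers needs static inbound entries, rather than every host on the subnet.

```python
# Toy sketch of the stateful-firewall behavior described above.
class StatefulFirewall:
    def __init__(self, inbound_allowed):
        self.inbound_allowed = set(inbound_allowed)  # hosts offering services
        self.sessions = set()                        # dynamic outbound state

    def outbound(self, src, dst):
        self.sessions.add((dst, src))  # remember who may answer whom
        return True

    def inbound(self, src, dst):
        # Allow return traffic for a known session, or unsolicited
        # traffic only toward an explicitly listed service host.
        return (src, dst) in self.sessions or dst in self.inbound_allowed

fw = StatefulFirewall(inbound_allowed={"2001:db8::80"})
fw.outbound("2001:db8::10", "2001:db8:ffff::1")
print(fw.inbound("2001:db8:ffff::1", "2001:db8::10"))  # True  (return traffic)
print(fw.inbound("2001:db8:ffff::2", "2001:db8::99"))  # False (unsolicited)
print(fw.inbound("2001:db8:ffff::2", "2001:db8::80"))  # True  (listed server)
```

A plain router ACL has no `sessions` set, which is why it would need the full per-host list Owen describes.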
On Wed, Jan 5, 2011 at 9:39 AM, Iljitsch van Beijnum <iljitsch@muada.com> wrote:
Sparse subnets in IPv6 are a feature, not a bug. They're not going to go away.
I do not conceptually disagree with sparse subnets. With the equipment limitations of today, they are a plan to fail. Let's hope that all vendors catch up to this before malicious people/groups.
There are risks with sparse subnets that have been inadequately addressed for some of their failure modes at this time. I wouldn't go so far as saying they are a plan to fail. In most cases, most networks shouldn't be susceptible to an externally initiated ND attack in the first place because those should mostly be blocked at your border except for hosts that provide services to the larger internet. Owen
All the same, beware of the anycast addresses if you want to use a smaller block for point-to-point links; and for LANs, you break stateless autoconfig and very likely terminally confuse DHCPv6 if your prefix length isn't /64.
Breaking stateless autoconfig such that it *cannot* ever work, on my router point-to-point links, is a *feature*. Not a problem. Steinar Haug, Nethelp consulting, sthaug@nethelp.no
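For context on why the /64 boundary is baked into stateless autoconfig: SLAAC forms a 64-bit interface identifier, classically a Modified EUI-64 derived from the interface's MAC address per RFC 4291 (insert ff:fe in the middle, flip the universal/local bit). A sketch of that derivation (function name is mine; hextets are printed without leading zeros):

```python
# Modified EUI-64 interface ID from a 48-bit MAC (RFC 4291, Appendix A).
def eui64_interface_id(mac: str) -> str:
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                       # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

print(eui64_interface_id("00:25:90:ab:cd:ef"))  # 225:90ff:feab:cdef
```

With a prefix shorter or longer than /64 there is no room for this 64-bit identifier, which is why non-/64 LANs break SLAAC (as Steinar notes, sometimes deliberately).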
On Jan 5, 2011, at 1:15 PM, Jeff Wheeler wrote:
I notice that this document, in its nearly 200 pages, makes only casual mention of ARP/NDP table overflow attacks, which may be among the first real DoS challenges production IPv6 networks, and equipment vendors, have to resolve.
They also only make small mention of DNS- and broadcast-hinted scanning, and none at all of routing-hinted scanning.
It has been pointed out to me that I should have been more vocal when IPv6 was still called IPng, but in 16 years, there has been nothing done about this problem other than water-cooler talk.
Likewise. I never in my wildest dreams thought that such a bag of hurt, with all the problems of IPv4 *plus* its own inherent problems - in *hex*, no less - would end up being adopted. I was sure that the adults would step in, at some point, and get things back on a more sensible footing. Obviously, I'm the biggest idiot on the Internet, and have only my own misplaced faith in the IAB/IETF process to blame, heh.

The authors of the document also make only small mention of the dangers of extension header-driven DoS for infrastructure, but at least they mention it, which puts them ahead of most folks in this regard.

They also fail to mention the dangers represented by the consonance of the English letters 'B', 'C', 'D', and 'E'. My guess is that billions of USD in outages, misconfigurations, and avoidable security incidents will result from verbal miscommunication of these letters, yet another reason why adopting a hexadecimal numbering scheme was foolish in the extreme. Ah, well, no use crying over spilt milk.

The document itself is a good tutorial on IPv6, and it's great that the authors did indeed touch upon these security concerns, but the security aspect as a whole is seemingly deliberately understated, which does a disservice to the lay reader. One can only imagine that there were non-technical considerations which came into play.

------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
On Jan 5, 2011, at 4:39 PM, Dobbins, Roland wrote:
They also only make small mention of DNS- and broadcast-hinted scanning, and none at all of routing-hinted scanning.
I meant to include, ' . . . and the strain that this hinted scanning will place on the DNS and routing/switching infrastructure.' ------------------------------------------------------------------------ Roland Dobbins <rdobbins@arbor.net> // <http://www.arbornetworks.com>
On 01/05/2011 10:39 AM, Dobbins, Roland wrote:
The document itself is a good tutorial on IPv6, and it's great that the authors did indeed touch upon these security concerns, but the security aspect as a whole is seemingly deliberately understated, which does a disservice to the lay reader. One can only imagine that there were non-technical considerations which came into play.
That almost sounds like a conspiracy theory, let me know when it shows up on Wikileaks. :-) I think it's better to show what is broken and let vendors fix it, than to look the other way.

The only people I know actively and openly working on creating tests to find and report bugs in IPv6 protocols and software is the "THC-IPV6" project by "van Hauser". Here is an old presentation from 2005 from him:

http://media.ccc.de/browse/congress/2005/22C3-772-en-attacking_ipv6.html
http://events.ccc.de/congress/2005/fahrplan/attachments/642-vh_thcipv6_attac...

Most is still possible and not fixed to this date. And his site: http://www.thc.org/thc-ipv6

He did a new presentation at 27c3 in December 2010: http://events.ccc.de/congress/2010/Fahrplan/events/3957.en.html

A video and slides should show up on the list soon: http://media.ccc.de/tags/27c3.html (because of audio transcoding issues some videos are not online right now; if you ask me nicely I could mail a link for the video from before they took it down)

Have a nice day, Leen Besselink.
On Wed, Jan 5, 2011 at 5:34 AM, Leen Besselink <leen@consolejunkie.net> wrote:
He did a new presentation at 27c3 in december 2010:
http://events.ccc.de/congress/2010/Fahrplan/events/3957.en.html
A video and slides should show up on the list soon:
http://media.ccc.de/tags/27c3.html
(because of audio transcoding issues some videos are not online right now, if you ask me nicely I could mail a link for the video from before they took it down)
That talk is available on YouTube from the official account: http://www.youtube.com/watch?v=c7hq2q4jQYw
participants (33)
- Bill Bogstad
- David Sparro
- Dobbins, Roland
- George Bonser
- Iljitsch van Beijnum
- Jack Bates
- Jeff Kell
- Jeff Wheeler
- Jima
- Joe Greco
- Joel Jaeggli
- John Levine
- Julien Goodwin
- Justin M. Streiner
- Kevin Oberman
- Lamar Owen
- Leen Besselink
- Mark Smith
- Matthew Petach
- Mikael Abrahamsson
- mikea
- Miquel van Smoorenburg
- Mohacsi Janos
- Owen DeLong
- Phil Regnauld
- Philip Dorr
- Richard Barnes
- Robert E. Seastrom
- Seth Mattinen
- sthaug@nethelp.no
- Tim Chown
- TJ
- Valdis.Kletnieks@vt.edu