IPv6 Network Device Numbering Best Practices
We're working out our dual-stacked IPv4-IPv6 network. One issue that has recently arisen is how to number the management interfaces on the network devices themselves.

I have always been kind of partial to the idea of taking advantage of IPv6 features and letting hosts set their own addresses with EUI-64 interface numbers. The management interface on a network device is more like a "normal" host's, so I'd just as well tell the device its prefix and let it build the address itself. For IPv6, my opinion is that I'm not even going to try to remember 128-bit addresses; that's not something reasonable to expect humans to do. I'm going to depend on some name-to-number service (DNS or a hosts file), and as far as a computer goes, 2001:db8::80:abff:fe45:6789 is just as easy to remember as 2001:db8::12:34.

The other approach is to assign addresses. To me, that's more of a holdover from IPv4 thinking, but there are legitimate reasons I can think of. It's nice to have the IPv6 address tied to the configuration rather than the hardware: if you need to drop in a replacement device, you copy the configuration and no addresses change. But OTOH, others might consider it a feature that the IP follows the device rather than the role. And the real reason I think people want to do it is that they want to be able to memorize the IP addresses of "important" hosts like these.

Another option would be to do both: assign a fixed address and also let it choose EUI-64. However, I see that leading to confusion, and I'm not sure what good it would do.

Is there anything like a standard, best practice for this (yet)? What are other people doing, and their reasons? Does anyone have operational experience with what works and what does not (the "what does not" is probably really of more interest)? -- Crist J. Clark
On Wed, 2012-10-31 at 22:31 -0700, Crist J. Clark wrote:
We're working out our dual stacked IPv4-IPv6 network. One issue that recently has arisen is how to number the management interfaces on the network devices themselves. [...] Is there anything like a standard, best practice for this (yet)?
Yes and no. It's only best practice when enough people have done it, and enough people have done *different, bad* things, for the practice to emerge as, in general, best. I don't think enough people have done either of those things yet. There are documents floating about that purport to describe best practice; I've never read one I really agreed with - in particular, they have a tendency to recommend overloading address bits with non-address information. So I think you should listen to lots of people, then go about making your own educated decisions, and thus become part of the adventure of creating best practice - either by being a beacon of wonderfulness to us all, or by crashing and burning so that we can say, in hushed voices as we pass by where you and your network used to be, "don't do that" :-)
What are other people doing and their reasons? Anyone have operational experience with what works and what does not (and the "what does not" is probably really of more interest)?
I espouse four principles (there are others, but these are the biggies):

- don't overload address bits with non-addressing information
- keep the network as flat as reasonably possible
- avoid tying topology to geography
- avoid exceptions

The first can be completely avoided and should be an ironclad rule IMHO. Aggregation requirements will mean the third is always broken eventually, and Murphy's Law will break the fourth. However, by following each rule as far as possible and delaying the point where it must be broken, you will end up with a more flexible, future-proof, error-proof, extensible address plan.

I too would be really interested in whatever wisdom others have developed, even if (especially if!) it doesn't agree with mine.

Regards, K.
-- Karl Auer (kauer@biplane.com.au) http://www.biplane.com.au/kauer http://www.biplane.com.au/blog GPG fingerprint: AE1D 4868 6420 AD9A A698 5251 1699 7B78 4EEE 6017 Old fingerprint: DA41 51B1 1481 16E1 F7E2 B2E9 3007 14ED 5736 F687
On 11/1/12, Karl Auer <kauer@biplane.com.au> wrote:
I espouse four principles (there are others, but these are the biggies):
Sounds like what is suggested is anti-practices, rather than affirmative practices. I would suggest slightly differently. Complexity results in failure modes that are difficult to predict, so:

- Keep the addressing design as simple as possible, with as few "interesting things" or distinctions as possible (such as multiple different prefix lengths for different nets, different autoconfig methods, different host IDs for default gateways, or unique numbering schemes for different network or host types), without omitting requirements or overall opportunities for efficient, reliable network operations.

- Keep addressing complexity in addressing. E.g. addressing may be simpler with a flat network, but don't use that as an excuse to relocate twice the cost of the addressing complexity into switching infrastructure/routing design, with scalability limits that will foreseeably be reached. Don't implement carrier-grade NAT just because it ensures the user's default gateway is always 192.168.1.1. Ensure the simplicity and benefits of the whole are maximized, not those of the individual design elements.
- don't overload address bits with non-addressing information
You suggest building networks with address bits that contain only addressing information. It sounds like an IPv4 principle whose days are done: that addressing bits are precious, so don't waste a single bit encoding extra information. If encoding additional information can provide something worthwhile, then it should be considered. I'm not sure what exactly would be worthwhile to encode, but something such as a POP number, serving router, label/tag ID, security level, type of network, or hash of a circuit ID might be a potential contender for some network operators to encode in portions of the network ID for some networks - specifically point-to-point networks, to which a /48 end site numbered differently may be routed.
- keep the network as flat as reasonably possible
You are suggesting the avoidance of multiple networks, preferring instead large single IP subnets for large areas, whenever possible? IPv6 has not replaced Ethernet. Bottlenecks such as unknown-unicast flooding, broadcast-domain chatter, and scalability limits still exist with IPv6 over Ethernet, and ample subnet IDs are available. With IPv6, there are more reasons than ever to avoid flat networks, even in cases where a flat network might be an option. My suggestion would be:

- Avoid flat networks; implement segmentation. Make new subnets whenever possible, and whenever the reliability, security, and serviceability gains exceed the basic configuration and continuing management costs. Make flat networks when the benefits are limited, or when segmented networks are not possible or are too expensive due to poorly designed hardware/software (e.g. software requiring a flat network between devices that should be segmented).
- avoid tying topology to geography
It sounds like IPv4 thinking again --- avoid creating additional topology for geographic locations, in order to conserve address space.
- avoid exceptions
Consistency is something good designs should strive for, when it can be achieved without major risks, costs, or technical sacrifices exceeding the value of that consistency.
I too would be really interested in whatever wisdom others have developed, even if (especially if!) it doesn't agree with mine.
Regards, K. -- -JH
On Sun, 2012-11-04 at 13:26 -0600, Jimmy Hess wrote:
On 11/1/12, Karl Auer <kauer@biplane.com.au> wrote:
I espouse four principles (there are others, but these are the biggies):
Sounds like what is suggested is anti-practices, rather than suggesting affirmative practices. I would suggest slightly differently.
I agree that positive is best, but my rules can be expressed in a few phrases. There is value in being concise.
You suggest building networks with address bits that contain only addressing information. It sounds like an IPv4 principle whose days are done; that, addressing bits are precious, so don't waste a single bit encoding extra information.
Address bits are not precious; subnetting bits are. We have moved, with IPv6, from conserving address space to conserving subnet space. Your address space in a leaf subnet numbers in the billions of billions; your subnetting space in a typical /48 site numbers in the tens of thousands. By the way, there are two very different kinds of subnet - the leaf subnet that actually contains hosts, and the structural subnet that divides your network up. I'm talking about structural subnetting.
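To put rough numbers on that distinction (a quick back-of-the-envelope check, not from the original post; the /48-site, /64-leaf split is the typical plan assumed here):

```python
# Subnet space vs. address space under a typical IPv6 plan:
# a /48 site carved into /64 leaf subnets.
site_prefix_len = 48
leaf_prefix_len = 64

structural_subnets = 2 ** (leaf_prefix_len - site_prefix_len)
addresses_per_leaf = 2 ** (128 - leaf_prefix_len)

print(structural_subnets)   # 65536 -- "tens of thousands"
print(addresses_per_leaf)   # 18446744073709551616 -- "billions of billions"
```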
If encoding additional information can provide something worthwhile, then it should be considered.
Of course. But *as a rule* it should not be done. Only you can weigh the benefits against the downsides. Remember I said these were rules for students; I'm not going to tell a competent professional what to do. I personally would however strenuously avoid overloading address bits merely for the sake of human convenience or readability. Systems with no necessary technical links inevitably diverge; if you have encoded one in the other, the latter will eventually start lying to you. You will have to work to keep the two things in sync, but if they diverge enough that may not be possible, and you end up with one system containing the irrelevant, misleading remnants of another.
- keep the network as flat as reasonably possible
You are suggesting the avoidance of multiple networks, preferring instead large single IP subnets for large areas, whenever possible?
No, that's not what I'm suggesting. See above for the distinction I make between leaf and structural subnets. I am suggesting keeping the network structure as flat as possible.
- Avoid flat networks; implement segmentation.
Yes at the edge, no in the network structure.
- avoid tying topology to geography
It sounds like IPv4 thinking again --- avoid creating additional topology for geographic locations, in order to conserve address space.
Conserving address space is NOT one of my goals, though conserving subnet space is - without being foolishly parsimonious about it. As you say, there is ample subnet space, but it's not nearly as ample as address space.

I use "geography" in the broadest sense of "the physical world". The reason for avoiding tying your network topology to geography is that networks move and flex all the time. So does the physical world, in a way - companies buy extra buildings, occupy more floors, open new state branches, move to new buildings with different floor plans. New administrative divisions arise, old ones disappear, divisions merge. Building your topology on these shifting sands means that every time they change, either your address schema moves further away from reality, or you spend time adjusting it to the new reality. If you chose poorly at the start, it may not be possible to adjust it easily.

Again, I can't second-guess what physical constructs you will want or need to mirror in your network topology, but I can say with confidence that you should avoid doing so unless absolutely technically necessary.

These rules are not really IPv6-specific. It's just that in the IPv4 world, we basically can't follow them, because the technical requirements of the cramped address space drive us towards particular, unavoidable solutions. With IPv6, the rules gain new currency because we *can* mostly follow them.

Regards, K.

PS: I've put a lot of this, word for word, in my blog at www.biplane.com.au/blog
-- Karl Auer (kauer@biplane.com.au)
On Thu, Nov 1, 2012 at 7:31 AM, Crist J. Clark <pumpky@sonic.net> wrote:
We're working out our dual stacked IPv4-IPv6 network. One issue that recently has arisen is how to number the management interfaces on the network devices themselves.
I have always been kind of partial to the idea of taking advantage IPv6 features and letting hosts set their own addresses with EUI-64 interface numbers. For the management interface on a network device, it's more like a "normal" host. I'd just as well tell the device its prefix, and let it build the address itself. For IPv6, my opinion is that I'm not even going to try to remember 128-bit addresses. It's not something reasonable to expect humans to do. I'm going to depend on some name-to-number service (DNS or a hosts file), and as far as a computer goes, 2001:db8::80:abff:fe45:6789 is just as easy to remember as, 2001:db8::12:34.
The other approach is to assign addresses. To me, that's more of a hold over from IPv4 thinking, but there are legitimate reasons I can think of. It's nice to have the IPv6 address tied to the configuration rather than the hardware. If you need to drop in a replacement device, you copy the configuration and no addresses change. But OTOH, others might consider it a feature that the IP follows the device rather than the role. And the real reason I think people want to do it is that they want to be able to memorize IP addresses of "important" hosts like these.
For simplicity, and out of a wish to keep a mapping to our IPv4 addresses, each device (router/server/firewall) has a static IPv6 address that has the same last digits as the IPv4 address; only the subnet is changed. You can say it's an IPv4 thinking model, but it's easier to remember that if the fileserver is at 192.168.10.10, then its IPv6 counterpart address would be 2001:abcd::192:168:10:10 (each subnet being a /64).
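This scheme can be sketched mechanically: reuse the dotted-quad digits verbatim as four hex groups under the site prefix. The prefix 2001:abcd and the helper name below are illustrative, not from the post:

```python
import ipaddress

def mirror_v4_in_v6(site_prefix: str, v4: str) -> ipaddress.IPv6Address:
    """Reuse the IPv4 dotted-quad digits verbatim as four IPv6 groups.

    Note the resulting groups are parsed as *hex*, so 192.168.10.10 and
    ::192:168:10:10 are not the same 32-bit value -- this is purely a
    human-memorability trick.
    """
    return ipaddress.IPv6Address(site_prefix + "::" + v4.replace(".", ":"))

addr = mirror_v4_in_v6("2001:abcd", "192.168.10.10")
print(addr)  # 2001:abcd::192:168:10:10
```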
Another option would be to do both. Assign a fixed address and also let it chose EUI-64. However, I see that leading to confusion. Not sure what good it would do.
Is there anything like a standard, best practice for this (yet)? What are other people doing and their reasons? Anyone have operational experience with what works and what does not (and the "what does not" is probably really of more interest)?
Letting the host choose its own IP can be very tricky and has operational hurdles along the way, as it's not that easy to copy configurations across devices during upgrades and maintenance swap-outs.
Eugeniu Patrascu wrote:
You can say it's a IPv4 thinking model, but it's easier to remember that if the fileserver it's at 192.168.10.10 then it's IPv6 counterpart address would be 2001:abcd::192:168:10:10 (each subnet being a /64)
That is a clever idea, except that it cannot always follow the modified EUI-64 format of RFC 4291. We should instead introduce a partially decimal format for IPv6 addresses or, better, avoid IPv6 entirely. Masataka Ohta
Though 2001:abcd::192:168:10:10 was written in a format mixing : with decimal digits, I think we could take the concept mentioned above and extend it, either by making it 2001:abcd::C0:A8:0A:0A or 2001:abcd::C0A8:0A0A. Doing the latter wastes less space and lets the host use the upper 32 bits of the host portion for vhosts. Ex: 2001:abcd::1:C0A8:0A0A. It should be easy enough, as something like pxelinux already squishes your v4 address down to do file searching on TFTP servers. Ex: /mybootdir/pxelinux.cfg/C000025B for 192.0.2.91. I personally have used this "use the last 32 bits of the v6 address for the v4 dual-stack address" trick and was happy with it. It still fits with the concept of "give each host a /64". On Thu, Nov 1, 2012 at 8:20 AM, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
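The pxelinux squish and the "last 32 bits" trick can both be sketched like this (the helper names are mine, not from the post):

```python
import ipaddress

def pxelinux_name(v4: str) -> str:
    """Squish a dotted quad into pxelinux's 8-digit uppercase hex filename."""
    return "%08X" % int(ipaddress.IPv4Address(v4))

def dualstack_v6(leaf_prefix: str, v4: str) -> ipaddress.IPv6Address:
    """Embed the 32-bit IPv4 address in the low 32 bits of the host's /64."""
    net = ipaddress.IPv6Network(leaf_prefix)
    assert net.prefixlen <= 96, "need at least 32 free host bits"
    return ipaddress.IPv6Address(
        int(net.network_address) | int(ipaddress.IPv4Address(v4))
    )

print(pxelinux_name("192.0.2.91"))                      # C000025B
print(dualstack_v6("2001:abcd::/64", "192.168.10.10"))  # 2001:abcd::c0a8:a0a
```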
Eugeniu Patrascu wrote:
You can say it's a IPv4 thinking model, but it's easier to remember that if the fileserver it's at 192.168.10.10 then it's IPv6 counterpart address would be 2001:abcd::192:168:10:10 (each subnet being a /64)
That is a clever idea, except that it cannot always follow the modified EUI-64 format of RFC 4291.
We should better introduce partially decimal format for IPv6 addresses or, better, avoid IPv6 entirely.
Masataka Ohta
-- Zach Giles zgiles@gmail.com
On 01/11/2012 12:20, Masataka Ohta wrote:
We should better introduce partially decimal format for IPv6 addresses or, better, avoid IPv6 entirely.
No we shouldn't. Text representations of IPv6 addresses are already a complete pain to parse. We don't need to add to this pain by adding a new format which gains us nothing in particular over existing schemas such as that suggested by Eugeniu. Nick
On Nov 1, 2012, at 06:06 , Nick Hilliard <nick@foobar.org> wrote:
On 01/11/2012 12:20, Masataka Ohta wrote:
We should better introduce partially decimal format for IPv6 addresses or, better, avoid IPv6 entirely.
No we shouldn't. Text representations of IPv6 addresses are already a complete pain to parse. We don't need to add to this pain by adding a new format which gains us nothing in particular over existing schemas such as that suggested by Eugeniu.
Nick
I agree with you that we shouldn't introduce a partially decimal format, but I don't see why you say IPv6 addresses are difficult to parse.

1. Tokenize (on : boundaries).
2. If n(tokens) < 8, expand the null token to 9-n tokens (result: 8 total tokens).
3. Parse tokens left to right as 16-bit hex numbers, such that accumulator a accumulates each token t as follows: a <<= 16; a |= t.

The only exception to this parsing would be if someone handed you a textual representation of an IPv4-mapped address (::ffff:192.0.2.50), which essentially represents the partial decimal format Masataka is requesting. You really shouldn't need to parse these, and it's perfectly valid to reject them as invalid input. This really is an output-only format to describe an IPv4 connection being mapped to an IPv6 socket with IPV6_V6ONLY=false in the socket options. These addresses should never appear on the wire. Internally, the software sees them as any other 128-bit integer, and only the UI presentation of these numbers for display should ever use that format. That format is used as a convenience for user display because it allows the user to readily identify the IPv4 address of the connection rather than having to convert the hex to decimal to know what host is involved.

Finally, at this point, if you're feeling like you have to write your own IP address parser, you're probably doing something wrong. PLEASE PLEASE PLEASE use the standard libraries whenever possible: inet_pton, getaddrinfo, etc. if you are using C. In Perl, you have these, plus Net::IP as well, which provides extensive IP address parsing and manipulation capabilities. There are similar library functions for virtually every other language at this point as well.

Owen
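The three steps can be sketched in a few lines - a toy, not a production parser (no scope IDs, and per the advice above it rejects embedded dotted-quads); the cross-check against the standard library is the real recommendation:

```python
import socket

def parse_ipv6(text: str) -> int:
    """Toy parser following the three steps above; rejects mapped forms."""
    if "." in text:
        raise ValueError("IPv4-mapped textual form rejected as input")
    # Step 1: tokenize on ':' boundaries ('::' marks the null token).
    if "::" in text:
        left, right = text.split("::", 1)
        ltoks = left.split(":") if left else []
        rtoks = right.split(":") if right else []
        # Step 2: expand the null token so we end up with 8 tokens total.
        toks = ltoks + ["0"] * (8 - len(ltoks) - len(rtoks)) + rtoks
    else:
        toks = text.split(":")
    if len(toks) != 8:
        raise ValueError("wrong number of groups")
    # Step 3: accumulate left to right as 16-bit hex numbers.
    a = 0
    for t in toks:
        a = (a << 16) | int(t, 16)
    return a

# Cross-check against the standard library, as recommended above.
ref = int.from_bytes(socket.inet_pton(socket.AF_INET6, "2620:0:930::200:2"), "big")
assert parse_ipv6("2620:0:930::200:2") == ref
```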
Hi Owen,
You really shouldn't need to parse these and it's perfectly valid to reject them as invalid input. This really is an output only format [...]
I don't agree. I think it's actually the other way around. It's a valid representation of an IPv6 address, so you should be able to parse them. You don't need to be able to output them, though.
Finally, at this point, if you're feeling like you have to write your own IP address parser, you're probably doing something wrong. PLEASE PLEASE PLEASE use the standard libraries whenever possible.
Definitely +1 here! Sander
On 2012-11-01, at 10:27, Sander Steffann <sander@steffann.nl> wrote:
You really shouldn't need to parse these and it's perfectly valid to reject them as invalid input. This really is an output only format [...]
I don't agree. I think it's actually the other way around. It's a valid representation of an IPv6 address, so you should be able to parse them. You don't need to be able to output them, though.
The active advice from the IETF on this topic would seem to be RFC 4291 as updated by RFC 5952. RFC 5952 specifies (in section 5) that the least-significant 32 bits MAY be written in dotted-quad notation if "it is known by some external method that a given prefix is used to embed IPv4". People who make use of a general-purpose v6 addressing plan which incorporates mapped v4 addresses in the lower 32 bits fit clearly into this category, I would think. 5952 is silent on the distinction between parsing such addresses and using them in output. I don't see any justification in the standards for rejecting v4-mapped addresses on input. For what that's worth. I agree that this adds a step to input validation, and that using standard libraries for this stuff is a good idea. Joe
Looks like it's down again....

From ge0-1-v201.r2.mst1.proxility.net (77.93.64.146) icmp_seq=1 Destination Host Unreachable

Now that could be through a filter... however:

--2012-11-04 11:07:25-- http://www.mail-abuse.org/
Resolving www.mail-abuse.org... 150.70.74.99
Connecting to www.mail-abuse.org|150.70.74.99|:80... failed: No route to host.

The trace itself ends at my own provider's gateway...
MAPS was taken over by Trend Micro years back; maybe they just retired the old domain? --srs (htc one x) On Nov 4, 2012 4:14 PM, "Alexander Maassen" <outsider@scarynet.org> wrote:
Looks like it's down again....
From ge0-1-v201.r2.mst1.proxility.net (77.93.64.146) icmp_seq=1 Destination Host Unreachable
Now that could be through a filter... however:
--2012-11-04 11:07:25-- http://www.mail-abuse.org/ Resolving www.mail-abuse.org... 150.70.74.99 Connecting to www.mail-abuse.org|150.70.74.99|:80... failed: No route to host.
trace itself ends at my own providers gateway...
from the website: This website has been moved to http://ers.trendmicro.com. Please update your bookmarks with this URL. On Sun, Nov 4, 2012 at 2:47 AM, Suresh Ramasubramanian <ops.lists@gmail.com> wrote:
Maps was taken over by trend micro years back, maybe they just retired the old domain?
On 01-Nov-2012, Owen DeLong <owen@delong.com> sent:
The only exceptions to this parsing would be if someone handed you a textual representation of an IPv4 mapped address (::ffff:192.0.2.50), which essentially represents the partial decimal format Masataka is requesting.
I might be missing something here, but isn't that format already valid for any IPv6 address, not just the special v4-in-v6 representation?
>>> import socket
>>> p = '2001:abcd::192.16.10.10'
>>> n = socket.inet_pton(socket.AF_INET6, p)
>>> socket.inet_ntop(socket.AF_INET6, n)
'2001:abcd::c010:a0a'
Or is the issue just the ntop part not giving you back the decimalized string? -- Chip Marshall <chip@2bithacker.net> http://weblog.2bithacker.net/ KB1QYW PGP key ID 43C4819E v4sw5PUhw4/5ln5pr5FOPck4ma4u6FLOw5Xm5l5Ui2e4t4/5ARWb7HKOen6a2Xs5IMr2g6CM
On Nov 1, 2012, at 10:43 , Chip Marshall <chip@2bithacker.net> wrote:
On 01-Nov-2012, Owen DeLong <owen@delong.com> sent:
The only exceptions to this parsing would be if someone handed you a textual representation of an IPv4 mapped address (::ffff:192.0.2.50), which essentially represents the partial decimal format Masataka is requesting.
I might be missing something here, but isn't that format already valid for any IPv6 address, not just the special v4-in-v6 representation?
>>> import socket
>>> p = '2001:abcd::192.16.10.10'
>>> n = socket.inet_pton(socket.AF_INET6, p)
>>> socket.inet_ntop(socket.AF_INET6, n)
'2001:abcd::c010:a0a'
Or is the issue just the ntop part not giving you back the decimalized string?
That's not a problem, and I certainly wouldn't expect it to do so. I guess the silly notation is more widely supported than I thought, but, IMHO, it's kind of a poor choice of syntax outside of the limited use of displaying IPv4-mapped addresses for dual-stack sockets handling IPv4 connections in IPv6 format. Owen
On Thu, 2012-11-01 at 07:07 -0700, Owen DeLong wrote:
I agree with you that we shouldn't introduce partially decimal format, but I don't see why you say IPv6 addresses are difficult to parse.
They are not simple to parse, but not particularly difficult either.
1. Tokenize (on : boundaries). 2. If n(tokens) < 8, expand null token to 9-n tokens.
It's a bit harder than that. You need to deal with the positioning of the "::", which may be at the beginning or end. Scope identifiers need to be handled. On output, you need to handle the requirements of RFC 5952.

You really shouldn't need to parse [mapped addresses] and it's perfectly valid to reject them as invalid input.

No, it's not OK to reject them. You can't just say they are invalid; they are not.
Finally, at this point, if you're feeling like you have to write your own IP address parser, you're probably doing something wrong. PLEASE PLEASE PLEASE use the standard libraries whenever possible.
Definitely oh very yes! That said, I have had to write my own three times now, because of errors or inadequacies in the existing parsers, and I can confidently say it is slightly tricky, but not hard. The key, the essential and vital thing, is to unit test that sucker until it is gasping and limp.
There are similar library functions for virtually every other language at this point as well.
Java's is broken, for a start. I have had to replace it for literals, because it doesn't compress for output, and because it treats a mapped IPv4 address as an IPv4 address! It's also hard to do some operations on InetAddress objects. I still use InetAddress where actual names are concerned, so as not to duplicate the Java DNS functionality. Unfortunately, Java appears to not properly prefer IPv6 addresses. There is allegedly a system property to control that, but it is either documented incorrectly or just doesn't work.

Regards, K.
-- Karl Auer (kauer@biplane.com.au)
On Nov 1, 2012, at 4:52 PM, Karl Auer <kauer@biplane.com.au> wrote:
On Thu, 2012-11-01 at 07:07 -0700, Owen DeLong wrote:
I agree with you that we shouldn't introduce partially decimal format, but I don't see why you say IPv6 addresses are difficult to parse.
They are not simple to parse, but not particularly difficult either.
1. Tokenize (on : boundaries). 2. If n(tokens) < 8, expand null token to 9-n tokens.
It's a bit harder than that. You need to deal with the positioning of the "::", which may be at the beginning or end. Scope identifiers need to be handled. On output, you need to handle the requirements of RFC 5952.
Expanding the :: assumed expanding it in place. That's all you need to do to deal with the positioning of it. It can occur anywhere, not just at the beginning or end, as in 2620:0:930::200:2 which is, btw, also equivalent to 2620::930:0:0:0:200:2.
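That equivalence is easy to verify with the standard library (a quick check; Python is used here only because it wraps inet_pton):

```python
import socket

# "::" may expand anywhere in the address, not just at the ends:
# both spellings below denote the same 128-bit value.
a = socket.inet_pton(socket.AF_INET6, "2620:0:930::200:2")
b = socket.inet_pton(socket.AF_INET6, "2620::930:0:0:0:200:2")
assert a == b
print("equivalent")
```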
You really shouldn't need to parse [mapped addresses] and it's perfectly valid to reject them as invalid input.
No, it's not OK to reject them. You can't just say they are invalid, they are not.
Yes, it was pointed out to me that for some silly reason passing understanding, that syntax is supported. It's absurd, but supported. Sigh Probably we should deprecate it as it really doesn't make sense to use it that way. Owen
* Owen DeLong
Yes, it was pointed out to me that for some silly reason passing understanding, that syntax is supported. It's absurd, but supported. Sigh
Probably we should deprecate it as it really doesn't make sense to use it that way.
It absolutely does make sense, especially in the case of IPv4/IPv6 translation. For example, when using NAT64, "64:ff9b::192.0.2.33" is an example of a valid IPv6 address that maps to 192.0.2.33. It is much easier for a human to relate to than "64:ff9b::c000:221" is. Similarly, when using SIIT, the same syntax may be used in firewall rules or ACLs. So if you want to open, say, the SSH port from a trusted IPv4 address 192.0.2.33 on the far side of the SIIT gateway to your IPv6 server, it's much easier to open it for "64:ff9b::192.0.2.33", and it will also make your ACL much more readable to the next guy who comes along than if you had used "64:ff9b::c000:221". Also see RFC 6052 section 2.4. -- Tore Anderson Redpill Linpro AS - http://www.redpill-linpro.com/
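The NAT64 example can be reproduced with stock tooling; this sketch embeds an IPv4 address in the low 32 bits of the RFC 6052 well-known prefix 64:ff9b::/96 (the helper name is mine, not from the post):

```python
import ipaddress

def nat64_map(prefix96: str, v4: str) -> ipaddress.IPv6Address:
    """RFC 6052-style embedding: IPv4 in the low 32 bits of a /96 prefix."""
    p = ipaddress.IPv6Network(prefix96)
    assert p.prefixlen == 96, "NAT64 embedding shown here assumes a /96"
    return ipaddress.IPv6Address(
        int(p.network_address) | int(ipaddress.IPv4Address(v4))
    )

addr = nat64_map("64:ff9b::/96", "192.0.2.33")
print(addr)  # 64:ff9b::c000:221

# The dotted-quad spelling parses to the very same address:
assert addr == ipaddress.IPv6Address("64:ff9b::192.0.2.33")
```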
On Nov 2, 2012, at 02:52 , Tore Anderson <tore.anderson@redpill-linpro.com> wrote:
* Owen DeLong
Yes, it was pointed out to me that for some silly reason passing understanding, that syntax is supported. It's absurd, but supported. Sigh
Probably we should deprecate it as it really doesn't make sense to use it that way.
It absolutely does make sense, especially in the case of IPv4/IPv6 translation. For example, when using NAT64, "64:ff9b::192.0.2.33" is an example of a valid IPv6 address that maps to 192.0.2.33. Much easier to relate to for a human than "64:ff9b::c000:221" is.
But there are two situations where you'd use that for NAT64...

1. Presentation of electronic information to the end user, where it's virtually impossible for the system to know whether that's a NAT64 address representing an IPv4 remote end or an IPv6 address - so how do you know when to represent it that way to the end user?

2. As a destination specifier (in which case the system most likely got the address as a binary return from DNS64 and doesn't present it to the end user prior to sending the request off to the remote server; and even if it did, again, it doesn't likely have a way to know when to use that notation).

Sure, there's the third possibility that an end user is hand-typing an IPv6-encoded IPv4 address to go through a translator, but I think, for a variety of reasons, that behavior should be relatively strongly discouraged, no?
Similarly, when using SIIT, the same syntax may be used in firewall rules or ACLs. So if you want to open, say, the SSH port from a trusted IPv4 address 192.0.2.33 on the far side of the SIIT gateway to your IPv6 server, it's much easier to open for "64:ff9b::192.0.2.33", and it will also make your ACL much more readable to the next guy that comes along than if you had used "64:ff9b::c000:221".
SIIT is a really bad idea. Codifying it into a firewall only makes things worse.
Also see RFC 6052 section 2.4.
RFCs contain all kinds of embodiments and documentation of bad ideas that should have been deprecated long ago. Use of this notation for parsing, rather than as an output-only format, is just another example. Yes, once upon a time, lumping lots of v4-ness into IPv6 to try to create some impression of backwards compatibility seemed like a good idea. A couple of decades of experimentation and operational experience have now taught us that it doesn't work out as well as one might have hoped. Time to move on. Owen
* Owen DeLong
On Nov 2, 2012, at 02:52 , Tore Anderson <tore.anderson@redpill-linpro.com> wrote:
It absolutely does make sense, especially in the case of IPv4/IPv6 translation. For example, when using NAT64, "64:ff9b::192.0.2.33" is an example of a valid IPv6 address that maps to 192.0.2.33. Much easier to relate to for a human than "64:ff9b::c000:221" is.
But there are two situations where you'd use that for NAT64...
1. Presentation of electronic information to the end user, where it's virtually impossible for the system to know whether that's a NAT64 address representing an IPv4 remote end or an IPv6 address, so, how do you know when to represent it that way to the end user?
2. As a destination specifier (in which case the system most likely got the address as a binary return from DNS64 and doesn't present it to the end user prior to sending the request off to the remote server, and even if it did, again, doesn't likely have a way to know when to use that notation).
There's also the case of including NAT64 support directly in an application. Not all applications use DNS, and therefore cannot use DNS64 either, e.g., applications that pass around IP literals in their payload (p2p stuff like BitTorrent and Skype comes to mind). However, by discovering the NAT64 prefix in some other way (see draft-ietf-behave-nat64-discovery-heuristic), they can construct a usable destination specifier by simple string concatenation. It wouldn't be the only way to do it, but it's certainly a simple and easy-to-understand approach.
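The concatenation trick described here can be sketched with Python's `ipaddress` module (the well-known prefix and the example IPv4 literal are from RFC 6052's documentation addresses; this is an illustration, not any particular application's code):

```python
import ipaddress

# Well-known NAT64 prefix (RFC 6052) plus an example IPv4 literal.
nat64_prefix = "64:ff9b::"
v4_literal = "192.0.2.33"

# Simple string concatenation yields a valid IPv6 literal...
human_friendly = nat64_prefix + v4_literal

# ...which parses to the very same 128 bits as the all-hex form.
assert ipaddress.IPv6Address(human_friendly) == ipaddress.IPv6Address("64:ff9b::c000:221")
```

The dotted-quad tail is accepted by any parser that implements the standard IPv6 text representation, which is what makes the trick convenient for both applications and operators.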
Sure, there's the third possibility that an end-user is hand-typing an IPv6-encoded IPv4 address to go through a translator, but, I think for a variety of reasons, that behavior should be relatively strongly discouraged, no?
That wouldn't be an end-user-friendly application, no. However, for network operators, dealing with IP literals is common when debugging or doing other work. I frequently use the dotted-quad syntax when working on our NAT64 installation, and find it very convenient.
Similarly, when using SIIT, the same syntax may be used in firewall rules or ACLs. So if you want to open, say, the SSH port from a trusted IPv4 address 192.0.2.33 on the far side of the SIIT gateway to your IPv6 server, it's much easier to open for "64:ff9b::192.0.2.33", and it will also make your ACL much more readable to the next guy that comes along than if you had used "64:ff9b::c000:221".
SIIT is a really bad idea.
I disagree. In my opinion, SIIT is perfectly suited for data centres. But let's not take that discussion here - I'll be submitting a draft about the use case to the IETF in a few days, and I hope you'll read it and participate in the discussion on the v6ops or sunset4 list (not yet sure which one it'll be).
Also see RFC 6052 section 2.4.
RFCs contain all kinds of embodiments and documentation of bad ideas that should have been deprecated long ago.
Certainly. There is, however, a mechanism for deprecating RFCs: either you can move them to Historic status, or you can obsolete them by writing new ones. You, or anyone else who doesn't like the ::0.0.0.0 syntax, are free to do so. Until that happens, though, it will remain part of the standard and you cannot reject it out of hand just because you don't like it. -- Tore Anderson Redpill Linpro AS - http://www.redpill-linpro.com/
On Nov 3, 2012, at 04:19 , Tore Anderson <tore.anderson@redpill-linpro.com> wrote:
* Owen DeLong
On Nov 2, 2012, at 02:52 , Tore Anderson <tore.anderson@redpill-linpro.com> wrote:
It absolutely does make sense, especially in the case of IPv4/IPv6 translation. For example, when using NAT64, "64:ff9b::192.0.2.33" is an example of a valid IPv6 address that maps to 192.0.2.33. Much easier to relate to for a human than "64:ff9b::c000:221" is.
But there are two situations where you'd use that for NAT64...
1. Presentation of electronic information to the end user, where it's virtually impossible for the system to know whether that's a NAT64 address representing an IPv4 remote end or an IPv6 address, so, how do you know when to represent it that way to the end user?
2. As a destination specifier (in which case the system most likely got the address as a binary return from DNS64 and doesn't present it to the end user prior to sending the request off to the remote server, and even if it did, again, doesn't likely have a way to know when to use that notation).
There's also the case of including NAT64 support directly in an application. Not all applications use DNS, and therefore cannot use DNS64 either, e.g., applications that pass around IP literals in their payload (p2p stuff like BitTorrent and Skype comes to mind). However, by discovering the NAT64 prefix in some other way (see draft-ietf-behave-nat64-discovery-heuristic), they can construct a usable destination specifier by simple string concatenation. It wouldn't be the only way to do it, but it's certainly a simple and easy-to-understand approach.
But if the application is doing the construction, then it's doing it with binary and not with a string representation of the address. The binary 128 bits don't change. We're talking about the format of a string representing those 128 bits.
Sure, there's the third possibility that an end-user is hand-typing an IPv6-encoded IPv4 address to go through a translator, but, I think for a variety of reasons, that behavior should be relatively strongly discouraged, no?
That wouldn't be an end-user-friendly application, no. However, for network operators, dealing with IP literals is common when debugging or doing other work. I frequently use the dotted-quad syntax when working on our NAT64 installation, and find it very convenient.
Fair point.
Similarly, when using SIIT, the same syntax may be used in firewall rules or ACLs. So if you want to open, say, the SSH port from a trusted IPv4 address 192.0.2.33 on the far side of the SIIT gateway to your IPv6 server, it's much easier to open for "64:ff9b::192.0.2.33", and it will also make your ACL much more readable to the next guy that comes along than if you had used "64:ff9b::c000:221".
SIIT is a really bad idea.
I disagree. In my opinion, SIIT is perfectly suited for data centres. But let's not take that discussion here - I'll be submitting a draft about the use case to the IETF in a few days, and I hope you'll read it and participate in the discussion on the v6ops or sunset4 list (not yet sure which one it'll be).
What do you get from SIIT that you don't get from dual stack in a datacenter?
Also see RFC 6052 section 2.4.
RFCs contain all kinds of embodiments and documentation of bad ideas that should have been deprecated long ago.
Certainly. There is, however, a mechanism for deprecating RFCs: either you can move them to Historic status, or you can obsolete them by writing new ones. You, or anyone else who doesn't like the ::0.0.0.0 syntax, are free to do so. Until that happens, though, it will remain part of the standard and you cannot reject it out of hand just because you don't like it.
Fair point, however, I can certainly discourage its use. Frankly, lots of stuff from RFCs gets rejected out of hand by various implementers all the time. Owen
* Owen DeLong
What do you get from SIIT that you don't get from dual stack in a datacenter?
In no particular order:

- Single stack is much simpler than dual stack. A single stack to configure, a single ACL to write, a single service address to monitor; staff needs to know only a single protocol, development staff needs only to develop and do QA for a single protocol; it's a single topology to document, a single IGP to run and monitor, a single protocol to debug and troubleshoot, one less attack vector for the bad guys, and so on. I have a strong feeling that the reason why dual stack failed so miserably as a transition mechanism was precisely that it adds significant complexity and operational overhead compared to single stack.

- IPv4 address conservation. If you're running out of IPv4 addresses, you cannot use dual stack, as dual stack does nothing to reduce your dependency on IPv4 compared to single-stack IPv4. With dual stack, you'll be using (at least) one IPv4 address per server, plus a bit of overhead due to the server LAN prefixes needing to be rounded up to the nearest power of two (or higher if you want to accommodate future growth), plus overhead due to the network infrastructure. With SIIT, on the other hand, you'll be using a single IPv4 address per publicly available service - one /32 out of a pool, with nothing going to waste due to aggregation, network infrastructure, and so on.

- Promotes first-class native IPv6 deployment. Not that dual stack isn't native IPv6 too, but I do have the impression that often, IPv6 in a dual-stacked environment is a second-class citizen. IPv6 might be only partially deployed, not monitored as well as IPv4, or there are architectural dependencies on IPv4 in the application stack, so that you cannot just shut off IPv4 and expect things to continue to work fine on IPv6 only. With SIIT, you get only a single, first-class citizen - IPv6. And it'll be the only IPv6 migration/transition/deployment project you'll ever have to do. When the time comes to discontinue support for IPv4, you just remove your IN A records and shut down the SIIT gateway(s); there will be no need to touch the application stack at all.

As I said earlier, I will submit an IETF draft about the use case shortly (it seems that the upload page is closed right now, due to the upcoming IETF meeting), and I invite you to participate in the discussion - hopefully, we can work together to address your technical concerns with the solution. I did present the use case at RIPE64, by the way - I hope you will find these links interesting:

https://ripe64.ripe.net/archives/video/37
https://ripe64.ripe.net/presentations/67-20120417-RIPE64-The_Case_for_IPv6_O...

Best regards,
-- Tore Anderson Redpill Linpro AS - http://www.redpill-linpro.com/
On Nov 4, 2012, at 1:55 AM, Tore Anderson <tore.anderson@redpill-linpro.com> wrote:
* Owen DeLong
What do you get from SIIT that you don't get from dual stack in a datacenter?
In no particular order:
- Single stack is much simpler than dual stack. A single stack to configure, a single ACL to write, a single service address to monitor, staff needs to know only a single protocol, development staff needs only to develop and do QA for a single protocol, it's a single topology to document, a single IGP to run and monitor, a single protocol to debug and troubleshoot, one less attack vector for the bad guys, and so on. I have a strong feeling that the reason why dual stack failed so miserably as a transition mechanism was precisely because of the fact that it adds significant complexity and operational overhead, compared to single stack.
Except that with SIIT, you're still dealing with two stacks, just moving the place where you deal with them around a bit. Further, you're adding the complication of NAT into your world (SIIT is a form of NAT whether you care to admit that to yourself or not).
- IPv4 address conservation. If you're running out of IPv4 addresses, you cannot use dual stack, as dual stack does nothing to reduce your dependency on IPv4 compared to single stack IPv4. With dual stack, you'll be using (at least) one IPv4 address per server, plus a bit of overhead due to the server LAN prefixes needing to be rounded up to the nearest power of two (or higher if you want to accommodate future growth), plus overhead due to the network infrastructure. With SIIT, on the other hand, you'll be using a single IPv4 address per publicly available service - one /32 out of a pool, with nothing going to waste due to aggregation, network infrastructure, and so on.
Since you end up dealing with NAT anyway, why not just use NAT for IPv4 conservation. It's what most engineers are already used to dealing with and you don't lose anything between it and SIIT. Further, for SIIT to work, you don't really conserve any IPv4 addresses, since address conservation requires state.
- Promotes first-class native IPv6 deployment. Not that dual stack isn't native IPv6 too, but I do have the impression that often, IPv6 in a dual stacked environment is a second class citizen. IPv6 might be only partially deployed, not monitored as well as IPv4, or that there are architectural dependencies on IPv4 in the application stack, so that you cannot just shut off IPv4 and expect it to continue to work fine on IPv6 only. With SIIT, you get only a single, first-class, citizen - IPv6. And it'll be the only IPv6 migration/transition/deployment project you'll ever have to do. When the time comes to discontinue support for IPv4, you just remove your IN A records and shut down the SIIT gateway(s), there will be no need to touch the application stack at all.
Treating IPv6 as a second class citizen is a choice, not an inherent consequence of dual-stack. IPv6 certainly isn't a second class citizen on my network or on Hurricane Electric's network.
As I said earlier, I will submit an IETF draft about the use case shortly (it seems that the upload page is closed right now, due to the upcoming IETF meeting), and I invite you to participate in the discussion - hopefully, we can work together to address your technical concerns with the solution.
I'll look forward to reading the draft.
I did present the use case at RIPE64, by the way - I hope you will find these links interesting:
https://ripe64.ripe.net/archives/video/37 https://ripe64.ripe.net/presentations/67-20120417-RIPE64-The_Case_for_IPv6_O...
I'll try to look them over when I get some time. Owen
* Owen DeLong
On Nov 4, 2012, at 1:55 AM, Tore Anderson <tore.anderson@redpill-linpro.com> wrote:
* Owen DeLong
What do you get from SIIT that you don't get from dual stack in a datacenter?
In no particular order:
- Single stack is much simpler than dual stack. A single stack to configure, a single ACL to write, a single service address to monitor, staff needs to know only a single protocol, development staff needs only to develop and do QA for a single protocol, it's a single topology to document, a single IGP to run and monitor, a single protocol to debug and troubleshoot, one less attack vector for the bad guys, and so on. I have a strong feeling that the reason why dual stack failed so miserably as a transition mechanism was precisely because of the fact that it adds significant complexity and operational overhead, compared to single stack.
Except that with SIIT, you're still dealing with two stacks, just moving the place where you deal with them around a bit. Further, you're adding the complication of NAT into your world (SIIT is a form of NAT whether you care to admit that to yourself or not).
The difference is that only a small number of people will need to deal with the two stacks, in a small number of places. The way I envision it, the networking staff would ideally operate SIIT as a logical function on the data centre's access routers, or in their backbone's core/border routers.

A typical data centre operator/content provider has a vastly larger number of servers, applications, systems administrators, and software developers than they have routers and network administrators. By making IPv4 end-user connectivity a service provided by the network, you make the amount of dual stack-related complexity a fraction of what it would be if you had to run dual stack on every server and in every application.

I have no problem admitting that SIIT is a form of NAT. It is. The «T» in both cases stands for «Translation», after all.
- IPv4 address conservation. If you're running out of IPv4 addresses, you cannot use dual stack, as dual stack does nothing to reduce your dependency on IPv4 compared to single stack IPv4. With dual stack, you'll be using (at least) one IPv4 address per server, plus a bit of overhead due to the server LAN prefixes needing to be rounded up to the nearest power of two (or higher if you want to accommodate future growth), plus overhead due to the network infrastructure. With SIIT, on the other hand, you'll be using a single IPv4 address per publicly available service - one /32 out of a pool, with nothing going to waste due to aggregation, network infrastructure, and so on.
Since you end up dealing with NAT anyway, why not just use NAT for IPv4 conservation. It's what most engineers are already used to dealing with and you don't lose anything between it and SIIT. Further, for SIIT to work, you don't really conserve any IPv4 addresses, since address conservation requires state.
Nope! The «S» in SIIT stands for «Stateless». That is the beauty of it.

NAT44, on the other hand, is stateful, a very undesirable trait. Suddenly, things like flows per second and flow initiation rate are relevant for the overall performance of the architecture. It requires flows to pass bidirectionally across a single instance - the servers' default route must point to the NAT44, and a failure will cause the disruption of all existing flows. It is probably possible to find ways to avoid some or all of these problems, but it comes at the expense of added complexity.

SIIT, on the other hand, is stateless, so you can use anycasting with normal routing protocols or load balancing using ECMP. A failure is handled just like any IP re-routing event. You don't need the server's default route to point to the SIIT box; it is just a regular IPv6 route (typically a /96). You don't even have to run it in your own network. Assuming we have IPv6 connectivity between us, I could sell you SIIT service over the Internet or via a direct peering. (I'll be happy to give you a demo just for fun: give me an IPv6 address and I'll map up a public IPv4 front-end address for you in our SIIT deployment.)

Finally, by putting your money into NAT44 for IPv4 conservation, you have accomplished exactly *nothing* when it comes to IPv6 deployment. You'll still have to go down the dual stack route, with the added complexity that will cause. With SIIT, you can kill both birds with one stone.
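The statelessness claim follows from RFC 6052: the IPv4 address is recoverable from the IPv6 address by pure bit arithmetic, so no flow table is needed in either direction. A minimal sketch, assuming the well-known /96 prefix (a network-specific /96 would work the same way):

```python
import ipaddress

# RFC 6052 well-known prefix as a 128-bit integer.
PREFIX = int(ipaddress.IPv6Network("64:ff9b::/96").network_address)

def v4_to_v6(v4: str) -> str:
    # Embedding is arithmetic on the address bits; no per-flow state needed.
    return str(ipaddress.IPv6Address(PREFIX | int(ipaddress.IPv4Address(v4))))

def v6_to_v4(v6: str) -> str:
    # The reverse direction is equally stateless: keep the low 32 bits.
    return str(ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF))
```

Because any translator instance computes the same mapping, packets of one flow can hit different anycasted SIIT boxes and still translate identically, which is what makes ECMP and plain IP re-routing safe.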
- Promotes first-class native IPv6 deployment. Not that dual stack isn't native IPv6 too, but I do have the impression that often, IPv6 in a dual stacked environment is a second class citizen. IPv6 might be only partially deployed, not monitored as well as IPv4, or that there are architectural dependencies on IPv4 in the application stack, so that you cannot just shut off IPv4 and expect it to continue to work fine on IPv6 only. With SIIT, you get only a single, first-class, citizen - IPv6. And it'll be the only IPv6 migration/transition/deployment project you'll ever have to do. When the time comes to discontinue support for IPv4, you just remove your IN A records and shut down the SIIT gateway(s), there will be no need to touch the application stack at all.
Treating IPv6 as a second class citizen is a choice, not an inherent consequence of dual-stack. IPv6 certainly isn't a second class citizen on my network or on Hurricane Electric's network.
Agreed, and I have no reason to doubt that HE does dual stack really well. That said, I don't think HE is the type of organisation for which SIIT makes the most sense - certainly not for your ISP activities. The type of organisation I picture using SIIT, is one that operates a bunch of servers and application cluster, i.e., they are controlling the entire service stack, and making them available to the internet through a small number of host names. Most internet content providers would fall in this category, including my own employer. -- Tore Anderson Redpill Linpro AS - http://www.redpill-linpro.com/
On Nov 4, 2012, at 5:15 AM, Tore Anderson <tore.anderson@redpill-linpro.com> wrote:
* Owen DeLong
On Nov 4, 2012, at 1:55 AM, Tore Anderson <tore.anderson@redpill-linpro.com> wrote:
* Owen DeLong
What do you get from SIIT that you don't get from dual stack in a datacenter?
In no particular order:
- Single stack is much simpler than dual stack. A single stack to configure, a single ACL to write, a single service address to monitor, staff needs to know only a single protocol, development staff needs only to develop and do QA for a single protocol, it's a single topology to document, a single IGP to run and monitor, a single protocol to debug and troubleshoot, one less attack vector for the bad guys, and so on. I have a strong feeling that the reason why dual stack failed so miserably as a transition mechanism was precisely because of the fact that it adds significant complexity and operational overhead, compared to single stack.
Except that with SIIT, you're still dealing with two stacks, just moving the place where you deal with them around a bit. Further, you're adding the complication of NAT into your world (SIIT is a form of NAT whether you care to admit that to yourself or not).
The difference is that only a small number of people will need to deal with the two stacks, in a small number of places. The way I envision it, the networking staff would ideally operate SIIT as a logical function on the data centre's access routers, or in their backbone's core/border routers.
I suppose if you're not moving significant traffic, that might work. In the data centers I deal with, that's a really expensive approach because it would tie up a lot more router CPU resources that really shouldn't be wasted on things end-hosts can do for themselves. By having the end-host just do dual-stack, life gets a lot easier if you're moving significant traffic. If you only have a few megabits or even a couple of gigabits, sure. I haven't worked with anything that small in a long time.
A typical data centre operator/content provider has a vastly larger number of servers, applications, systems administrators, and software developers, than they have routers and network administrators. By making IPv4 end-user connectivity a service provided by the network, you make the amount of dual stack-related complexity a fraction of what it would be if you had to run dual stack on every server and in every application.
In a world where you have lots of network/system administrators that fully understand IPv6 and have limited IPv4 knowledge, sure. In the real world, where the situation is reversed, you just confuse everyone and make the complexity of troubleshooting a lot of things that much harder because it is far more likely to require interaction across teams to get things fixed.
I have no problem admitting that SIIT is a form of NAT. It is. The «T» in both cases stands for «Translation», after all.
Yep.
- IPv4 address conservation. If you're running out of IPv4 addresses, you cannot use dual stack, as dual stack does nothing to reduce your dependency on IPv4 compared to single stack IPv4. With dual stack, you'll be using (at least) one IPv4 address per server, plus a bit of overhead due to the server LAN prefixes needing to be rounded up to the nearest power of two (or higher if you want to accommodate future growth), plus overhead due to the network infrastructure. With SIIT, on the other hand, you'll be using a single IPv4 address per publicly available service - one /32 out of a pool, with nothing going to waste due to aggregation, network infrastructure, and so on.
Since you end up dealing with NAT anyway, why not just use NAT for IPv4 conservation. It's what most engineers are already used to dealing with and you don't lose anything between it and SIIT. Further, for SIIT to work, you don't really conserve any IPv4 addresses, since address conservation requires state.
Nope! The «S» in SIIT stands for «Stateless». That is the beauty of it.
Right… As soon as you make it stateless, you lose the ability to overload the addresses unless you're using a static mapping of ports, in which case, you've traded dynamic state tables for static tables that, while stateless, are a greater level of complexity and create even more limitations.
NAT44, on the other hand, is stateful, a very undesirable trait. Suddenly, things like flows per second and flow initiation rate are relevant for the overall performance of the architecture. It requires flows to pass bidirectionally across a single instance - the servers' default route must point to the NAT44, and a failure will cause the disruption of all existing flows. It is probably possible to find ways to avoid some or all of these problems, but it comes at the expense of added complexity.
SIIT, on the other hand, is stateless, so you can use anycasting with normal routing protocols or load balancing using ECMP. A failure is handled just like any IP re-routing event. You don't need the server's default route to point to the SIIT box; it is just a regular IPv6 route (typically a /96). You don't even have to run it in your own network. Assuming we have IPv6 connectivity between us, I could sell you SIIT service over the Internet or via a direct peering. (I'll be happy to give you a demo just for fun: give me an IPv6 address and I'll map up a public IPv4 front-end address for you in our SIIT deployment.)
Without state, how are you overloading the IPv4 addresses? If I don't have a 1:1 mapping between public IPv4 addresses and IPv6 addresses at the SIIT box, what you have described doesn't seem feasible to me. If I have a 1:1 mapping, then, I don't have any address conservation because the SIIT box has an IPv4 address for every IPv6 host that speaks IPv4.
Finally, by putting your money into NAT44 for IPv4 conservation, you have accomplished exactly *nothing* when it comes to IPv6 deployment. You'll still have to go down the dual stack route, with the added complexity that will cause. With SIIT, you can kill both birds with one stone.
But I'm not putting more money into NAT44… I'm deploying IPv6 on top of my existing IPv4 environment where the NAT44 is already paid for.
- Promotes first-class native IPv6 deployment. Not that dual stack isn't native IPv6 too, but I do have the impression that often, IPv6 in a dual stacked environment is a second class citizen. IPv6 might be only partially deployed, not monitored as well as IPv4, or that there are architectural dependencies on IPv4 in the application stack, so that you cannot just shut off IPv4 and expect it to continue to work fine on IPv6 only. With SIIT, you get only a single, first-class, citizen - IPv6. And it'll be the only IPv6 migration/transition/deployment project you'll ever have to do. When the time comes to discontinue support for IPv4, you just remove your IN A records and shut down the SIIT gateway(s), there will be no need to touch the application stack at all.
Treating IPv6 as a second class citizen is a choice, not an inherent consequence of dual-stack. IPv6 certainly isn't a second class citizen on my network or on Hurricane Electric's network.
Agreed, and I have no reason to doubt that HE does dual stack really well. That said, I don't think HE is the type of organisation for which SIIT makes the most sense - certainly not for your ISP activities. The type of organisation I picture using SIIT, is one that operates a bunch of servers and application cluster, i.e., they are controlling the entire service stack, and making them available to the internet through a small number of host names. Most internet content providers would fall in this category, including my own employer.
We have a lot of customers and professional services clients that fit exactly what you describe. Not one of them has chosen SIIT over dual stack. Admittedly, they also aren't using NAT44, but that's because everything has a public IPv4 address, as things should be for server clusters. Owen
* Owen DeLong
The difference is that only a small number of people will need to deal with the two stacks, in a small number of places. The way I envision it, the networking staff would ideally operate SIIT as a logical function on the data centre's access routers, or in their backbone's core/border routers.
I suppose if you're not moving significant traffic, that might work.
In the data centers I deal with, that's a really expensive approach because it would tie up a lot more router CPU resources that really shouldn't be wasted on things end-hosts can do for themselves.
By having the end-host just do dual-stack, life gets a lot easier if you're moving significant traffic. If you only have a few megabits or even a couple of gigabits, sure. I haven't worked with anything that small in a long time.
For a production deployment, we would obviously not do this in our routers' CPUs, for the same reasons that we wouldn't run regular IP forwarding in their CPUs. If a data centre access router gets a mix of dual-stacked input traffic coming in from the Internet, that same amount of traffic has to go out the rear towards the data centre. Whether it comes out as the same dual-stacked mix that came in, or as IPv6 only, does not change the total amount of bandwidth the router has to pass. So the amount of bandwidth is irrelevant, really. I would agree with you if this were a question of doing SIIT in software instead of IP forwarding in hardware. But that's no different from what was a problem a while back: routers that did IPv4 in hardware and IPv6 in software. Under such conditions, you just don't deploy.
A typical data centre operator/content provider has a vastly larger number of servers, applications, systems administrators, and software developers, than they have routers and network administrators. By making IPv4 end-user connectivity a service provided by the network, you make the amount of dual stack-related complexity a fraction of what it would be if you had to run dual stack on every server and in every application.
In a world where you have lots of network/system administrators that fully understand IPv6 and have limited IPv4 knowledge, sure. In the real world, where the situation is reversed, you just confuse everyone and make the complexity of troubleshooting a lot of things that much harder because it is far more likely to require interaction across teams to get things fixed.
With dual stack, they would need to fully understand *both* IPv6 and IPv4. This sounds to me more like an argument for staying IPv4 only. And even if they do know both protocols perfectly, they still have to operate them both, monitor them both, document them both, and so on. That is a non-negligible operational overhead, in my experience.
Right… As soon as you make it stateless, you lose the ability to overload the addresses unless you're using a static mapping of ports, in which case, you've traded dynamic state tables for static tables that, while stateless, are a greater level of complexity and create even more limitations.
I would claim that stateful NAPT44 mapping, which requires a router to dynamically maintain a table of all concurrent flows using a five-tuple identifier based on both L3 and L4 headers, is a vastly more complex thing than a statically configured mapping table with two L3 addresses for each entry.
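The contrast between the two tables can be sketched as data structures (a hypothetical illustration; all addresses and names here are made up, not any vendor's implementation):

```python
# SIIT: a static, two-column L3-to-L3 table. It is configuration, not state:
# it never grows with traffic, and every translator replica can hold an
# identical copy.
siit_map = {"192.0.2.33": "2001:db8::80"}  # example public v4 -> server v6

def siit_lookup(dst_v4: str) -> str:
    # Same answer for every packet; no per-flow entries are ever created.
    return siit_map[dst_v4]

# NAPT44: a dynamic table keyed on the five-tuple. One entry per active
# flow, created at flow start and garbage-collected on timeout.
napt_state = {}

def napt_new_flow(proto, src, sport, dst, dport, inside_host):
    # The table grows with concurrent flows, so flows/sec and table size
    # become capacity limits, and replicas cannot share it trivially.
    napt_state[(proto, src, sport, dst, dport)] = inside_host
```

The point of the sketch is that the SIIT table's size is bounded by the number of published services, while the NAPT44 table's size is bounded only by traffic.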
Without state, how are you overloading the IPv4 addresses?
We're not.
If I don't have a 1:1 mapping between public IPv4 addresses and IPv6 addresses at the SIIT box, what you have described doesn't seem feasible to me.
SIIT is 1:1.
If I have a 1:1 mapping, then, I don't have any address conservation because the SIIT box has an IPv4 address for every IPv6 host that speaks IPv4.
The SIIT box does indeed have an IPv4 address for every IPv6 host that speaks IPv4, true. The conservation comes from elsewhere: from the fact that a content provider will often have a large number of servers, but a much smaller number of publicly available IPv4 service addresses.

Let's say you have a small customer with a web server and a database server. Using dual stack, you'll need to assign that customer at least a /29: one address for each server, three for network/broadcast/default router, and three more that go away just because those five get rounded up to the nearest power of two. With SIIT, that customer would instead get an IPv6 /64 on his LAN, and no IPv4 at all. Instead, a single IPv4 address gets mapped to the IPv6 address of the customer's web server. You've now saved 7 out of 8 addresses, which you may use for 7 other customers like this one.

That's just a small example. For most of my customers, the ratio of assigned IPv4 addresses to publicly available services is *at least* one order of magnitude. So I have huge potential for savings here.
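The arithmetic in that example can be checked directly (a sketch of the worked numbers above; the two-server customer and the single public service are the assumptions stated in the example):

```python
import math

# Dual stack: 2 servers + network + broadcast + default router = 5 slots,
# rounded up to the nearest power of two for the subnet size.
slots_needed = 2 + 3
block_size = 2 ** math.ceil(math.log2(slots_needed))  # subnet of 8 addresses
prefixlen = 32 - int(math.log2(block_size))           # i.e. a /29

# SIIT: one /32 per publicly available service (here, just the web server).
siit_needed = 1
saved = block_size - siit_needed
```

With `block_size` 8, `prefixlen` 29, and `saved` 7, the "7 out of 8" figure falls straight out of the rounding; the saving only grows as the server-to-service ratio does.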
Finally, by putting your money into NAT44 for IPv4 conservation, you have accomplished exactly *nothing* when it comes to IPv6 deployment. You'll still have to go down the dual stack route, with the added complexity that will cause. With SIIT, you can kill both birds with one stone.
But I'm not putting more money into NAT44… I'm deploying IPv6 on top of my existing IPv4 environment where the NAT44 is already paid for.
We don't currently use NAT44 and have no desire to invest in it either. I strongly dislike the idea of putting big stateful boxes between our customers and their end users. But if we have to do it anyway, it will certainly eat away from any human resources and budget that could otherwise be used for IPv6 activities. Unfortunately. If you've already bought NAT44 for your IPv4 conservation strategy, the case for SIIT is weaker, I admit. However, there's still the benefits of potentially reducing complexity compared to dual stack, and the fact that it requires no state in your network.
Agreed, and I have no reason to doubt that HE does dual stack really well. That said, I don't think HE is the type of organisation for which SIIT makes the most sense - certainly not for your ISP activities. The type of organisation I picture using SIIT is one that operates a bunch of servers and application clusters, i.e., they are controlling the entire service stack, and making them available to the internet through a small number of host names. Most internet content providers would fall in this category, including my own employer.
We have a lot of customers and professional services clients that fit exactly what you describe. Not one of them has chosen SIIT over dual stack.
Well, today it's not easy to deploy SIIT, so that's quite understandable. There aren't that many implementations available. You can do it on the Cisco ASR1K, although it lacks some features (in particular the static mapping feature) that I find very desirable in the data centre use case.
Admittedly, they also aren't using NAT44, but that's because everything has a public IPv4 address, as things should be for server clusters.
I'm confused, you said above that you were already using NAT44..? In any case, good for you that you have enough public IPv4 addresses for everything. Very shortly, we won't, and our friendly neighbourhood RIR is fresh out. -- Tore Anderson Redpill Linpro AS - http://www.redpill-linpro.com/
On Nov 1, 2012, at 8:20 AM, Masataka Ohta wrote:
We should better introduce partially decimal format for IPv6 addresses or, better, avoid IPv6 entirely.
With respect, it is already possible to use the decimal subset if you wish. For example, you could write 2001:dba::192:168:2:1 It wouldn't be very dense, but EID density wasn't the objective.
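The point about the decimal subset can be checked with Python's `ipaddress` module: an address whose groups use only the digits 0-9 is still ordinary hex, it just happens to read as decimal. The prefix below is an example from the documentation range, not any specific network:

```python
import ipaddress

# The "decimal subset": hex groups that happen to use only digits 0-9,
# here embedding 192.168.2.1 digit-for-digit into the low groups.
a = ipaddress.IPv6Address("2001:db8::192:168:2:1")
print(a.exploded)  # 2001:0db8:0000:0000:0192:0168:0002:0001
```

As the exploded form shows, "192" is really the hex group 0x0192 - which is exactly why the notation is sparse rather than dense.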
In article <xs4all.CALgc3C7ngGpkRNYaCgN_ACY28uPFbM=KRnACsW0Jg=YsLuQHhQ@mail.gmail.com> you write:
For simplicity and a wish to keep a mapping to our IPv4 addresses, each device (router/server/firewall) has a static IPv6 address that has the same last digits as the IPv4 address; only the subnet is changed. You can say it's an IPv4 thinking model, but it's easier to remember that if the fileserver is at 192.168.10.10 then its IPv6 counterpart address would be 2001:abcd::192:168:10:10 (each subnet being a /64)
We use a /120 subnet for servers to prevent the NDP cache exhaustion attack. We do maintain a mapping between IPv4 and IPv6 addresses; it's simply 2001:db8:vv:ww::xx, where xx is the hex value of the last octet of the IPv4 address. Mike.
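The mapping just described (the hex value of the last IPv4 octet becomes the interface ID) can be sketched as follows; the `2001:db8:11:22::` prefix is a made-up stand-in for the `vv:ww` placeholders:

```python
import ipaddress

def v4_to_v6(v4, v6_subnet="2001:db8:11:22::"):
    """Map an IPv4 host into a /120 by using the *numeric value* of
    its last octet as the IPv6 interface ID (123 -> 0x7b)."""
    last_octet = int(v4.split(".")[-1])
    base = int(ipaddress.IPv6Address(v6_subnet))
    return str(ipaddress.IPv6Address(base + last_octet))

print(v4_to_v6("10.0.0.123"))  # 2001:db8:11:22::7b
```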
On Thu, 01 Nov 2012 14:28:48 +0100, "Miquel van Smoorenburg" said:
We use a /120 subnet for servers to prevent the NDP cache exhaustion attack. We do maintain a mapping between IPv4 and IPv6 addresses; it's simply 2001:db8:vv:ww::xx, where xx is the hex value of the last octet of the IPv4 address.
ooh.. that's a clever approach I hadn't seen before. Who should we credit for this one?
On 11/1/2012 1:59 PM, Valdis.Kletnieks@vt.edu wrote:
On Thu, 01 Nov 2012 14:28:48 +0100, "Miquel van Smoorenburg" said:
We use a /120 subnet for servers to prevent the NDP cache exhaustion attack. We do maintain a mapping between IPv4 and IPv6 addresses; it's simply 2001:db8:vv:ww::xx, where xx is the hex value of the last octet of the IPv4 address.
ooh.. that's a clever approach I hadn't seen before. Who should we credit for this one?
/120 works well until you get > 99 (if you want the decimal representations of addresses to look the same)... or if your techs understand hex.

10.0.0.123 <-> 2001:db8:vv:ww::7b

I have used /116 in the past. This gives you 1-fff at the end.

10.0.0.123 <-> 2001:db8:vv:ww::123

Hopefully, this is future proof(ish) in that IPv6-only hosts (...when that happens...) on the same subnet can use 2001:db8:vv:ww::[a-f][0-f][0-f] without danger of collisions with IPv4/IPv6 hosts. -DMM
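The /116 variant keeps the decimal digits visible by reading them as hex. A sketch, again with a hypothetical prefix standing in for `vv:ww`:

```python
import ipaddress

def v4_to_v6_decimal_look(v4, v6_subnet="2001:db8:11:22::"):
    """Embed the last octet so its decimal digits reappear verbatim:
    10.0.0.123 -> ...::123. Reading '123' as hex gives 0x123, which
    is why a /120 is too small and a /116 (IIDs up to 0xfff) works."""
    last_octet = v4.split(".")[-1]
    iid = int(last_octet, 16)  # interpret the decimal digits as hex
    base = int(ipaddress.IPv6Address(v6_subnet))
    return str(ipaddress.IPv6Address(base + iid))

print(v4_to_v6_decimal_look("10.0.0.123"))  # 2001:db8:11:22::123
```

The worst case, octet 255, yields IID 0x255, comfortably inside the /116's 0x1-0xfff range.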
There are better ways to avoid neighbor exhaustion attacks unless you have attackers inside your network. If you have attackers inside your network, you probably have bigger problems than neighbor table attacks anyway, but that's a different issue. Even if you're going to do something silly like use /120s on interfaces, I highly recommend going ahead and reserving the enclosing /64 so that when you discover /120 wasn't the best idea, you can easily retrofit. Owen On Nov 1, 2012, at 12:58 , David Miller <dmiller@tiggee.com> wrote:
On 11/1/2012 1:59 PM, Valdis.Kletnieks@vt.edu wrote:
On Thu, 01 Nov 2012 14:28:48 +0100, "Miquel van Smoorenburg" said:
We use a /120 subnet for servers to prevent the NDP cache exhaustion attack. We do maintain a mapping between IPv4 and IPv6 addresses; it's simply 2001:db8:vv:ww::xx, where xx is the hex value of the last octet of the IPv4 address.
ooh.. that's a clever approach I hadn't seen before. Who should we credit for this one?
/120 works well until you get > 99 (if you want the decimal representations of addresses to look the same)... or if your techs understand hex.
10.0.0.123 <-> 2001:db8:vv:ww::7b
I have used /116 in the past. This gives you 1-fff at the end.
10.0.0.123 <-> 2001:db8:vv:ww::123
Hopefully, this is future proof(ish) in that IPv6 only hosts (...when that happens...) on the same subnet can use 2001:db8:vv:ww::[a-f][0-f][0-f] without danger of collisions with IPv4/IPv6 hosts.
-DMM
In article <xs4all.963E27C7-A0C5-44AC-86AF-33E6286C9BC1@delong.com> you write:
There are better ways to avoid neighbor exhaustion attacks unless you have attackers inside your network.
You mean filtering. I haven't tried it recently, but a while ago I put an output filter on a Juniper router that allowed just the lower /120 out of a /64 on an interface. What happened was that neighbor discovery happened /before/ filtering. I should probably test that against recent JunOS releases, but that was a firm reason to go with a /120 instead of a filter. Besides, configuring a /120 is way less work than a filter per interface (yes we do have per-interface filters but they're kind of generic).
Even if you're going to do something silly like use /120s on interfaces, I highly recommend going ahead and reserving the enclosing /64 so that when you discover /120 wasn't the best idea, you can easily retrofit.
Sure, we do that, as soon as router vendors solve the NDP CE attack problem we'll go back to /64s. Mike.
On Nov 1, 2012, at 4:41 PM, "Miquel van Smoorenburg" <mikevs@xs4all.net> wrote:
In article <xs4all.963E27C7-A0C5-44AC-86AF-33E6286C9BC1@delong.com> you write:
There are better ways to avoid neighbor exhaustion attacks unless you have attackers inside your network.
You mean filtering. I haven't tried it recently, but a while ago I put an output filter on a Juniper router that allowed just the lower /120 out of a /64 on an interface. What happened was that neighbor discovery happened /before/ filtering. I should probably test that against recent JunOS releases, but that was a firm reason to go with a /120 instead of a filter. Besides, configuring a /120 is way less work than a filter per interface (yes we do have per-interface filters but they're kind of generic).
I mean assign your point to points from a particular /48 within your /32 or a particular /56 within your /48 or whatever is appropriate to your situation. Then, at your borders, filter that entire /48 or /56 or whatever it is so that people outside simply aren't allowed to send packets to your point to point links at all.
Even if you're going to do something silly like use /120s on interfaces, I highly recommend going ahead and reserving the enclosing /64 so that when you discover /120 wasn't the best idea, you can easily retrofit.
Sure, we do that, as soon as router vendors solve the NDP CE attack problem we'll go back to /64s.
FWIW, the NDP CE attack doesn't yield much in the way of incentives to most attackers. As a DOS, it only prevents new nodes from joining the networks attached to the router and they can generally only attack the NC of the upstream router closer to them on each link, not the more distant one. Since core routers tend to have pretty stable neighbor relations, the actual attack surface in the real world is relatively small and there are far more effective DOS vectors available. Nonetheless, defense in depth is the right approach, but, do it in the way that requires the least maintenance effort on your part. Filtering an entire range of P2P links at the borders is about as low maintenance as it gets. (Again, this is assuming you don't have to deal with attackers inside your borders). If you are a university, things get more complicated because your job is to have attackers (or at least potential attackers) inside your borders. If you're not a university, then if you have attackers inside your borders, you probably have bigger problems than NDP CE. Owen
On 11/1/12 2:01 PM, Owen DeLong wrote:
There are better ways to avoid neighbor exhaustion attacks unless you have attackers inside your network.
All of the migrations are compromises of one sort or another. We thought this one was important enough to include in an informational status RFC (6583). Which approach is most appropriate (and whether it's necessary at all) will depend on the circumstances involved.
If you have attackers inside your network, you probably have bigger problems than neighbor table attacks anyway, but that's a different issue.
Even if you're going to do something silly like use /120s on interfaces, I highly recommend going ahead and reserving the enclosing /64 so that when you discover /120 wasn't the best idea, you can easily retrofit.
The problem isn't silly; I didn't find it all that funny when I first induced it in the lab.
I have always been kind of partial to the idea of taking advantage of IPv6 features and letting hosts set their own addresses with EUI-64 interface numbers.
That's all fine and dandy until the NIC card is swapped out for a new one. It's best to use fixed IPv6 addresses for services (and have the service bind() to those) and use the EUI-64 address for machine-related tasks (ssh, backups, etc). You can use the same EUI-64 network for both, as the EUI-64 space is sparse and there are lots of "never will be autoconfed" addresses, conveniently including those with lots of zeroes. The router(s) interface addresses should be hardcoded within that EUI-64 subnet, and …::1/64, …::2/64 are the obvious choices.

There's an issue of address exhaustion if you use /64 for router-router links, and the best suggestion I've seen there is to use /126, as that makes the last octet consistently …1 or …2 for each end of a point-to-point link, which is operationally nicer than stuffing about with binary in your head to determine which address to ping (i.e., you take your interface's address and replace the last hex numeral with 1 or 2 to get your neighbour's address). The exception to router link addressing would be links with eBGP neighbours, where using the ASN of the networks is just so convenient.

You don't care much for correspondence between IPv4 and IPv6 addresses, except in the case of router loopback interfaces, where it is very operationally convenient to be able to mentally determine "is this the same router which I just saw in IPv4". Since you'll be typing those most often, they are the obvious candidate for "subnet zero", so that "::" reduces the typing to the minimum. The obvious thing to do is to reserve the entire …:00:00:00:00::/64 and use the bottom N bits of that to match the binary IPv4 address of the loopback. N could be 32 bits if you like excessive typing or have a really big network.

I've seen a few schemes which try to carry the decimal numerals of the IPv4 address over into the IPv6 address, but I don't find any of them compelling.
If you really, really think you want that, then putting the top 16b in hex numerals and the lower 16b in decimal numerals will do what you want without excessive address consumption. This sounds difficult to use, but operationally you soon get used to the hex prefix and only notice when it isn't one of the common ones. -- Glen Turner <http://www.gdt.id.au/~gdt/>
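The "bottom N bits of subnet zero" loopback scheme above can be sketched as follows, with N = 32; the /64 prefix is a made-up example:

```python
import ipaddress

def loopback_v6(v4_loopback, v6_prefix="2001:db8::"):
    """Put the 32-bit binary IPv4 loopback in the bottom bits of
    'subnet zero', so the IPv6 loopback is derivable from the IPv4
    one at a glance."""
    v4_bits = int(ipaddress.IPv4Address(v4_loopback))
    base = int(ipaddress.IPv6Address(v6_prefix))
    return str(ipaddress.IPv6Address(base | v4_bits))

print(loopback_v6("192.0.2.33"))  # 2001:db8::c000:221
```

Note the result shows the IPv4 address in hex (0xc0000221), not its decimal digits, which is exactly the trade-off between this scheme and the digit-preserving ones.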
Veering off this topic's course: is there any issue with addresses like this? 2001:470:1f00:1aa:abad:babe:8:beef <- I have a bunch of these type 'addresses' configured for my various machines. I make it a point to come up with some sort of 'hex speak' address. What are people's opinions on this?
On 03/11/2012 07:44, Randy wrote:
Veering off this topic's course, Is there any issue with addresses like this ? 2001:470:1f00:1aa:abad:babe:8:beef < I have a bunch of these type 'addresses' configured for my various machines.
I make it a point to come up with some sort of 'hex' speak address, what are peoples opinions on this?
Why bother? DNS supports all 26 characters ;-) It's cute... but it tends to only be useful in fairly small deployments. You quickly run out of nice combinations. I prefer to choose addresses that allow for the most consecutive zeros. Many UIs I've used display IPv6 address strings in very un-useful ways as they approach the allowable length of 39 characters. Many require you to resize your viewing window/column/etc to see the full address, and some simply truncate the string and refuse to show you the host ID portion. -- Graham Beneke
On Sat, 2012-11-03 at 00:44 -0500, Randy wrote:
Veering off this topic's course, Is there any issue with addresses like this ? 2001:470:1f00:1aa:abad:babe:8:beef < I have a bunch of these type 'addresses' configured for my various machines.
I make it a point to come up with some sort of 'hex' speak address, what are peoples opinions on this?
I tell my students to avoid them, for several reasons:

- if you need to remember an IP address, you are doing it wrong
- limiting yourself to "word space" wastes "address space". Possibly acceptable in interface IDs, foolish in subnetting bits.
- if people expect something, they will type it, possibly instead of what's actually needed. By making them expect words, you introduce a source of error.
- cultural sensitivity and plain good sense suggest that many words or combinations might not be a good idea. How do your female techs feel about "BAD:BABE"? Only marginally better than they feel about "B16:B00B:EEZ", probably. Your markets in India, with its 900 million Hindus, might take a dim view of "DEAD:BEEF". Etc.
- clever addresses are guessable addresses for scanners, and highly identifiable in data as probably attached to high-value targets
- something that is witty once is generally irritating the thousandth time you see it. Doubly so if it wasn't very funny to begin with.
- the time you spend trying to find something "meaningful" is basically wasted, and the time you spend will increase exponentially once you've used all the low-hanging fruit.

All in all, using such addresses is probably a Bad Idea.

Regards, K. -- Karl Auer (kauer@biplane.com.au) http://www.biplane.com.au/kauer http://www.biplane.com.au/blog GPG fingerprint: AE1D 4868 6420 AD9A A698 5251 1699 7B78 4EEE 6017 Old fingerprint: DA41 51B1 1481 16E1 F7E2 B2E9 3007 14ED 5736 F687
On Sat, Nov 3, 2012 at 8:28 AM, Karl Auer <kauer@biplane.com.au> wrote:
- if you need to remember an IP address, you are doing it wrong
Because DNS always works flawlessly and you never need to remember IP addresses, right ?
- cultural sensitivity and plain good sense suggest that many words or combinations might not be a good idea. How do your female techs feel about "BAD:BABE"? Only marginally better than they feel about "B16:B00B:EEZ", probably. Your markets in India, with its 900 million Hindus, might take a dim view of "DEAD:BEEF". Etc.
I think you're looking for problems where there are none. I see nothing wrong with BAD:BABE or with DEAD:BEEF. Your thinking suggests that there are only good babes and live beef, which is wrong on so many levels. Positive discrimination is as bad as discrimination and it creates more problems than it solves. In India you can have beef steak at restaurants, so I see no problem with the term.
- clever addresses are guessable addresses for scanners, and highly identifiable in data as probably attached to high-value targets
What is a clever IP address ?
On Mon, 2012-11-05 at 10:07 +0200, Eugeniu Patrascu wrote:
On Sat, Nov 3, 2012 at 8:28 AM, Karl Auer <kauer@biplane.com.au> wrote:
- if you need to remember an IP address, you are doing it wrong Because DNS always works flawlessly and you never need to remember IP addresses, right ?
If you are NOT memorising IP addresses and NOT wasting time on fragile encodings buried in your IP addresses, then your addressing is more robust and more flexible. So you occasionally have a problem with whatever system maps your IP addresses to human-usable entities - so what? You can't memorise ALL your addresses, so you have that problem anyway. And let's not forget your (possibly emergency) replacement - sure, *you* have lots of addresses memorised, but what about other people? You need a suitable mapping system *anyway*.
I think you're looking for problems where there are none. I see nothing wrong with BAD:BABE or with DEAD:BEEF. Your thinking suggests that there are only good babes and live beef, which is wrong on so many levels. Positive discrimination is as bad as discrimination and it creates more problems than it solves.
*You* don't see a problem, so there is no problem? I *personally* have no problem with either example, but I can see how others might, and how others might have a problem with constructs similar in nature to these ones. I think it is likely that others would find those sorts of things objectionable, I see no benefit to using them, and I see several technical and non-technical disadvantages to using them - so my recommendation is not to use them. As to "my thinking", your comments on that are confused. I don't recommend crafting words, regardless of what words they are. How you got from one OP-supplied example and one well-known example to "my thinking" and thence to positive discrimination is a mystery to me. The OP asked for reasons why embedding wordiness in IPv6 addresses might not be a good idea. I gave several reasons, some technical, some not. You've attacked two non-technical ones, with counterarguments that amount to "is not!".
- clever addresses are guessable addresses for scanners, and highly identifiable in data as probably attached to high-value targets What is a clever IP address ?
One that has obviously been constructed by a human - such as one containing readable words, obvious numeric patterns and the like. Regards, K. -- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Karl Auer (kauer@biplane.com.au) http://www.biplane.com.au/kauer http://www.biplane.com.au/blog GPG fingerprint: AE1D 4868 6420 AD9A A698 5251 1699 7B78 4EEE 6017 Old fingerprint: DA41 51B1 1481 16E1 F7E2 B2E9 3007 14ED 5736 F687
On Sat, 03 Nov 2012 00:44:14 -0500, Randy said:
Veering off this topic's course, Is there any issue with addresses like this ? 2001:470:1f00:1aa:abad:babe:8:beef < I have a bunch of these type 'addresses' configured for my various machines.
I make it a point to come up with some sort of 'hex' speak address, what are peoples opinions on this?
Google for "microsoft hyperv hex constant". Show the results to whoever handles your PR. Follow their advice.
participants (23)
- Alexander Maassen
- Chip Marshall
- Crist J. Clark
- David Miller
- Eugeniu Patrascu
- Fred Baker (fred)
- Glen Turner
- Graham Beneke
- Jimmy Hess
- Joe Abley
- joel jaeggli
- Karl Auer
- Masataka Ohta
- Miquel van Smoorenburg
- Nick Hilliard
- Owen DeLong
- Randy
- Sander Steffann
- Suresh Ramasubramanian
- Tom Paseka
- Tore Anderson
- Valdis.Kletnieks@vt.edu
- Zachary Giles