Re: Redeploying most of 127/8, 0/8, 240/4 and *.0 as unicast
Steven Bakker <steven.bakker@ams-ix.net> wrote:
The ask is to update every ip stack in the world (including validation, equipment retirement, reconfiguration, etc)...
This raises a great question. Is it even *doable*? What's the *risk*? What will it *cost* to upgrade every node on the Internet? And *how long* might it take?

We succeeded in upgrading every end-node and every router in the Internet in the late '90s and early 2000's, when we deployed CIDR. It was doable. We know that because we did it! (And if we hadn't done it, the Internet would not have scaled to world scale.)

So today if we decide that unicast use of the 268 million addresses in 240/4 is worth doing, we can upgrade every node. If we do, we might as well support unicast on the other 16 million addresses in 0/8, and the 16 million in 127/8, and the other roughly 16 million reserved for 4.2BSD's pre-standardized subnet broadcast address that nobody has used since 1985. And take a hard look at another hundred million addresses in the vast empty multicast space that have never been assigned by IANA to anybody or anything. Adding the address blocks around the edges makes sense; you only have to upgrade everything once, but the 268 million addresses become closer to 400 million formerly wasted addresses. That would be worth half again as much to end users, compared to just doing 240/4!

That may not be worth it to you. Or to your friends. But it would be useful to a lot of people -- hundreds of millions of people who you may never know. People who didn't get IP addresses when they were free, people outside the US and Europe, who will be able to buy and use them in 5 or 10 years, rather than leaving them unused and rotting on the vine.

We already know that making these one-line patches is almost risk-free. 240/4 unicast support is in billions of nodes already, without trouble. Linux, Android, MacOS, iOS, and Solaris all started supporting unicast use of 240/4 in 2008! Most people -- even most people in NANOG -- didn't even notice. 0/8 unicast has been in Linux and Android kernels for multiple years, again with no problems. Unicast use of the lowest address in each subnet has recently landed in Linux and NetBSD (see the drafts for specifics). If anyone knows of security issues that we haven't addressed in the drafts, please tell us the details! There has been some arm-waving about a need to update firewalls, but most of these addresses have been usable as unicast on LANs and private networks for more than a decade, and nobody's reported any firewall vulnerabilities to CERT.

Given the low risk, the natural way for these unicast extensions to roll out is to simply include them in new releases of the various operating systems and router OS's that implement the Internet protocols. It is already happening; we're just asking that the process be adopted universally, which is why we wrote Internet-Drafts for the IETF. Microsoft Windows is the biggest laggard; it drops any packet whose destination OR SOURCE address is in 240/4. When standards said 240/4 was reserved for what might become future arcane (variable-length, anycast, 6to4, etc.) addressing modes, that made sense. It doesn't make sense in 2021. IPv4 is stable and won't be inventing any new addressing modes. The future is here, and all it wants out of 240/4 is more unicast addresses.

By following the normal OS upgrade path, the cost of upgrading is almost zero. People naturally upgrade their OS's every few years. They replace their server or laptop with a more capable one that has the latest OS. Laggards might take 5 or 10 years. People's home WiFi routers break, or are upgraded to faster models, or get thrown out when they change ISPs, every 3 to 5 years. A huge proportion of end-users get automatic over-the-net upgrades, via an infrastructure that had not yet been built for consumers during the CIDR transition. "Patch Tuesday" could put some or all of these extensions into billions of systems at scale, for a one-time fixed engineering and testing cost.

We have tested major routers, and none so far require software updates to enable most of these addresses (except for the lowest address per subnet). At worst, the ISP would have to turn off or reconfigure a bogon filter with a config setting. Also, many "Martian address" bogon lists are centrally maintained (e.g. https://team-cymru.com/community-services/bogon-reference/ ) and can easily be updated. We have found no ASIC IP implementations that hardwire in assumptions about specific IP address ranges. If you know of any, please let us know; otherwise, let's let that strawman rest.

Our drafts don't propose to choose between public and private use of the newly usable unicast addresses (so the prior subject line that said "unicast public" was incorrect). Since the kernel and router implementation is the same in either case, we're trying to get those fixed first. There will be plenty of years and plenty of forums (NANOG, IETF, ICANN, IANA, and the RIRs) in which to wrestle the public-vs-private questions to the ground and make community decisions on actual allocations. But if we don't fix the kernels and routers first, none of those decisions would be implementable.

Finally, as suggested by David Conrad, there is a well understood process for "de-bogonizing" an address range on the global Internet, once support for it exists in OS's. Cloudflare used it on 1.1.1.1; RIPE used it on 128.0/16 and on 2a10::/12. You introduce a global BGP route for some part of the range, stand up a server on it, and use various distributed measurement testbeds to see who can reach that server. When chunks of the Internet can't, an engineer figures out where the blockage is, and communicates with that ISP or vendor to resolve the issue. Lather, rinse and repeat for a year or more, until reachability is "high enough".

Addresses that later end up allocated to private address blocks would never need 100% global reachability, but global testing would still help to locate low-volume OS implementations that might need to be updated. Addresses bought to number retail cellphones need not be as reachable as ones used on public-facing servers, etc. The beauty of a market for IP addresses, rather than one-size-fits-all allocation models, is that ones with different reachability can sell for different prices, at different times, into different niches where they can be put to use.

John
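As a concrete illustration of the special-casing described above, here is a minimal sketch using Python's standard-library ipaddress module (not code from the drafts or from any kernel). It shows how today's software still classifies 240/4, 0/8, 127/8 and the multicast space; relaxing checks of roughly this shape is what the "one-line patches" do. The sample addresses are arbitrary.

    import ipaddress

    # Arbitrary sample addresses, one from each range discussed above.
    samples = [
        ("240.1.2.3",  "240/4  ('Class E', reserved)"),
        ("0.1.2.3",    "0/8    ('this network')"),
        ("127.55.0.1", "127/8  (loopback)"),
        ("225.0.0.1",  "224/4  (multicast)"),
        ("1.1.1.1",    "ordinary unicast"),
    ]

    for text, label in samples:
        ip = ipaddress.ip_address(text)
        # The stdlib still flags the first four ranges as special in some way;
        # kernels and firewalls contain analogous checks.
        print(f"{text:>12}  {label:<30} reserved={ip.is_reserved!s:<5} "
              f"loopback={ip.is_loopback!s:<5} multicast={ip.is_multicast!s:<5} "
              f"private={ip.is_private!s:<5}")

On current CPython this prints True in at least one column for each of the first four rows, and all False for 1.1.1.1.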
I find it a bit interesting to follow this thread... There was a discussion in March where Douglas Fischer shared this picture, which shows that Amazon is already using 240/4 space internally: https://pasteboard.co/JRHNVKw.png And I heard it from other sources, too (not an AWS customer, so I won't try to verify it). Therefore it is probably nearly impossible to get it publicly routable.

In my opinion it is like RFC 1918, the 100.64/10 CGN space, or the reserved test networks. Do whatever you want with it internally, but you won't get it into the DFZ. The problem is not to get it routable, but to get it usable in a way that you could assign it to customers without causing massive support problems.
John, On Nov 18, 2021, at 12:54 PM, John Gilmore <gnu@toad.com> wrote:
Is it even *doable*?
With enough thrust, pigs fly quite well, although the landing can be messy.
What's the *risk*?
Some (not me) might argue it could (further) hamper IPv6 deployment by diverting limited resources.
What will it *cost* to upgrade every node on the Internet? And *how long* might it take?
These are the pertinent questions, and the answers are, of course, extremely hard to estimate.
We succeeded in upgrading every end-node and every router in the Internet in the late '90s and early 2000's, when we deployed CIDR.
My recollection is that CIDR deployment was a bit earlier than that, but regardless, the Internet of the late '90s and early 2000's was vastly different from the Internet today. For one thing, most of the end nodes still had people with technical clue managing them. That's not the case today.
So today if we decide that unicast use of the 268 million addresses in 240/4 is worth doing, we can upgrade every node.
Can we? We can't even get some DNS resolvers to stop querying root server IP addresses that were renumbered two decades ago. People aren't even patching/updating publicly available systems with active security exploits that are impacting them directly, and you believe they'll be willing to update all their devices to benefit other people (the ones who want the 240/4 space)? You must be more optimistic than I. Regards, -drc
On Thu, Nov 18, 2021 at 8:24 PM David Conrad <drc@virtualized.org> wrote:
...
Some (not me) might argue it could (further) hamper IPv6 deployment by diverting limited resources.
It may help IPv6 deployment if more V4 addresses are eventually released and allocated, assuming the RIRs would provision, within their own policies, that the newly released addresses are exclusively for IPv6 transition technology, such as CGN NAT addresses, with the proviso that new allocations be revoked or reduced on the basis of non-use alone if they are found not to be used highly efficiently. That would reduce the cost of "new" V4 addresses while restricting who can get the allocations and for what. It may make the scarcity "fairer" between older existing network service providers and brand-new network service providers who did not get to start with pre-assigned IPv4, thus helping V6 deployment.

An important thing hampering IPv6 deployment is bound to be the newfound difficulty of getting IPv4 numbers. There are a large number of hosting and access network service providers with pre-distributed IPv4, so IPv4 is currently scarce enough to discourage new providers from deploying IPv6 (because it discourages new providers from starting and building any networks at all). But because of existing assignments, IPv4 is not scarce enough for existing providers to feel immediate pressure or any need or desire to provide end-to-end IPv6 connectivity --- not when they can potentially pay up for more IPv4 and gobble up other NSPs through acquisitions.

IPv4 numbers are required for network service providers to deploy IPv6 to subscribers, and for those subscribers to have connectivity to the majority of networks. If you cannot get the IPv4, then it is more likely that the new network service provider never gets created and never offers service in the first place. Alternatively, new providers find a costly source of some IPv4 numbers and pay up, only to face the eventual necessity of deploying IPv6 services at a higher price. That places existing providers with legacy IPv4 assignments at a competitive advantage, and gives existing providers reason to decline or delay fully deploying IPv6 end-to-end.

Even if you want to give your subscribers IPv6 only and use no V4 addresses for them, like some large mobile providers, you still end up needing a technology such as 464XLAT or CGN in the end, and you are still going to need a substantial sum of IPv4 addresses to do it.
What will it *cost* to upgrade every node on the Internet? And *how long* might it take?
You need an upgrade timeframe for "cost to upgrade" to make sense. It is unlikely any node will go forever without any upgrades. After a sufficient number of years or decades have passed, you get to a point where every node, or almost every node, has had to take new software or replacement hardware for multiple reasons anyway; once a new version of Windows fixes your issue, your one IPv4 tweak is 0.001% or less of the cost of the new release. For example, Windows XP devices are almost nowhere to be seen anymore, less than 1%. If the timeframe is about 5 years, then by that time most servers and personal computers ought to have been replaced, or at least be running a newer major OS version.

The upgrade has a cost, but at that point there are numerous other reasons it is required, and the upgrade cost is subsumed by the cost required just to maintain any system (with 5+ years, the upgrade cost goes down to almost zero compared to the cumulative annual upkeep and depreciation). Of course, upgrades that must be done immediately, or on a short schedule, are more expensive.

-- -JH
It may help IPv6 deployment if more V4 addresses are eventually released and allocated
No, it won't. The biggest impediment to IPv6 adoption is that too many people invest too much time and resources in finding ways to squeeze more blood from the IPv4 stone. If, tomorrow, the RFCs were changed so that every last address in the V4 space that could be re-purposed for public use was re-purposed:

- Every last one of these new IPs would be allocated years before the majority of network devices and end hosts received software updates to work properly.
- In the interim, a messy situation would exist with different endpoints unable to reach endpoints numbered in these spaces, creating operational nightmares for ISPs who frankly already have operational nightmares.
- At the end of this period, when it's all figured out, we're right back where we started. IPv4 will again be completely exhausted, with no more going back to the well to redefine sections to get more of it.

IPv6 isn't perfect. That's not an excuse to ignore it and invest the limited resources we have into Yet Another IPv4 Zombification Effort.
Tom Beecher wrote:
The biggest impediment to IPv6 adoption is that too many people invest too much time and resources in finding ways to squeeze more blood from the IPv4 stone.
Reverse that. IPv6 has impediments to adoption, which is why more time and resources are being spent to keep IPv4 usable until those impediments can be overcome.
IPv6 isn't perfect. That's not an excuse to ignore it and invest the limited resources we have into Yet Another IPv4 Zombification Effort.
As noted earlier, False Dilemma. Even worse, your thinking presupposes a finite amount of people-effort resources that must be properly managed by those superior in some fashion with more correct thinking. I hope you can see, when focused in that fashion, all that is wrong with that viewpoint. Joe
On Sat, Nov 20, 2021 at 6:27 PM Joe Maimon <jmaimon@jmaimon.com> wrote:
Tom Beecher wrote: [...]
IPv6 isn't perfect. That's not an excuse to ignore it and invest the limited resources we have into Yet Another IPv4 Zombification Effort.
As noted earlier, False Dilemma
Even worse, your thinking presupposes a finite amount of people-effort resources that must be properly managed by those superior in some fashion with more correct thinking.
This is absolutely true in the corporate world. You have a finite number of people working a finite number of hours each week on tasks that must be prioritized based on business needs. You can't magically make more people appear out of thin air without spending more money, which is generally against the needs of the business, and you can't generally make more working hours appear in the week without either magic or violating workers' rights. Thus, you have a finite amount of people-effort resources which must be managed by those higher up in the corporate structure.

As an old boss of mine once said... "You sum it up so well."

Matt
On Nov 19, 2021, at 12:11 , Jim <mysidia@gmail.com> wrote:
On Thu, Nov 18, 2021 at 8:24 PM David Conrad <drc@virtualized.org> wrote:
...
Some (not me) might argue it could (further) hamper IPv6 deployment by diverting limited resources.
It may help IPv6 deployment if more V4 addresses are eventually released and allocated Assuming the RIRs would ultimately like to provision the usage of addresses within their own policies that the new address releases are exclusively for 'IPv6 Transition Tech.', such as CGN NAT addresses,
CGN NAT is NOT a transition technology. DS-Lite is an example of a transition technology; CGN NAT is an example of a technology for avoiding the transition. Owen
On Nov 18, 2021, at 12:54 , John Gilmore <gnu@toad.com> wrote:
Steven Bakker <steven.bakker@ams-ix.net> wrote:
The ask is to update every ip stack in the world (including validation, equipment retirement, reconfiguration, etc)...
This raises a great question.
Is it even *doable*? What's the *risk*? What will it *cost* to upgrade every node on the Internet? And *how long* might it take?
We succeeded in upgrading every end-node and every router in the Internet in the late '90s and early 2000's, when we deployed CIDR. It was doable. We know that because we did it! (And if we hadn't done it, the Internet would not have scaled to world scale.)
Actually, CIDR didn't require upgrading every end-node, just some of them. That's what made it doable: updating only routers, not end-nodes.

Another thing that made it doable is that there were a LOT fewer end-nodes, and a much smaller vendor space when it came to the routers that needed to be updated. Further, in the CIDR deployment days, routers were almost entirely still CPU-switched rather than ASIC or even line-card switched. Heck, the workhorse backbone router that stimulated the development of CIDR was built on an open-standard Multibus backplane with a MIPS CPU, IIRC. That also made widespread software updates a much simpler proposition. Hardly anyone had a backbone router older than an AGS (in fact, even the AGS was relatively rare in favor of the AGS+). I'd venture to guess that something north of 90% of BGP-speaking routers were running the IOS of the day (version 8.something, if memory serves). Juniper didn't exist yet. Arista didn't exist yet. Foundry? Nope. Etc. Proteon was mostly out of business and didn't really make anything in that class. Wellfleet did, but they had a very small market share.

The lift is a lot harder today, and the potential benefits continue to shrink.
That may not be worth it to you. Or to your friends. But it would be useful to a lot of people -- hundreds of millions of people who you may never know. People who didn't get IP addresses when they were free, people outside the US and Europe, who will be able to buy and use them in 5 or 10 years, rather than leaving them unused and rotting on the vine.
I question this assertion. I might buy tens of thousands of people, but I find it pretty hard to give credibility to the idea that this would make a significant difference to hundreds of millions of people. It’s not going to reduce NAT or CGN deployment significantly. It’s not going to speed up IPv6 deployment in any meaningful way. You’re going to have to make a much stronger case for the benefit here being significant if you want that argument to be taken seriously. Owen
On 11/19/21 7:38 AM, Owen DeLong via NANOG wrote:
Actually, CIDR didn’t require upgrading every end-node, just some of them.
That’s what made it doable… Updating only routers, not end-nodes.
Another thing that made it doable is that there were a LOT fewer end-nodes and a much smaller vendor space when it came to the source of routers that needed to be updated.
Further, in the CIDR deployment days, routers were almost entirely still CPU-switched rather than ASIC or even line-card switched. Heck, the workhorse backbone router that stimulated the development of CIDR was built on an open-standard Mutlibus backplane with a MIPS CPU IIRC. That also made widespread software updates a much simpler proposition. Hardly anyone had a backbone router that was older than an AGS (in fact, even the AGS was relatively rare in favor of the AGS+).
I don't think you can overstate how ASICs made changing anything pretty much impossible. I'm not sure exactly when the cutover to ASICs started to happen in the 90's, but once it did, it was pretty much game over for ipv6. Instead of slipping an implementation into a release train and seeing what happens, it was getting buy-in from a product manager that had absolutely no interest in respinning silicon. I remember when Deering and I were talking to the GSR folks (iirc) and it was hopeless, since it would have to use the software path and nobody was going to buy a GSR for its software path.

It's why all of the pissing and moaning about what ipv6 looked like completely missed the point. There was a fuse lit in 1992 that ran until the first hardware-based routing was done. *Anything* that extended the address space would have been better.

Mike
On Fri, Nov 19, 2021 at 10:04 AM Michael Thomas <mike@mtcc.com> wrote:
I don't think you can overstate how ASIC's made changing anything pretty much impossible. It's why all of the pissing and moaning about what ipv6 looked like completely missed the point. There was a fuse lit in 1992 to when the first hardware based routing was done. *Anything* that extended the address space would have been better.
Obligatory 2007 plug: https://bill.herrin.us/network/ipxl.html -- William Herrin bill@herrin.us https://bill.herrin.us/
On 11/19/21 10:15 AM, William Herrin wrote:
On Fri, Nov 19, 2021 at 10:04 AM Michael Thomas <mike@mtcc.com> wrote:
I don't think you can overstate how ASIC's made changing anything pretty much impossible. It's why all of the pissing and moaning about what ipv6 looked like completely missed the point. There was a fuse lit in 1992 to when the first hardware based routing was done. *Anything* that extended the address space would have been better. Obligatory 2007 plug: https://bill.herrin.us/network/ipxl.html
And just as impossible since it would pop it out of the fast path. Does big iron support ipv6 these days? Mike
It appears that Michael Thomas <mike@mtcc.com> said:
And just as impossible since it would pop it out of the fast path. Does big iron support ipv6 these days?
My research associate Ms. Google advises me that Juniper does: https://www.juniper.net/documentation/us/en/software/junos/routing-overview/... As does Cisco: https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9600-ser... R's, John
On 11/19/21 2:44 PM, John Levine wrote:
It appears that Michael Thomas <mike@mtcc.com> said:
And just as impossible since it would pop it out of the fast path. Does big iron support ipv6 these days? My research associate Ms. Google advises me that Juniper does:
https://www.juniper.net/documentation/us/en/software/junos/routing-overview/...
As does Cisco:
https://www.cisco.com/c/en/us/products/collateral/switches/catalyst-9600-ser...
R's,
Both have sprawling product lines though even with fsvo big iron. It would be nice to hear that they can build out big networks, but given the use of ipv6 in mobile I assume they can. I wonder what the situation is for enterprise which doesn't have any direct drivers that I know of. Mike
It appears that Michael Thomas <mike@mtcc.com> said:
Both have sprawling product lines though even with fsvo big iron. It would be nice to hear that they can build out big networks, but given the use of ipv6 in mobile I assume they can. I wonder what the situation is for enterprise which doesn't have any direct drivers that I know of.
Google says they see about 1/3 of their users on IPv6 so I presume they are getting their routers from someone. As you note, many mobile networks are IPv6 internally, and they seem to work. R's, John
Cisco and Juniper routers have had v6 functionality for over 10 years, as have Lucent/Nokia and others. Check the UNH-IOL list at https://www.iol.unh.edu/registry/usgv6 for v6-compliant routers and switches. John Lee
On Sat, 20 Nov 2021 at 09:38, John Lee <jllee9753@gmail.com> wrote:
Cisco and Juniper routers have had v6 functionality for over 10 years. Lucent/Nokia, and others. Check UNL list at https://www.iol.unh.edu/registry/usgv6 for v6 compliant routers and switches.
People who work with network devices directly, using other APIs than email, phone, and Slack, know that feature parity to this day is not there. And they know that IPv6 is a massive time commitment, more than doubling much of the work involved in testing, automating, provisioning, and operating.

I could explain a lot about the relative merits of IPv4 and IPv6 design and how well those design choices map to silicon or benefit users, but I no longer care how bad or good IPv4 and IPv6 are relative to each other. I'm just tired of the dual-stack suck and how much it steals from me and my users. And of how we, the IP engineer community, have failed our customers in this transition. We cocked up; it happened on our watch. -- ++ytti
When considering an IPv6 product, I would suggest you read USGv6-Revision-1 (1) to define the specification you need for the product. Then go to the USGv6 Registry (2), select the features, and read the Supplier Declaration of Conformity (SDOC) to ensure that the product meets your requirements. Do this prior to having the discussion with the vendor's sales team. Also, ask for documents which provide details on performance and security testing. It will save you hours of troubleshooting problems and patching vulnerabilities. These are lessons learned from implementing IPv6 products.

(1) https://www.nist.gov/programs-projects/usgv6-program/usgv6-revision-1
(2) https://www.iol.unh.edu/registry/usgv6

Joe Klein
"inveniet viam, aut faciet" --- Seneca's Hercules Furens (Act II, Scene 1)
"I skate to where the puck is going to be, not to where it has been." -- Wayne Gretzky
"I never lose. I either win or learn" - Nelson Mandela
The speed of a router depends on its degree of parallelism. So, for quick routing table lookups, if you provide 128-bit TCAM for IPv6 in addition to 32-bit TCAM for IPv4, the speed is mostly the same, though, per entry, TCAM for IPv6 costs 4 times more and consumes 4 times more power than TCAM for IPv4.

However, as the global routing table of IPv6 is a lot smaller than that of IPv4, the number of TCAM entries needed for IPv6 is a lot smaller than for IPv4. But it is so primarily because IPv6 is not very widely deployed. Or, if you perform IPv6 address lookups in software, IPv6 is slower.

Masataka Ohta
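To put rough numbers on the comparison above, here is a back-of-the-envelope sketch in Python. The route counts are assumptions (very roughly 900k IPv4 and 140k IPv6 prefixes in the global table around late 2021), and real TCAMs allocate fixed-width slices rather than raw bits, so this only illustrates the per-entry arithmetic, not any particular ASIC.

    # Assumed, approximate global table sizes (late 2021); not measured values.
    IPV4_ROUTES = 900_000
    IPV6_ROUTES = 140_000

    IPV4_KEY_BITS = 32    # per-entry key width for IPv4
    IPV6_KEY_BITS = 128   # per-entry key width for IPv6 (~4x cost and power)

    ipv4_bits = IPV4_ROUTES * IPV4_KEY_BITS
    ipv6_bits = IPV6_ROUTES * IPV6_KEY_BITS

    print(f"IPv4: {IPV4_ROUTES:,} entries * {IPV4_KEY_BITS} bits = {ipv4_bits/1e6:.1f} Mbit")
    print(f"IPv6: {IPV6_ROUTES:,} entries * {IPV6_KEY_BITS} bits = {ipv6_bits/1e6:.1f} Mbit")
    print(f"IPv6/IPv4 TCAM-bit ratio: {ipv6_bits/ipv4_bits:.2f}")

Under these assumptions the smaller IPv6 table more than offsets the 4x per-entry cost, which is the point being made above; if IPv6 deployment grows its table toward IPv4's size, the 4x factor dominates.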
On Nov 20, 2021, at 00:41 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Speed of router depends on degree of parallelism.
So, for quick routing table lookup, if you provide 128bit TCAM for IPv6 in addition to 32bit TCAM for IPv4, speed is mostly same, though, for each entry, TCAM for IPv6 costs 4 times more and consumes 4 times more power than that for IPv4.
However, as global routing table size of IPv6 is a lot smaller than that of IPv4, the number of the entries of TCAM for IPv6 is a lot smaller than that for IPv4. But it is so primarily because IPv6 is not very widely deployed.
Uh, no. It is so because on average IPv4 is so fragmented that most providers of any size are advertising 8+ prefixes compared to a more realistic IPv6 average of 1-3. Owen
Owen DeLong wrote:
Uh, no. It is so because on average IPv4 is so fragmented that most providers of any size are advertising 8+ prefixes compared to a more realistic IPv6 average of 1-3.
Mergers of entities that each hold an IP address range are a primary reason for entities having multiple address ranges. As IPv6 was developed a lot later than IPv4, it has not suffered from mergers so much yet. Masataka Ohta
Subject: Re: is ipv6 fast, was silly Redeploying Date: Mon, Nov 22, 2021 at 02:04:55AM +0900 Quoting Masataka Ohta (mohta@necom830.hpcl.titech.ac.jp):
Mergers of entities having an IP address range is a primary reason of entities having multiple address ranges. As IPv6 was developed a lot later than IPv4, it has not suffered from mergers so much yet.
Yes. You are completely correct. But those entities usually have one v6 prefix each, and multiple v4 ones, because they've required more addresses. Not everyone is Apple, "hp"[0] or MIT, where the initial allocation still is mostly sufficient. (I believe MIT handed some back too.) Instead they had to ask repeatedly for smaller and smaller chunks of addresses. (Now they're buying them for prices that may well be motivating people to come up with crazy schemes of reusing reserved addresses.) In contrast, the v6 allocations are mostly sufficient, even for sprawling businesses. In the end, if they merge with another company, each merger brings one (1) more net, not a flock of v4 /24's.

Your reasoning is correct, but the size of the math matters more.

-- Måns Nilsson primary/secondary/besserwisser/machina MN-1334-RIPE SA0XLR +46 705 989668

[0] The real Hewlett-Packard made test equipment. What calls itself "hp" today is just another IT company.
Mans Nilsson wrote:
Not everyone are Apple, "hp"[0] or MIT, where initial allocation still is mostly sufficient.
The number of routing table entries is growing exponentially, not because of an increase in the number of ISPs, but because of multihoming. As such, if the entities requiring IPv4 multihoming also require IPv6 multihoming, the numbers of routing table entries will be the same. The proper solution is to have end-to-end multihoming: https://tools.ietf.org/id/draft-ohta-e2e-multihoming-02.txt
Your reasoning is correct, but the size of the math matters more.
Indeed, with the current operational practice, the global IPv4 routing table size is bounded below 16M. OTOH, that for IPv6 is unbounded. Masataka Ohta
On Nov 22, 2021, at 02:45 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Mans Nilsson wrote:
Not everyone are Apple, "hp"[0] or MIT, where initial allocation still is mostly sufficient.
The number of routing table entries is growing exponentially, not because of increase of the number of ISPs, but because of multihoming.
Again, wrong. The number is growing exponentially primarily because of the fragmentation that comes from recycling addresses.
As such, if entities requiring IPv4 multihoming will also require IPv6 multihoming, the numbers of routing table entries will be same.
There are actually ways to do IPv6 multihoming that don’t require using the same prefix with both providers. Yes, there are tradeoffs, and these mechanisms aren't even practical in IPv4, but they have been sufficiently widely implemented in IPv6 to say that they are viable in some cases. Nonetheless, multihoming isn’t creating 8-16 prefixes per ASN. Fragmentation is.
Your reasoning is correct, but the size of the math matters more.
Indeed, with the current operational practice. global IPv4 routing table size is bounded below 16M. OTOH, that for IPv6 is unbounded.
Only by virtue of the lack of addresses available in IPv4. The other tradeoffs associated with that limitation are rather unpalatable at best. Owen
Owen DeLong wrote:
The number of routing table entries is growing exponentially, not because of increase of the number of ISPs, but because of multihoming.
Again, wrong. The number is growing exponentially primarily because of the fragmentation that comes from recycling addresses.
Such fragmentation only occurs when address ranges are rented to others for multihoming but later recycled for internal use, which means it is caused by multihoming. Anyway, such cases are quite unlikely and negligible.
There are actually ways to do IPv6 multihoming that don’t require using the same prefix with both providers.
That's what I proposed 20 years ago both with IPv4 and IPv6 in: https://tools.ietf.org/id/draft-ohta-e2e-multihoming-02.txt
Yes, there are tradeoffs, but these mechanisms aren't even practical in IPv4,
Wrong. As is specified by RFC 2821:

When the lookup succeeds, the mapping can result in a list of alternative delivery addresses rather than a single address, because of multiple MX records, multihoming, or both. To provide reliable mail transmission, the SMTP client MUST be able to try (and retry) each of the relevant addresses in this list in order, until a delivery attempt succeeds. However, there MAY also be a configurable

the idea of end-to-end multihoming is widely deployed by SMTP at the application layer, though wider deployment requires TCP modification, as I wrote in my draft. A similar specification is also found in section 7.2 of RFC 1035.
but have been sufficiently widely implemented in IPv6 to say that they are viable in some cases.
You are just wrong. The IP layer has very little to do with it.

Masataka Ohta

PS: LISP is garbage.
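Here is a minimal sketch of the application-layer behaviour the RFC 2821 excerpt above requires, written in Python against the standard socket library (the hostname and port in the usage comment are placeholders): resolve every address for the peer and try each in order until one connect succeeds, which is the "end to end" part happening above the IP layer.

    import socket

    def connect_multihomed(host, port, timeout=5.0):
        """Try each address for `host` in order until one connects.

        This mirrors the RFC 2821 requirement quoted above: reliability
        against a dead path comes from iterating over all candidate
        addresses, not from any single address being reachable.
        """
        last_err = None
        for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                host, port, type=socket.SOCK_STREAM):
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            try:
                s.connect(sockaddr)   # first reachable address wins
                return s
            except OSError as err:
                last_err = err        # this path failed; try the next address
                s.close()
        raise OSError(f"all addresses for {host} failed") from last_err

    # Usage (placeholder host): sock = connect_multihomed("mail.example.com", 25)

Python's own socket.create_connection() implements essentially this loop, so the pattern is already ubiquitous at the application layer.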
On Nov 23, 2021, at 12:28 AM, Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Owen DeLong wrote:
The number of routing table entries is growing exponentially, not because of increase of the number of ISPs, but because of multihoming. Again, wrong. The number is growing exponentially primarily because of the fragmentation that comes from recycling addresses.
Such fragmentation only occurs when address ranges are rent to others for multihoming but later recycled for internal use, which means it is caused by multihoming.
Nope… It occurs when (e.g. HP or MIT or AMPR) sell off pieces of a class A as smaller prefixes to various other purchasers. It happens when /16s are sold off to multiple purchasers in pieces.
Anyway, such cases are quite unlikely and negligible.
ROFLMAO you are demonstrating your extreme detachment from reality: http://ipv4marketgroup.com http://ipv4auctions.com etc. There are multiple businesses that do almost nothing outside of these types of transactions.
There are actually ways to do IPv6 multihoming that don’t require using the same prefix with both providers.
That's what I proposed 20 years ago both with IPv4 and IPv6 in:
That's not one of the alternatives I was talking about.
Yes, there are tradeoffs, but these mechanisms aren't even practical in IPv4,
Wrong. As is specified by rfc2821:
When the lookup succeeds, the mapping can result in a list of alternative delivery addresses rather than a single address, because of multiple MX records, multihoming, or both. To provide reliable mail transmission, the SMTP client MUST be able to try (and retry) each of the relevant addresses in this list in order, until a delivery attempt succeeds. However, there MAY also be a configurable
the idea of end to end multihoming is widely deployed by SMTP at the application layer, though wider deployment require TCP modification as I wrote in my draft.
Again, NOT what I was talking about. I was talking about the fact that in IPv6, you can multi-home by applying a prefix from each provider to each subnet. If the upstream connection goes away, deprecate the RA for that prefix and/or mark it as no longer on-link.
Similar specification is also found in section 7.2 of rfc1035.
You are talking about pomegranate seeds, I’m talking about Apples. They are not the same, so your statements about my remarks are inherently flawed.
but have been sufficiently widely implemented in IPv6 to say that they are viable in some cases.
You are just wrong. IP layer has very little to do with it.
No, you’re just talking about a form of multihoming that has absolutely nothing to do with the forms I was referring to.
LISP is garbage.
Somewhat agree, though the concept wasn’t entirely wrong. Separating Locators from IDs is something we will eventually need to do in order to scale internet routing. LISP is definitely not the ideal way to do it, but was a valiant attempt if one assumes the constraints of having to operate in an existing IPv6 environment. If one is willing to modify the packet header, then there are better options. Owen
Owen DeLong wrote:
Again, wrong. The number is growing exponentially primarily because of the fragmentation that comes from recycling addresses.
Nope… It occurs when (e.g. HP or MIT or AMPR) sell off pieces of a class A as smaller prefixes to various other purchasers.
That, by no means, is recycling. Moreover, most, if not all, entities purchasing address ranges use them with multihoming.
ROFLMAO you are demonstrating your extreme detachment from reality:
So, you have never thought about the reason why people buy IP addresses.
THat's not one of the alternatives I was talking about.
That you haven't mentioned any alternative and have talked about nothing is your problem. Masataka Ohta
On Mon, Nov 22, 2021 at 2:49 AM Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Mans Nilsson wrote:
Not everyone are Apple, "hp"[0] or MIT, where initial allocation still is mostly sufficient.
The number of routing table entries is growing exponentially, not because of increase of the number of ISPs, but because of multihoming.
As such, if entities requiring IPv4 multihoming will also require IPv6 multihoming, the numbers of routing table entries will be same.
The proper solution is to have end to end multihoming:
I'd never read that. We'd made OpenWrt in particular use "source specific routing" for ipv6 by default, many years ago, but I don't know to what extent that facility is used:

ip -6 route add default from a:b:c:d::/64 dev A
ip -6 route add default from a:b:d:d::/64 dev B
Your reasoning is correct, but the size of the math matters more.
Indeed, with the current operational practice. global IPv4 routing table size is bounded below 16M. OTOH, that for IPv6 is unbounded.
Masataka Ohta
-- I tried to build a better future, a few times: https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org Dave Täht CEO, TekLibre, LLC
Dave Taht wrote:
The proper solution is to have end to end multihoming:
I'd never read that. We'd made openwrt in particular use "source specific routing" for ipv6 by default, many years ago, but I don't know to what extent that facility is used.
Considering that most, if not all, multihomed sites should have a rich (maybe full) routing table in at least some part of the site between the exit routers, to balance load between the routers, I can't see why source-specific routing was considered necessary, only to cause routing loops.

For multihoming with PA address ranges and plain TCP/UDP, what is necessary to survive an ISP failure is to have routing protocols carry the proper source address ranges for each routing table entry, and to modify end systems to listen to the routing protocol and choose the proper source address, which is against the poor architecture of IPv6/ND that assumes intelligent routers and dumb hosts.

But, even with that, if some ISP fails, TCP/UDP sessions through that ISP will fail and must be restarted with a new source address, which is not very useful. Moreover, if the destination is also inside another multihomed site with PA address ranges, all the destination addresses must be tried.

So, as modifying end systems is inevitable, there is no reason not to support full end-to-end multihoming, including modifications to support multiple addresses in TCP and some applications.

Masataka Ohta
On Wed, 24 Nov 2021 at 08:16, Masataka Ohta < mohta@necom830.hpcl.titech.ac.jp> wrote:
So, as modifying end systems is inevitable, there is no reason not to support full end to end multihoming including modifications to support multiple addresses by TCP and some applications.
Masataka Ohta
Are you proposing SCTP? There is sadly not much more hope for widespread adoption of that than of IPv6. Regards, Baldur
On Wed, 24 Nov 2021 at 16:16, Baldur Norddahl <baldur.norddahl@gmail.com> wrote:
Are you proposing SCTP? There is sadly not much more hope for widespread adoption of that as of IPv6.
If you use Apple, you use MP-TCP, for better UX while using both mobile and wifi. SCTP is no good, because you cannot transition between two connections: they have to overlap for a period, and there is no separation of identity and location. QUIC actually can do that, in that identity is PKI and location is the IP address, so you could roam from one IP to another IP without having overlapping connectivity. -- ++ytti
On Wed, Nov 24, 2021 at 9:12 AM Baldur Norddahl <baldur.norddahl@gmail.com> wrote:
On Wed, 24 Nov 2021 at 08:16, Masataka Ohta < mohta@necom830.hpcl.titech.ac.jp> wrote:
So, as modifying end systems is inevitable, there is no reason not to support full end to end multihoming including modifications to support multiple addresses by TCP and some applications.
Masataka Ohta
Are you proposing SCTP? There is sadly not much more hope for widespread adoption of that as of IPv6.
or perhaps MP-TCP? :) or shim6?
On 25 Nov 2021, at 7:57 am, Christopher Morrow <morrowc.lists@gmail.com> wrote:
Are you proposing SCTP? There is sadly not much more hope for widespread adoption of that as of IPv6.
or perhaps MP-TCP? :) or shim6?
Shim6 died a comprehensive death many years ago. I recall NANOG played a role in its untimely demise. :-) Geoff
On Wed, Nov 24, 2021 at 5:12 PM Geoff Huston <gih@apnic.net> wrote:
On 25 Nov 2021, at 7:57 am, Christopher Morrow <morrowc.lists@gmail.com> wrote:
Are you proposing SCTP? There is sadly not much more hope for widespread adoption of that as of IPv6.
or perhaps MP-TCP? :) or shim6?
Shim6 died a comprehensive death many yers ago. I recall NANOG played a role in it's untimely demise. :-)
oh, darn! :) Reading the ID that Masataka referenced, it sounded very much like shim6, about ~4 yrs prior to shim6's "invention". I also don't recall seeing the draft referenced during the shim6 conversations.

Also, for completeness, MP-TCP clearly does not help UDP or ICMP flows... nor IPSEC nor GRE nor... unless you HTTP over MP-TCP and encap UDP/ICMP/GRE/IPSEC over that! Talk about layer violations! Talk about fun! -chris
Christopher Morrow <morrowc.lists@gmail.com> writes:
Also, for completeness, MP-TCP clearly does not help UDP or ICMP flows... nor IPSEC nor GRE nor... unless you HTTP over MP-TCP and encap UDP/ICMP/GRE/IPSEC over that!
IP over DNS has been a thing forever. IP over DoH should work just fine.
Talk about layer violations! talk about fun!
Yes, fun... Bjørn
On 11/25/21 11:54 AM, Bjørn Mork wrote:
Christopher Morrow <morrowc.lists@gmail.com> writes:
Also, for completeness, MP-TCP clearly does not help UDP or ICMP flows... nor IPSEC nor GRE nor... unless you HTTP over MP-TCP and encap UDP/ICMP/GRE/IPSEC over that! IP over DNS has been a thing forever. IP over DoH should work just fine.
Talk about layer violations! talk about fun! Yes, fun...
Feh. I've written transistors over http. Beat that. Mike
On Nov 25, 2021, at 12:06 , Michael Thomas <mike@mtcc.com> wrote:
On 11/25/21 11:54 AM, Bjørn Mork wrote:
Christopher Morrow <morrowc.lists@gmail.com> writes:
Also, for completeness, MP-TCP clearly does not help UDP or ICMP flows... nor IPSEC nor GRE nor... unless you HTTP over MP-TCP and encap UDP/ICMP/GRE/IPSEC over that! IP over DNS has been a thing forever. IP over DoH should work just fine.
Talk about layer violations! talk about fun! Yes, fun...
Feh. I've written transistors over http. Beat that.
Mike
I think the only thing I know of that is sillier than that would be Avi Freedman’s BGP4 implementation on a palm pilot. Owen
Baldur Norddahl wrote:
Are you proposing SCTP? There is sadly not much more hope for widespread adoption of that as of IPv6.
My ID describes the architectural framework for both IPv4 and IPv6. Modification to TCP is discussed, for example, in: https://datatracker.ietf.org/doc/html/draft-arifumi-tcp-mh-00 I still think something like that is necessary before the IPv4 global routing table size becomes 16M (ignoring loopback/multicast/ClassE). Christopher Morrow wrote:
reading the ID that masataka referenced, it sounded very much like shim6 about ~4 yrs prior to shim6's "invention".
No, not at all.
I also don't recall seeing the draft referenced during the shim6 conversations.
Despite my ID saying:

All the other processing can be performed by transport layer (typically in kernel) using default or application specific timing of TCP. Without TCP, applications must be able to detect loss of connectivity in application dependent way

shim6 is wrongly architected to address the issue at the *connectionless* IP layer, where there is no proper period for timeout. Also, transport/application layer information such as TCP sequence numbers may offer proper security. The notion of a connection (including a half connection, such as a DNS query/reply at the application layer) is essential for proper state maintenance. A similar layering violation also occurred with network-layer PMTUD, which is why it is more harmful than useful. Masataka Ohta
On Nov 21, 2021, at 09:04 , Masataka Ohta <mohta@necom830.hpcl.titech.ac.jp> wrote:
Owen DeLong wrote:
Uh, no. It is so because on average IPv4 is so fragmented that most providers of any size are advertising 8+ prefixes compared to a more realistic IPv6 average of 1-3.
Mergers of entities having an IP address range is a primary reason of entities having multiple address ranges. As IPv6 was developed a lot later than IPv4, it has not suffered from mergers so much yet.
No, it is not. Slow start and other RIR policies around scarcity and fairness of distribution of the last crumbs are the primary contributor, with traffic engineering a somewhat distant second. Mergers are actually somewhere around 10th on the list last time I looked. Owen
Greetings John and all On 18.11.21 21:54, John Gilmore wrote:
We succeeded in upgrading every end-node and every router in the Internet in the late '90s and early 2000's, when we deployed CIDR. It was doable. We know that because we did it! (And if we hadn't done it, the Internet would not have scaled to world scale.)
I want to highlight one (hopefully) entertaining point. There is evidence in Geoff's and Tony's graphs of when all of this happened, because there was a drop in the number of routes in 1994. If you squint, you can see it here <https://bgp.potaroo.net/> just at the lower left-hand side of the graph. It's really funny to me because the scale of that graph has changed *dramatically* – bordering on two orders of magnitude. The dip was a substantial percentage of the routes at the time.

I will also point out that even though endpoints by and large were not involved, keeping the routers from falling down nevertheless was the result of an enormous effort and collaboration between researchers, developers, and operators, who I doubt very much would ever want to repeat that sort of experience.

Eliot
On Thu, Nov 18, 2021 at 1:21 PM John Gilmore <gnu@toad.com> wrote:
We have found no ASIC IP implementations that hardwire in assumptions about specific IP address ranges. If you know of any, please let us know, otherwise, let's let that strawman rest.
There's at least one: Marvell Prestera CX (it's either Prestera CX or DX, I forget which). It is in the Juniper EX4500, among others. It has a hardware-based bogon filter, applied when doing L3 routing, that cannot be disabled. cheers, lincoln.
participants (23)
- Baldur Norddahl
- Bjørn Mork
- Christopher Morrow
- Dave Taht
- David Conrad
- Eliot Lear
- Geoff Huston
- j k
- Jim
- Joe Maimon
- John Gilmore
- John Lee
- John Levine
- Karsten Thomann
- Lincoln Dale
- Masataka Ohta
- Matthew Petach
- Michael Thomas
- Måns Nilsson
- Owen DeLong
- Saku Ytti
- Tom Beecher
- William Herrin