As HTTP seems to be a major cause of short-lived connections, and several large ISPs have demonstrated that large-scale transparent HTTP proxies work just fine, you could also move the IPv4 port 80 traffic off the CGN and onto a transparent HTTP proxy. Besides whatever benefit caching gives you by keeping connections local, it can also combine 1000 users trying to load Facebook into a handful of persistent connections to the Facebook servers. The proxy can of course have its own global IPv4 address rather than going through the NAT. I have no experience with large-scale HTTP proxy deployments, but I strongly suspect a single HTTP proxy can handle traffic for far more users than the low hundreds currently being suggested for NAT444, and it can be scaled out separately if required.

As an end user this is probably a little worse, with HTTP coming from a different IP address to everything else, but not by much. As a provider it may be much easier to scale to larger numbers of customers. The proxy can also take IPv4-only users to a dual-stacked site over IPv6, as I am under no illusions that even with IPv6 delivered to every customer you will still have customers behind IPv4-only NAT routers they bought themselves for quite a while. With some DNS tricks this might even let those users reach IPv6-only sites, though it would probably be better if they could not reach those sites at all, to give them an incentive to fix their IPv6.
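To make the dual-stack point concrete, here is a rough sketch (Python asyncio, purely illustrative, nothing I run in production) of a forwarder sitting wherever the redirected port 80 traffic lands. A real deployment would recover the original destination with something like TPROXY and would pool and reuse upstream connections; this just reads the Host header and lets getaddrinfo pick IPv6 for dual-stacked origins, so the subscriber side stays IPv4 while the origin side goes v6 where it can. The listen port is made up.

import asyncio
import socket

LISTEN_PORT = 3129  # made-up port that the port-80 redirect points at


async def pipe(reader, writer):
    # Copy bytes one way until the sender closes, then close the other side.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()


async def handle_client(client_reader, client_writer):
    # Read the request head and pull the origin out of the Host header,
    # since with a plain redirect the original destination IP is gone.
    head = await client_reader.readuntil(b"\r\n\r\n")
    host = b""
    for line in head.split(b"\r\n")[1:]:
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip()
            break
    if not host:
        client_writer.close()
        return

    hostname, _, port = host.partition(b":")
    port = int(port) if port else 80

    # getaddrinfo on a dual-stacked proxy will normally sort AAAA answers
    # first (RFC 6724), so a dual-stacked origin is reached over IPv6 even
    # though the subscriber side of the connection is IPv4-only.
    infos = await asyncio.get_running_loop().getaddrinfo(
        hostname.decode(), port, type=socket.SOCK_STREAM
    )
    family, _, _, _, addr = infos[0]
    origin_reader, origin_writer = await asyncio.open_connection(
        addr[0], addr[1], family=family
    )

    # Replay the request head we already consumed, then shuttle bytes
    # both ways until either side closes.
    origin_writer.write(head)
    await origin_writer.drain()
    await asyncio.gather(
        pipe(client_reader, origin_writer),
        pipe(origin_reader, client_writer),
        return_exceptions=True,
    )


async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", LISTEN_PORT)
    async with server:
        await server.serve_forever()


if __name__ == "__main__":
    asyncio.run(main())

Obviously a real proxy (Squid and friends) does all of this and a great deal more; the point is just that the v4-to-v6 hop can live in the proxy rather than in the CGN.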
On 7 September 2011 21:37, Leigh Porter <leigh.porter@ukbroadband.com> wrote:
> Other simple tricks such as ensuring that your own internal services such as DNS are available without traversing NAT also help.
As obvious as this probably is, I'm sure someone will overlook it! Similarly, providers with CDN nodes inside their network may want to talk to the CDN operator about having those reached directly from the internal addresses to avoid traversing the NAT, and I'm sure there are other services like this as well.

- Mike