At least once or twice a month I'm downloading something and find that IPv4 transfers significantly faster. Case in point: I downloaded the Proxmox ISO yesterday to a colo server with 50G uplinks. It loafed along at 2.4 Mbytes/s using default wget, which of course preferred IPv6. Adding -4 to wget made that shoot up to 80 Mbytes/s.
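(Before blaming the protocol, the first thing I'd check is whether the v4 and v6 downloads even landed on the same host; the hostname below is just a stand-in for whatever mirror wget actually resolved:

  dig +short A download.proxmox.com
  dig +short AAAA download.proxmox.com

If the A and AAAA records point at different boxes or CDN nodes, you're comparing two different servers, not two address families.)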
There are lots of reasons that could explain that which have nothing to do with the traffic being v4 vs. v6.

On Mon, Dec 1, 2025 at 4:44 PM Bryan Fields via NANOG <nanog@lists.nanog.org> wrote:
On 12/1/25 14:22, Jared Mauch via NANOG wrote:
I find myself having to tether off their networks when I'm on IPv4-only networks to access things like my hypervisors and other assets that are IPv6-only, because they have superior networking these days.
While I'll agree v6 is easy and should be deployed, I have to take issue with the current as-built network being superior.
At least once or twice a month I'm downloading something and find that IPv4 transfers significantly faster. Case in point: I downloaded the Proxmox ISO yesterday to a colo server with 50G uplinks. It loafed along at 2.4 Mbytes/s using default wget, which of course preferred IPv6. Adding -4 to wget made that shoot up to 80 Mbytes/s.
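For the record, the comparison was nothing fancier than this; the URL is a placeholder rather than the exact mirror path I used:

  # default behavior, which went out over v6 here
  wget -O /dev/null https://download.proxmox.com/iso/proxmox-ve_X.Y.iso
  # forced onto v4
  wget -4 -O /dev/null https://download.proxmox.com/iso/proxmox-ve_X.Y.iso
  # forced onto v6, for a clean A/B
  wget -6 -O /dev/null https://download.proxmox.com/iso/proxmox-ve_X.Y.iso

wget prints the average throughput at the end of each run, which is where the 2.4 vs 80 Mbytes/s figures come from.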
This is IPv6 behavior I've seen time and time again. I'm unsure where problems like these lie in the network, other than that it's not in mine or my peers'. I've seen the same issue where the v6 path to the same server bounces around the west coast and back, whilst IPv4 is 6 hops and 12 ms away.
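The sort of side-by-side I mean, with a placeholder hostname standing in for the actual server:

  mtr -4 -rwc 20 server.example.net
  mtr -6 -rwc 20 server.example.net

or plain traceroute -4 / traceroute -6 where mtr isn't installed. The v6 report is the one that wanders out to the west coast and back; the v4 report is the 6-hop, 12 ms path.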
This is exactly the sort of thing that holds IPv6 back by giving it a bad name.

--
Bryan Fields
727-409-1194 - Voice
http://bryanfields.net

_______________________________________________
NANOG mailing list
https://lists.nanog.org/archives/list/nanog@lists.nanog.org/message/APA2YIX4...