On Jun 4, 2012, at 5:21 PM, Joe Maimon wrote:
Jeroen Massar wrote:
That indeed matches most of the corporate world quite well. That they are heavily misinformed does not make it the correct answer though.
Either you are correct and they are all wrong, or they have a perspective that you don't or won't see.
He is correct. I have seen their perspective and it is, in fact, misinformed and based largely on superstition.
Either way, I don't see them changing their minds anytime soon.
Very likely true, unfortunately. Zealots are rarely persuaded by facts, science, or anything based in reality, choosing instead to maintain their bubble of belief, even to the point of historically killing those who could not accept their misguided viewpoint. Nonetheless, over time, even humans eventually figured out that Galileo was right: the world is, indeed, round, does, in fact, orbit the sun (and not the other way around), and is not at the center of the universe. Given that we were eventually able to overcome the Catholic Church with those facts, I suspect that overcoming corporate IT mythology will be somewhat sooner and easier. It might even take less than 100 years instead of several hundred.
So how about we both accept that they exist and start designing the network to welcome rather than ostracize them, unless that is your intent.
I would rather educate them and let them experience the error of their ways until they learn than damage the network in the pursuit of inclusion in this case. If you reward bad behavior with adaptation to and accommodation of that behavior, you get more bad behavior. This was proven with the appeasement of Hitler in the late 1930s (hey, someone had to feed Godwin's law, right?) and has been confirmed by the recent corporate bail-outs, bank bail-outs, and the mortgage crisis. One could even argue that the existing corporate attitudes about NAT are a reflection of this behavior being rewarded with ALGs and other code constructs aimed at accommodating that bad behavior.
And the good thing is that if you can support jumbo frames, just turn it on and let pMTU do its work. Happy 9000's ;)
pMTU has been broken in IPv4 since the early days.
PMTUD has been broken in IPv4 since the early days because it didn't exist in the early days. PMTUD is a relatively recent feature for IPv4, and it has been getting progressively less broken since it was introduced.
It is still broken. It is also broken in IPv6. It will likely still be broken for the foreseeable future. This is
PMTU-D itself is not broken in IPv6, but some networks do break PMTU-D.
a) a problem that should not be ignored
True. Ignoring ignorance is no better than accommodating it. The correct answer to ignorance is education.
b) a failure in imagination when designing the protocol
Not really. In reality, it is a failure of implementers to follow the published standards. The protocol, as designed, works as expected if deployed in accordance with the specifications.
c) a missed opportunity to correct a systemic issue with IPv4
There are many of those (the most glaring being the failure to address scalability of the routing system). However, since, as near as I can tell, PMTU-D was a new feature for IPv6 that was subsequently back-ported to IPv4, I am not sure that statement really applies in this case. Many of the features we take for granted in IPv4 today were actually introduced as part of IPv6 development, including IPsec, PMTU-D, CIDR notation for prefix length, and more.
Or, better said: misconfiguring systems breaks things.
Why do switches auto-mdix these days?
Because it makes correct configuration easier. You can turn this off on most switches, in fact, and if you do, you can still misconfigure them. Any device with a non-buggy IPv6 implementation does not, by default, block ICMPv6 PTB messages. If you subsequently configure it to block them, then you have taken deliberate action to misconfigure your network.
Because insisting that things will work properly if you just configure them correctly turns out to be inferior to designing a system that requires less configuration to achieve the same goal.
Breaking PMTU-D in IPv6 requires configuration unless the implementation is buggy. Don't get me started on how bad a buggy Auto MDI/X implementation can make your life. Believe me, it is far worse than PMTU-D blocking.
Automate.
Already done. Most PMTU-D blocks in IPv6 are the result of operators taking deliberate configuration action to block packets that should not be blocked. That is equivalent to turning off Auto-MDI/X or Autonegotiation on the port.
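For what it's worth, the ICMPv6 Packet Too Big message those filters drop is tiny and carries exactly one piece of information the sender needs: the next-hop MTU. Here is a rough Python sketch of the RFC 4443 layout; the sample bytes at the bottom are made up for illustration, not captured traffic.

import struct

def parse_packet_too_big(icmpv6_payload):
    """Return the advertised next-hop MTU if this is a PTB message, else None."""
    if len(icmpv6_payload) < 8:
        return None
    msg_type, code, checksum, mtu = struct.unpack("!BBHI", icmpv6_payload[:8])
    if msg_type == 2 and code == 0:   # Type 2, Code 0: Packet Too Big
        return mtu                    # sender should shrink its path MTU to this
    return None

# Illustrative example: a PTB advertising a 1400 octet next-hop MTU
# (checksum zeroed here purely for demonstration).
sample = struct.pack("!BBHI", 2, 0, 0, 1400)
print(parse_packet_too_big(sample))   # -> 1400

Drop that one message at a firewall and the sending host never learns the number, which is the whole failure mode being described here.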
This whole thread is all about how IPv6 has not improved any of the issues that are well known with IPv4 and in many cases makes them worse.
You cannot unteach stupid people to do stupid things.
I disagree. People can be educated. It may take more effort than working around them, but it can be done.
Protocol changes will not suddenly make people understand that what they want to do is wrong and breaks said protocol.
Nope... This requires education.
Greets, Jeroen
You also cannot teach protocol people that there is protocol and then there is reality.
Huh? This seems nonsensical to me, so I am unsure what you mean.
Relying on ICMP exception messages was always wrong for normal network operation.
Here we must disagree. What else can you rely on? In order to characterize the path to a given destination, you must either get an exception back for packets that are too large, or you must get confirmation back that your packet arrived. The absence of arrival confirmation does not tell you anything about why the packet did not arrive, so assuming that it is due to size requires a rather lengthy conversation searching for the largest packet size that will pass through the path.

For example, consider that you are on an Ethernet segment with jumbo frames. PMTU-D is relatively efficient: you send a 9000 octet datagram and you get back an ICMP message telling you the largest size datagram that will pass. If there are several points where the PMTU is reduced along the way, you will have one round trip of this type for each of those points. Notice there are no waits for timeouts involved here.

Probing as you have proposed requires you to essentially do a binary search to arrive at some number n where 1280 ≤ n ≤ 9000, so you end up doing something like this:

Send 5140 octet datagram, wait for reply (how long?)
Send 3210 octet datagram, wait for reply (how long?)
Send 2245 octet datagram, wait for reply (how long?)
Send 1762 octet datagram, wait for reply (how long?)
Send 1521 octet datagram, wait for reply (how long?)
Send 1400 octet datagram, wait for reply (how long?)
Send 1340 octet datagram, wait for reply (how long?)
Send 1310 octet datagram, wait for reply (how long?)
Send 1296 octet datagram, wait for reply (how long?)
Send 1288 octet datagram, wait for reply (how long?)
Send 1284 octet datagram, wait for reply (how long?)
Send 1282 octet datagram, wait for reply (how long?)
Send 1281 octet datagram, wait for reply (how long?)
Settle on 1280 MTU...

So, you waited for 13 timeouts before you actually passed useful traffic? Or perhaps you putter along at the lowest possible MTU until you find some higher value you know works, so you're sending lots of extra traffic? That's fantastic for modern short-lived flows. In some cases you send more traffic for PMTU discovery than you receive in the entire life of the flow.

Owen
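P.S. To put rough numbers on the comparison above, here is a small Python sketch that counts the exchanges each strategy needs. The function names, the per-hop MTUs in hops, and the 1280/9000 bounds are illustrative assumptions; the probe loop mirrors the binary search sketched above, and the exact timeout count shifts by one or two depending on the search variant.

def pmtud_round_trips(hop_mtus):
    """Classic PMTU-D: one ICMPv6 PTB round trip per point where the MTU drops."""
    trips = 0
    current = 9000                      # start at the local (jumbo) MTU
    for mtu in hop_mtus:
        if mtu < current:               # packet too big at this hop
            trips += 1                  # one PTB comes back, then resend smaller
            current = mtu
    return trips

def probe_timeouts(path_mtu, low=1280, high=9000):
    """Probing without ICMP: binary search on packet size, one timeout per lost probe."""
    timeouts = 0
    while low < high:
        mid = (low + high + 1) // 2     # try the midpoint size
        if mid <= path_mtu:
            low = mid                   # probe delivered, raise the floor
        else:
            timeouts += 1               # probe silently lost: wait out a timeout
            high = mid - 1
    return timeouts

hops = [9000, 4352, 1500, 1280]         # hypothetical per-hop MTUs along the path
print(pmtud_round_trips(hops))          # 3 PTB exchanges, no timeout waits
print(probe_timeouts(min(hops)))        # about a dozen timeouts before settling on 1280

The PTB approach scales with the number of MTU steps on the path; the probing approach scales with the width of the search range, and every miss costs a full timeout rather than a round trip.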