Internet Exchanges supporting jumbo frames?
Hi, I'm trying to convince my local Internet Exchange (and it is not small; it exceeds 1 terabit per second daily) to adopt jumbo frames. For IPv6 it is hassle-free: Path MTU Discovery arranges the max MTU per connection/destination. For IPv4, it requires more planning. For instance, two datacenters tend to exchange significant traffic because of customers with disaster recovery in mind (saving the same content in two different datacenters, from two different suppliers). In most cases, these datacenters are quite far from each other, even in different countries. In this context, jumbo frames would allow maximum speed even when the latency is that of a typical international link. Could anyone share Internet Exchanges you know of that allow jumbo frames (like https://www.gr-ix.gr/specs/ does) and how you have benefited from it? Best regards, Kurt Kraut
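A rough way to quantify Kurt's latency point, under assumed numbers (150 ms RTT and 0.01% loss, neither taken from the thread), is the Mathis et al. approximation for loss-limited TCP throughput, rate <= (MSS/RTT) * (C/sqrt(p)): the ceiling scales linearly with MSS, so a jumbo-frame MSS lifts it on long international paths.

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Mathis et al. upper bound on single-flow TCP throughput, in bits/s."""
    return (mss_bytes * 8 / rtt_s) * (c / sqrt(loss_rate))

rtt = 0.150   # assumed 150 ms intercontinental RTT
loss = 1e-4   # assumed 0.01% packet loss
for mss in (1460, 8960):  # MSS for 1500B vs 9000B MTU (minus 20B IP + 20B TCP)
    mbps = mathis_throughput_bps(mss, rtt, loss) / 1e6
    print(f"MSS {mss}: ~{mbps:.1f} Mbit/s single-flow ceiling")
```

Under these assumptions the jumbo MSS raises the single-flow ceiling by the ratio 8960/1460, roughly six times, regardless of the RTT chosen.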
On 09/03/2016 15:26, Kurt Kraut via NANOG wrote:
Could anyone share Internet Exchanges you know of that allow jumbo frames (like https://www.gr-ix.gr/specs/ does) and how you have benefited from it?
Netnod does it in separate VLANs. -- Grzegorz Janoszka
On Wed 2016-Mar-09 15:32:32 +0100, Grzegorz Janoszka <Grzegorz@Janoszka.pl> wrote:
On 09/03/2016 15:26, Kurt Kraut via NANOG wrote:
Could anyone share Internet Exchanges you know of that allow jumbo frames (like https://www.gr-ix.gr/specs/ does) and how you have benefited from it?
Netnod does it in separate VLANs.
-- Grzegorz Janoszka
Same for the SIX. -- Hugo Slabbert | email, xmpp/jabber: hugo@slabnet.com pgp key: B178313E | also on Signal
Hi Kurt, On Wed, Mar 09, 2016 at 11:26:35AM -0300, Kurt Kraut via NANOG wrote:
I'm trying to convince my local Internet Exchange (and it is not small; it exceeds 1 terabit per second daily) to adopt jumbo frames. For IPv6 it is hassle-free: Path MTU Discovery arranges the max MTU per connection/destination.
For IPv4, it requires more planning. For instance, two datacenters tend to exchange significant traffic because of customers with disaster recovery in mind (saving the same content in two different datacenters, from two different suppliers). In most cases, these datacenters are quite far from each other, even in different countries. In this context, jumbo frames would allow maximum speed even when the latency is that of a typical international link.
Could anyone share Internet Exchanges you know of that allow jumbo frames (like https://www.gr-ix.gr/specs/ does) and how you have benefited from it?
You might find this presentation interesting: https://www.nanog.org/sites/default/files/wednesday.general.steenbergen.anti... The presenter argues: "Internet-wide Jumbo Frames will probably cause infinitely more harm than good under the current technology." Kind regards, Job
On 9 March 2016 at 16:34, Job Snijders <job@instituut.net> wrote:
https://www.nanog.org/sites/default/files/wednesday.general.steenbergen.anti...
An IXP can verify whether a member's MTU is too large or too small with an active poller. The poller itself runs a larger-than-spec MTU: it sends ping packets of max_size+1; if they get through, the customer's MTU is too large. It also sends max_size; if that does not work, the customer's MTU is too small. As icing on top, it sends max_size+1 fragmented into max_size and 1, and sees what comes back. The IXP is the only interface in the whole Internet which collapses MTU to 1500B; private peers regularly have a higher MTU, and nearly everyone runs their core at a higher MTU. I think it's crucial that we stop thinking of MTU as a single thing; we should separate edge MTU and core MTU, which is how we already think and provision our own networks. The question then becomes: is the IXP edge or core? I would run core MTU at the IXP, so edge MTU can be tunneled over it without fragmentation. The IXP can offer edgeMTU and coreMTU VLANs, so that people who are religiously against this can peer only in the edgeMTU VLAN. -- ++ytti
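A minimal sketch of the poller logic described above, assuming Linux iputils ping with DF set and an example 9000-byte VLAN L3 MTU; the fragmented third probe is omitted, and all names, sizes and addresses are illustrative, not part of any real IXP tooling:

```python
import subprocess

VLAN_MTU = 9000                  # assumed jumbo VLAN L3 MTU (example value)
MAX_PAYLOAD = VLAN_MTU - 20 - 8  # ICMP payload = MTU minus IPv4 + ICMP headers

def probe(dst, payload_bytes):
    """Send one ICMP echo of payload_bytes with DF set (Linux iputils ping).
    Returns True if an echo reply came back."""
    cmd = ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(payload_bytes), dst]
    return subprocess.run(cmd, capture_output=True).returncode == 0

def classify(replies_at_max, replies_above_max):
    """Saku's two checks: max_size must work, max_size+1 must not."""
    if replies_above_max:
        return "MTU too large"   # member accepted a frame above the VLAN spec
    if not replies_at_max:
        return "MTU too small"   # member dropped a frame the VLAN spec allows
    return "OK"

def check_member(addr):
    return classify(probe(addr, MAX_PAYLOAD), probe(addr, MAX_PAYLOAD + 1))
```

The classification is a pure function of the two probe results, so the same logic could feed the quarantine workflow discussed later in the thread.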
Saku Ytti wrote:
The poller itself runs a larger-than-spec MTU: it sends ping packets of max_size+1; if they get through, the customer's MTU is too large. It also sends max_size; if that does not work, the customer's MTU is too small. As icing on top, it sends max_size+1 fragmented into max_size and 1, and sees what comes back.
You're recommending that routers at IXPs do in-flight fragmentation? Nick
Saku Ytti wrote:
I'm suggesting IXP has active poller which detects customer MTU misconfigs.
any ixp configuration which requires active polling to ensure correct configuration is doomed to failure. You are completely overestimating human nature if you believe that the IXP operator can make this work by harassing people into configuring the correct mtu, even if the data is clear that their systems are misconfigured. Nick
On 9 March 2016 at 20:25, Nick Hilliard <nick@foobar.org> wrote:
any ixp configuration which requires active polling to ensure correct configuration is doomed to failure. You are completely overestimating human nature if you believe that the IXP operator can make this work by harassing people into configuring the correct mtu, even if the data is clear that their systems are misconfigured.
It's not a novel idea, IXPs already do active polling, even ARP sponges. In a competitive market, hopefully customers will choose the IXP operator who knows how to ensure minimal pain for the customers. -- ++ytti
On 9 Mar 2016, at 18:29, Saku Ytti <saku@ytti.fi> wrote:
It's not a novel idea, IXPs already do active polling, even ARP sponges. In a competitive market, hopefully customers will choose the IXP operator who knows how to ensure minimal pain for the customers.
There is a critical difference between these two situations. In the case of an arp sponge, the ixp operator has control of both the polling and the workaround. In the case of mtu management they would only have control of the polling, not the remediation. The point I was making is that an ixp operator can only control their own infrastructure. Once it's someone else's infrastructure, all you can do is make polite suggestions. Nick
On 9 March 2016 at 20:59, Nick Hilliard <nick@foobar.org> wrote:
There is a critical difference between these two situations. In the case of an arp sponge, the ixp operator has control of both the polling and the workaround. In the case of mtu management they would only have control of the polling, not the remediation. The point I was making is that an ixp operator can only control their own infrastructure. Once it's someone else's infrastructure, all you can do is make polite suggestions.
If customer does not react, put it on quarantine VLAN. This can be automated too. Wrong MTU => open internal case, contact customers email, no customer response in N days, quarantine VLAN. Even for the most outrageous success stories in the world, the majority of people would have said before attempting that it won't work, because saying that is safe and easy. And usually they are right, most things don't work, but it's very difficult to actually know without trying what works and what doesn't. Luckily we have actual IXPs running big and small VLAN MTUs, even without this monitoring capability, and the Internet still works. -- ++ytti
Saku Ytti wrote:
If customer does not react, put it on quarantine VLAN. This can be automated too. Wrong MTU => open internal case, contact customers email, no customer response in N days, quarantine VLAN.
Even for the most outrageous success stories in the world, the majority of people would have said before attempting that it won't work, because saying that is safe and easy. And usually they are right, most things don't work, but it's very difficult to actually know without trying what works and what doesn't. Luckily we have actual IXPs running big and small VLAN MTUs, even without this monitoring capability, and the Internet still works.
Imo, there is not enough value for an IXP to do such monitoring, especially assuming we agree with Richard Steenbergen's conclusion in his presentation. Promoting that kind of monitoring as a differentiator will surely add an extra bullet point to marketing material, but it will be a trivial part of the decision process of a potential customer. --Aris
Saku Ytti wrote:
If customer does not react, put it on quarantine VLAN. This can be automated too. Wrong MTU => open internal case, contact customers email, no customer response in N days, quarantine VLAN.
... and then the customer will leave the service down because the primary peering lan works fine and they couldn't be bothered fixing jumbo lan connectivity, because the neteng who wanted the 9000 byte mtu connectivity in the first place got distracted by a squirrel, or left the company, or was too busy doing other things.
... most things don't work, but it's very difficult to actually know without trying what works and what doesn't.
I've spent a good deal of time and effort trying to get a jumbo peering vlan to work and it didn't work, for the reasons that I've mentioned and others. For example, many types of hardware don't allow you to specify a different MTU for different .1q tags on the same physical interface. This means that if you want a connection to a jumbo MTU vlan and a standard mtu vlan, you need two separate connections into the IXP. At that point, the ixp participant is unlikely to want to bother because there's no cost:value justification for getting the second connection. Don't get me wrong: jumbo MTU IXPs are a great idea in theory. In practice, they cause an inordinate amount of pain. Nick
On 9 March 2016 at 21:46, Nick Hilliard <nick@foobar.org> wrote:
I've spent a good deal of time and effort trying to get a jumbo peering vlan to work and it didn't work for the reasons that I've mentioned, and others.
It works and has worked 2 decades in real IXP. -- ++ytti, boy who didn't cry wolf
Saku Ytti wrote:
It works and has worked 2 decades in real IXP.
If you're referring to Netnod, this started out as an FDDI platform with a native max frame size of 4470. Maintaining something which already exists is not nearly as difficult as starting something from scratch and trying to reach a critical mass. In the case of INEX and several other IXPs, it failed because of (in no particular order):
- hardware problems
- lack of interest among ixp participants outside individual pushers
- lack of consensus about what MTU should be chosen
- operational problems causing people to "temporarily disable connectivity until someone can take a look at it", i.e. permanently
- additional expense in some situations
- the main peering lan worked fine, ie no overriding value proposition
- pmtu problems
- fragile and troublesome to debug when things went wrong
++ytti, boy who didn't cry wolf
Many IXPs have either looked at or attempted to build jumbo peering lans. You can see how well they worked out by looking at the number of successful deployments. The reason for this tiny number isn't due to lack of effort on the part of the ixp operators. Nick, boy who did the jumbo vlan thing and got the t shirt
On Wed, 9 Mar 2016, Nick Hilliard wrote:
Many IXPs have either looked at or attempted to build jumbo peering lans. You can see how well they worked out by looking at the number of successful deployments. The reason for this tiny number isn't due to lack of effort on the part of the ixp operators.
I believe all IXP operators should offer higher MTU vlans, so that the ISPs who are interested can use them. If individual ISPs are not interested, then they don't have to use it. It's available if they gain interest. The whole point of an IX is to be a market place where interested parties can talk to each other. The IXP should not (within reason) limit what services the ISPs can run across the infrastructure. If two ISPs need higher than 1500 MTU between them, then forcing them to connect outside of the IXP L2 infrastructure doesn't make any sense to me, when it's fairly easy for the IXP to offer this service. The IXPs who offer "private lans" between two parties, do they generally limit these as well to 1500 L3 MTU? -- Mikael Abrahamsson email: swmike@swm.pp.se
On 9 Mar 2016, at 21:17, Mikael Abrahamsson <swmike@swm.pp.se> wrote:
On Wed, 9 Mar 2016, Nick Hilliard wrote:
Many IXPs have either looked at or attempted to build jumbo peering lans. You can see how well they worked out by looking at the number of successful deployments. The reason for this tiny number isn't due to lack of effort on the part of the ixp operators.
I believe all IXP operators should offer higher MTU vlans, so that the ISPs who are interested can use them. If individual ISPs are not interested, then they don't have to use it. It's available if they gain interest.
In my experience many (most) IXP members don’t want multiple VLANs by default, as that drives up operational complexity. I am not saying they are right, I am just saying that is reality.
The whole point of an IX is to be a market place where interested parties can talk to each other. The IXP should not (within reason) limit what services the ISPs can run across the infrastructure. If two ISPs need higher than 1500 MTU between them, then forcing them to connect outside of the IXP L2 infrastructure doesn't make any sense to me, when it's fairly easy for the IXP to offer this service.
Most IXPs offer private VLANs and I assume these can support any MTU size you want. Best Regards, - kurtis -
On 9 March 2016 at 22:56, Nick Hilliard <nick@foobar.org> wrote:
- hardware problems
If we build everything for the lowest common denominator, we'll have an Internet where just HTTP/80 works at 576B. You can certainly find a platform which has problems doing anything else.
- lack of interest among ixp participants outside individual pushers
They probably want faster horses.
- lack of consensus about what MTU should be chosen
If we stop thinking of MTU as a single entity and start thinking of it as edge MTU and core MTU, the choice becomes less important, as long as the core MTU covers the overhead on top of the edge. I would go for 1500B edge, and 9100B core, but that's just me.
- operational problems causing people to "temporarily disable connectivity until someone can take a look at it", i.e. permanently.
Vague. But ultimately this is what you always do when an issue is not solved; sometimes you just have to give up: 'ok, far end is gone, let's close this connection'.
- additional expense in some situations
Vague. 'Sometimes something has some cost which is more than in some other situation sometimes'.
- the main peering lan worked fine, ie no overriding value proposition
99% of Internet users would likely be happy with a 576B, HTTP-only Internet. I'm still not comfortable accepting that that is the only thing the Internet should be.
- pmtu problems
Immaterial, it is there regardless.
- fragile and troublesome to debug when things went wrong
I've proposed an automated, fully tool-able solution for the IXP to verify that customers have a correct config. People who don't want to deal with this, who don't believe in this, can peer only over the edgeMTU VLAN and have exactly the same situation as today. -- ++ytti
Saku Ytti wrote:
I would go for 1500B edge, and 9100B core, but that's just me.
Other people would be fine with 1522 core because that suits both their needs and equipment limitations. So what do you do? Go with 9100 because it suits you, or 9000 because that's what lots of other people use? Or 4470 because of history? Or 1522 because that enables you to pad on some extra headers and get 1500 payload, and works for more people but is too meh for others to contemplate? Or 9000 and some slop because you commit to carrying 9000 payload on your network, whereas other people only commit to 9000 total frame size? Do you understand how bad humans are at making decisions like this? And how truly awful some equipment is that people install at IXPs?
I've proposed an automated, fully tool-able solution for the IXP to verify that customers have a correct config.
but you haven't solved the human problem. The IXP operator does not have enable on IXP participant routers. Nick
On 10 March 2016 at 00:01, Nick Hilliard <nick@foobar.org> wrote:
Other people would be fine with 1522 core because that suits both their needs and equipment limitations. So what do you do? Go with 9100 because it suits you, or 9000 because that's what lots of other people use? Or 4470 because of history? Or 1522 because that enables you to pad on some extra headers and get 1500 payload, and works for more people but is too meh for others to contemplate? Or 9000 and some slop because you commit to carrying 9000 payload on your network, whereas other people only commit to 9000 total frame size?
I don't think it's super important. The IXP will do what it thinks is best for the coreMTU.
And how truly awful some equipment is that people install at IXPs?
People with awful kit are free to do edgeMTU only.
but you haven't solved the human problem. The IXP operator does not have enable on IXP participant routers.
A member may puke an L2 loop into the IXP; you must have some channel to deal with your customers. If that channel fails, you quarantine the VLAN or shut down the port. If you cannot have any communication with your members, I can see how this seems like a particularly difficult problem. -- ++ytti
Saku Ytti wrote:
A member may puke an L2 loop into the IXP; you must have some channel to deal with your customers.
First, mac filters. Second, if someone l2 loops and it causes problems because of hardware failure on our side, we reserve the right to pull connectivity: https://www.inex.ie/technical/maintenance
In the case of emergencies, INEX reserves the right to perform critical maintenance on a 24x7x365 basis without prior notification to INEX members. Emergencies are defined as situations where: [...] - member port misconfiguration or hardware/software bug causes loss of service or down-time for other INEX members, in the potential situation where this is not prevented by INEX port security measures
Fortunately, it's only rarely that this happens. Third, someone puking an l2 loop to the IXP is a problem which may affect the ixp infrastructure. This is not the case when people muck up their MTU config. There is a demarcation line here: one is an IXP problem; the other is not. Nick
On Wed, 9 Mar 2016, Nick Hilliard wrote:
For example, many types of hardware don't allow you to specify a different MTU for different .1q tags on the same physical interface.
What hardware types typically connected to an IXP would that be, where this would be a problem? On all platforms I've configured and connected to an IXP, they would all be configured by setting max L2 MTU on the main interface, and then you configure whatever needed IPv4 and IPv6 L3 MTU on the subinterface. -- Mikael Abrahamsson email: swmike@swm.pp.se
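As a rough illustration of the split Mikael describes, an IOS-style configuration might look like the sketch below. Interface names, VLAN IDs and addresses are invented examples, and exact syntax varies by platform and software version, which is precisely where the problems reported in this thread arose.

```
! physical port: one L2 MTU, raised once to the maximum
interface TenGigabitEthernet0/0
 mtu 9216
!
! standard peering VLAN: L3 MTU pinned to 1500 on the subinterface
interface TenGigabitEthernet0/0.100
 encapsulation dot1Q 100
 ip address 192.0.2.10 255.255.255.0
 ip mtu 1500
!
! jumbo peering VLAN: higher L3 MTU on the subinterface only
interface TenGigabitEthernet0/0.200
 encapsulation dot1Q 200
 ip address 198.51.100.10 255.255.255.0
 ip mtu 9000
 ipv6 mtu 9000
```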
Mikael Abrahamsson wrote:
On all platforms I've configured and connected to an IXP, they would all be configured by setting max L2 MTU on the main interface, and then you configure whatever needed IPv4 and IPv6 L3 MTU on the subinterface.
iirc, we had problems with a bunch of ios based platforms. It worked fine on junos / xr platforms. I share your surprise that this could even have caused a problem, but it did. Nick
On 9 March 2016 at 22:28, Nick Hilliard <nick@foobar.org> wrote:
iirc, we had problems with a bunch of ios based platforms. It worked fine on junos / xr platforms. I share your surprise that this could even have caused a problem, but it did.
This is a very poor reason to kill it for everyone: 'I recall I may have found a platform where it maybe didn't work'. Yet it's super common to have L3Termination - L2Aggregation - Customer, to sell various MTU options to customers, even to use the L2Aggregation as core between two L3Terminations. All of these require per-logical-interface L3 MTU, while L2 MTU is usually set to max.
From Cisco, I've done this at least on 720x, 7301, 7304 NPE/NSE, 7600, 6500, GSR, CRS-1, ASR1k, ASR9k, 2600, 3600, probably some others too.
-- ++ytti
Until you've run an IXP, you have no idea how finicky or clueless some network operators are. ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest Internet Exchange http://www.midwest-ix.com
Kurt Kraut via NANOG wrote:
I'm trying to convince my local Internet Exchange (and it is not small; it exceeds 1 terabit per second daily) to adopt jumbo frames.
this has been tried before at many ixps. No matter how good an idea it sounds like, most organisations are welded hard to the idea of a 1500 byte mtu. Even for those who use larger MTUs on their networks, you're likely to find that there is no agreement on the mtu that should be used. Some will want 9000, some 9200, others 4470 and some people will complain that they have some old device somewhere that doesn't support anything more than 1522, and could everyone kindly agree to that instead. Meanwhile, if anyone gets the larger MTU wrong anywhere on their network, packets will be blackholed and customers will end up unhappy. Management will demand that the IXP jumbo service is disconnected until the root cause is fixed, or worse still, will blame the IXP for some mumble relating to how things worked better before enabling jumbo mtus. Nick
2016-03-09 11:45 GMT-03:00 Nick Hilliard <nick@foobar.org>:
this has been tried before at many ixps. No matter how good an idea it sounds like, most organisations are welded hard to the idea of a 1500 byte mtu. Even for those who use larger MTUs on their networks, you're likely to find that there is no agreement on the mtu that should be used. Some will want 9000, some 9200, others 4470 and some people will complain that they have some old device somewhere that doesn't support anything more than 1522, and could everyone kindly agree to that instead.
Hi Nick, Thank you for replying so quickly. I don't see why consensus on an MTU must be reached. IPv6 Path MTU Discovery would handle it by itself, wouldn't it? If one participant supports 9k and another 4k, the traffic between them would be at 4k with no manual intervention. If two participants adopt 9k, hooray, it will be 9k thanks to PMTUD. Am I missing something? Best regards, Kurt Kraut
Maybe breaking v4 in the process? ----- Mike Hammett Intelligent Computing Solutions http://www.ics-il.com Midwest Internet Exchange http://www.midwest-ix.com
Hi Mike, The adoption of jumbo frames in an IXP doesn't break IPv4. For an ISP, their corporate and residential users would still use 1.5k. For datacenters, their local switches and servers are still set to 1.5k MTU. Nothing will break. When needed, if needed and where supported, a specific server, a specific switch, or a specific router can raise its MTU up to the max MTU supported by the IXP, if the operator knows the destination also supports it, like in the disaster recovery example I gave. For IPv6, the best MTU will be detected and used with no operational effort. For those who don't care about it, an IXP adopting jumbo frames wouldn't demand any kind of change to their network. They just set their interfaces to 1500 bytes and go rest. Those who care, like me, can benefit from it, and for that reason I see no reason not to adopt it. Best regards, Kurt Kraut 2016-03-09 11:53 GMT-03:00 Mike Hammett <nanog@ics-il.net>:
Maybe breaking v4 in the process?
Is there no way to avoid breaking MTU for IPv4 while still using PMTUD for IPv6, i.e. sticking to 1500 for IPv4 and using something larger for IPv6? Kind regards, Stefan On 09.03.2016 15:59, Kurt Kraut via NANOG wrote:
Hi Mike,
The adoption of jumbo frames in an IXP doesn't break IPv4. For an ISP, their corporate and residential users would still use 1.5k. For datacenters, their local switches and servers are still set to 1.5k MTU. Nothing will break. When needed, if needed and where supported, a specific server, a specific switch, or a specific router can raise its MTU up to the max MTU supported by the IXP, if the operator knows the destination also supports it, like in the disaster recovery example I gave. For IPv6, the best MTU will be detected and used with no operational effort.
For those who don't care about it, an IXP adopting jumbo frames wouldn't demand any kind of change to their network. They just set their interfaces to 1500 bytes and go rest. Those who care, like me, can benefit from it, and for that reason I see no reason not to adopt it.
Kurt Kraut wrote:
Thank you for replying so quickly. I don't see why consensus on an MTU must be reached. IPv6 Path MTU Discovery would handle it by itself, wouldn't it? If one participant supports 9k and another 4k, the traffic between them would be at 4k with no manual intervention. If two participants adopt 9k, hooray, it will be 9k thanks to PMTUD.
Am I missing something?
For starters, if you send a 9001 byte packet to a router which has its interface MTU configured to be 9000 bytes, the packet will be blackholed, not rejected with a PTB. Even if it weren't, how many icmp PTB packets per second would a router be happy to generate before rate limiters kicked in? Once someone malicious works that out, they can send that number of crafted packets per second through the IXP, thereby creating a denial of service situation. There are many other problems, such as pmtud not working properly in the general case. Nick
Could you do the same with a 1501 byte packet?
On Mar 9, 2016, at 10:51 AM, Nick Hilliard <nick@foobar.org> wrote:
for starters, if you send a 9001 byte packet to a router which has its interface MTU configured to be 9000 bytes, the packet will be blackholed, not rejected with a PTB.
On Wed, 9 Mar 2016, David Bass wrote:
Could you do the same with a 1501 byte packet?
I have many times pinged with 10000 byte packets on a device that has "ip mtu 9000" configured on it, so it sends out two fragments, one being 9000 and the other around 1100 bytes, only to get back a stream of fragments, none of them larger than 1500 bytes. MTU and MRU are two different things. Regarding jumbo usage, the biggest immediate benefit I can see would be if two ISPs want to exchange tunneled traffic with each other: even if the customer access is 1500, there is definitely benefit in being able to slap an L3 tunnel header on that packet, send it as ~1550 bytes to the other ISP, and then have them take the header off again, without having to handle tunnel packet fragments (which tend to be quite resource intensive). -- Mikael Abrahamsson email: swmike@swm.pp.se
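Mikael's ~1550-byte figure can be made concrete with simple arithmetic: typical encapsulation overheads (common defaults assuming an IPv4 outer header; these values are not from the thread, so check your own encapsulation) added to a 1500-byte edge packet give the minimum core MTU that avoids tunnel fragmentation.

```python
# Typical per-packet encapsulation overhead in bytes, IPv4 outer transport.
TUNNEL_OVERHEAD = {
    "IPIP":  20,  # outer IPv4 header only
    "GRE":   24,  # outer IPv4 + 4-byte basic GRE header (no key/sequence)
    "VXLAN": 50,  # outer IPv4 + UDP + VXLAN headers + inner Ethernet
}

def required_core_mtu(edge_mtu, encap):
    """Smallest core/IXP MTU that carries an edge_mtu packet unfragmented."""
    return edge_mtu + TUNNEL_OVERHEAD[encap]

for encap in TUNNEL_OVERHEAD:
    print(encap, required_core_mtu(1500, encap))
```

A 1500B edge packet in VXLAN needs 1550B on the wire, matching the ~1550 mentioned above; anything up to the 9100B core suggested earlier in the thread leaves ample headroom.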
Mikael Abrahamsson wrote:
I have many times ping:ed with 10000 byte packets on a device that has "ip mtu 9000" configured on it, so it sends out two fragments, one being 9000, the other one around 1100 bytes, only to get back a stream of fragments, none of them larger than 1500 bytes.
here's some data on INEX from a server interface with 9000 mtu. fping has 40 bytes overhead:
# wc -l ixp-router-addresses.txt
85 ixp-router-addresses.txt
# fping -b 1460 < ixp-router-addresses.txt | grep -c unreachable
0
# fping -b 1500 < ixp-router-addresses.txt | grep -c unreachable
10
# fping -b 5000 < ixp-router-addresses.txt | grep -c unreachable
11
# fping -b 8960 < ixp-router-addresses.txt | grep -c unreachable
12
Out of interest, there were 5 different vendors in the output, according to the MAC addresses returned. Some of this may be caused by inappropriate ICMP filtering on the routers, but the point is that it would be unwise to depend on routers doing the right thing here. If you're going to have a jumbo-MTU VLAN at an IXP, the VLAN needs to be a hard specification, not an aspiration with any variance.

Nick
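For anyone repeating Nick's test: with the 40-byte overhead he mentions, the -b values map to on-wire IPv4 packet sizes like this (trivial arithmetic, shown only to make his numbers explicit):

```python
# Per Nick's message, fping's -b sets the data size and fping adds
# 40 bytes of overhead, so the on-wire IP packet is payload + 40.

FPING_OVERHEAD = 40

def wire_size(b_flag: int) -> int:
    """On-wire IPv4 packet size for a given fping -b value."""
    return b_flag + FPING_OVERHEAD

# -b 1460 -> 1500-byte packets: everyone should answer
# -b 1500 -> 1540-byte packets: the first size needing jumbo support
# -b 8960 -> 9000-byte packets: the full jumbo frame under test
for b in (1460, 1500, 5000, 8960):
    print(b, "->", wire_size(b))
```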
On Wed, Mar 9, 2016 at 9:50 AM, Kurt Kraut via NANOG <nanog@nanog.org> wrote:
Thank you for replying so quickly. I don't see why consensus on an MTU must be reached. IPv6 Path MTU Discovery would handle it by itself, wouldn't it? If one participant supports 9k and another 4k, the traffic between them would be at 4k with no manual intervention. If two participants adopt 9k, hooray, it will be 9k thanks to PMTUD.
Am I missing something?
Hi Kurt,

As far as I know, there is no "discovery" of MTU on an Ethernet LAN. That's Link MTU, not Path MTU. Unhappy things happen when the participants don't agree exactly on the layer-2 MTU. It's a layer-2 thing; IPv6 and layer-3 PMTUD can't discover that layer-2 problem. PMTUD discovers when a link *reports* that the MTU is too small, not when packets vanish as a result of a layer-2 error.

As a result, non-1500-byte MTUs in a network with multiple connected organizations can be brittle: human beings are bad at each configuring their router to exactly the same non-standard configuration as everybody else.

Other than that one problem, high MTUs at an IXP are a good thing, not a bad one. Because IPv4 path MTU discovery is so badly broken, it's desirable to maintain a minimum MTU of 1500 bytes across the core, even as packets travel through tunnels, VPNs and other layered structures that add bytes to the payload. Greater-than-1500-byte MTUs to the customer, on the other hand, are a bad thing. They aggravate the problems with PMTUD.

Regards, Bill Herrin

-- William Herrin ................ herrin@dirtside.com bill@herrin.us Owner, Dirtside Systems ......... Web: <http://www.dirtside.com/>
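Bill's distinction between the two failure modes can be sketched as a toy model (purely illustrative Python, not a real stack): a router with a smaller L3 MTU answers with a PTB, so PMTUD converges; an L2 device with a smaller MRU discards the giant silently, so the sender learns nothing.

```python
# Toy model: each hop is ("router", mtu) or ("switch", mtu). Routers
# report oversize packets with a PTB; switches silently drop giants.

def send(path, size):
    """Returns 'delivered', ('ptb', mtu), or 'blackhole'."""
    for kind, mtu in path:
        if size > mtu:
            return ("ptb", mtu) if kind == "router" else "blackhole"
    return "delivered"

def pmtud(path, initial):
    """Classic PMTUD: shrink to each advertised MTU until delivered."""
    size = initial
    for _ in range(10):
        result = send(path, size)
        if result == "delivered":
            return size
        if result == "blackhole":
            return None       # packets vanish; the sender just stalls
        size = result[1]      # honour the PTB and retry smaller
    return None

routed = [("router", 9000), ("router", 4470)]
mixed_l2 = [("switch", 1500), ("router", 9000)]

print(pmtud(routed, 9000))    # 4470 -- PTBs guide the sender down
print(pmtud(mixed_l2, 9000))  # None -- the switch drops giants silently
```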
On Wed, 9 Mar 2016, Nick Hilliard wrote:
used. Some will want 9000, some 9200, others 4470 and some people
I have a strong opinion for jumboframes=9180 bytes (IPv4/IPv6 MTU), partly because there are two standards referencing this size (RFC 1209 and 1626), and also because all major core router vendors support this size now that Juniper has decided (after some pushing) to start supporting it in more recent software on all their major platforms (before that they had too low an L2 MTU to be able to support a 9180 L3 MTU).

In order to deploy this to end systems, however, I think we're going to need something like https://tools.ietf.org/html/draft-van-beijnum-multi-mtu-04 to make this work on mixed-MTU LANs. The whole thing about PMTUD blackhole detection is also going to be needed, so hosts try a lower PMTU in case larger packets are dropped because of L2 misconfiguration in networks.

With IPv6 we have the chance to make PMTUD work properly and also have PMTU blackhole detection implemented in all hosts. IPv4 is a lost cause in my opinion (although it's strange how many hosts seem to get away with 1492 (or is it 1496) MTU because they're using PPPoE).

-- Mikael Abrahamsson email: swmike@swm.pp.se
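For reference, the L2 headroom a 9180-byte L3 MTU implies, assuming standard Ethernet framing with one VLAN tag (header sizes are the usual textbook values, not from the thread):

```python
# Back-of-the-envelope for Mikael's point about Juniper's old L2 limit:
# a 9180-byte IP MTU needs somewhat more than 9180 bytes of L2 frame.

L3_MTU = 9180   # RFC 1209 / RFC 1626 IP MTU
ETH = 14        # dst MAC + src MAC + EtherType
DOT1Q = 4       # one VLAN tag
FCS = 4         # frame check sequence

frame = L3_MTU + ETH + DOT1Q + FCS
print(frame)    # 9202 bytes on the wire per tagged frame
```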
On 3/9/16 7:58 AM, Mikael Abrahamsson wrote:
On Wed, 9 Mar 2016, Nick Hilliard wrote:
used. Some will want 9000, some 9200, others 4470 and some people
I have a strong opinion for jumboframes=9180bytes (IPv4/IPv6 MTU), partly because there are two standards referencing this size (RFC 1209 and 1626), and also because all major core router vendors support this size now that Juniper has decided (after some pushing) to start supporting it in more recent software on all their major platforms (before that they had too low L2 MTU to be able to support 9180 L3 MTU).
In order to deploy this to end systems, however, I think we're going to need something like https://tools.ietf.org/html/draft-van-beijnum-multi-mtu-04 to make this work on mixed-MTU LANs. The whole thing about PMTUD blackhole detection is also going to be needed, so hosts try a lower PMTU in case larger packets are dropped because of L2 misconfiguration in networks.
With IPv6 we have the chance to make PMTUD work properly and also have
The prospects for that seem relatively dire. Of course, what's being discussed here is the mixed-L2 case, where the device will probably not send an ICMPv6 PTB anyway but rather simply discard the packet as a giant.
PMTU blackhole detection implemented in all hosts. IPv4 is lost cause in my opinion (although it's strange how many hosts that seem to get away with 1492 (or is it 1496) MTU because they're using PPPoE).
if your adv_mss is set accordingly you can get away with a lot.
On Wed, Mar 9, 2016 at 9:27 AM, joel jaeggli <joelja@bogus.com> wrote:
PMTU blackhole detection implemented in all hosts. IPv4 is lost cause in
my opinion (although it's strange how many hosts that seem to get away with 1492 (or is it 1496) MTU because they're using PPPoE).
if your adv_mss is set accordingly you can get away with a lot.
At least for TCP. EDNS with sizes > 14xx bytes just plain doesn't universally work across the internet, yet it's the default everywhere.
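The adv_mss arithmetic joel refers to is simple; a sketch with the standard header sizes (illustrative values only; tcp_opts would cover timestamps, SACK, etc.):

```python
# MSS clamping: advertise an MSS small enough that a full-size TCP
# segment still fits the path MTU after IP and TCP headers are added.

def tcp_mss(path_mtu: int, ipv6: bool = False, tcp_opts: int = 0) -> int:
    ip_hdr = 40 if ipv6 else 20   # fixed IPv6 header vs minimal IPv4
    tcp_hdr = 20                  # base TCP header
    return path_mtu - ip_hdr - tcp_hdr - tcp_opts

print(tcp_mss(1500))              # 1460 -- plain IPv4
print(tcp_mss(1492))              # 1452 -- PPPoE (1500 minus 8-byte header)
print(tcp_mss(1500, ipv6=True))   # 1440 -- plain IPv6
```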
In message <CADb+6TAqqYc2yLUGV7n4Qiioq8qasriNsBtCRNNvB2K1A-t1rw@mail.gmail.com> , Joel Maslak writes:
On Wed, Mar 9, 2016 at 9:27 AM, joel jaeggli <joelja@bogus.com> wrote:
PMTU blackhole detection implemented in all hosts. IPv4 is lost cause in
my opinion (although it's strange how many hosts that seem to get away with 1492 (or is it 1496) MTU because they're using PPPoE).
if your adv_mss is set accordingly you can get away with a lot.
At least for TCP. EDNS with sizes > 14xx bytes just plain doesn't universally work across the internet, yet it's the default everywhere.
If you fix your own firewall to accept fragmented packets, EDNS basically works. Over the years I've seen a couple of sites which can't emit fragmented EDNS, but they are few and far between.

Firewall vendors could also do the correct thing and support installing slits as well as pinholes when generating reply traffic acceptance rules on the fly. They could be honest and acknowledge that legitimate reply traffic includes packet fragments and build their boxes to support it. Outbound,

    allow proto udp from any to any 53 keep-state permit-frags

could generate

    allow proto udp from dst 53 to src src-port

and

    allow proto udp from dst to src frag offset != 0

You still have the protocol and the source and destination addresses. You also don't allow full packets to reassemble via the slit rule.

Mark

-- Mark Andrews, ISC 1 Seymour St., Dundas Valley, NSW 2117, Australia PHONE: +61 2 9871 4742 INTERNET: marka@isc.org
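Mark's slit-plus-pinhole generation could look something like this (a sketch in hypothetical rule syntax modelled on his example; the reply_rules helper and the addresses are made up for illustration):

```python
# When an outbound DNS query is seen, install both a reply pinhole
# (full 4-tuple) and a "slit" for non-first fragments, which carry no
# port numbers and so can only be matched on addresses + frag offset.

def reply_rules(src: str, src_port: int, dst: str) -> list[str]:
    """Rules a stateful firewall could install for src:src_port -> dst:53."""
    return [
        # first fragment / unfragmented replies: match the full 4-tuple
        f"allow proto udp from {dst} 53 to {src} {src_port}",
        # the slit: later fragments matched on addresses only
        f"allow proto udp from {dst} to {src} frag offset != 0",
    ]

for rule in reply_rules("192.0.2.1", 40000, "198.51.100.53"):
    print(rule)
```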
On 9/Mar/16 16:26, Kurt Kraut via NANOG wrote:
Could anyone share with me Internet Exchanges you know that allow jumbo frames (like https://www.gr-ix.gr/specs/ does) and how you notice benefit from it?
NAPAfrica in South Africa supports jumbo frames: https://www.napafrica.net/

Little benefit to us, since the majority of members don't run jumbo frames. For some that do, it is unclear whether their backbones support jumbo frames across the board.

Mark.
I must be missing something very obvious here, because I cannot think of any reason why an IXP shouldn't enable the maximum possible MTU on its infrastructure to be available to its customers. Then it's clearly the customers' decision on what MTU to use on their devices, as long as:

* It fits inside the IXP's MTU
* It suits any other customer's (exchanging traffic with) MTU

-- Tassos

Kurt Kraut via NANOG wrote on 9/3/16 16:26:
Hi,
I'm trying to convince my local Internet Exchange location (and it is not small, it exceeds 1 terabit per second on a daily basis) to adopt jumbo frames. For IPv6 it is hassle free, Path MTU Discovery arranges the max MTU per connection/destination.
For IPv4, it requires more planning. For instance, two datacenters tend to exchange relevant traffic because of customers with disaster recovery in mind (saving the same content in two different datacenters, two different suppliers). In most cases, these datacenters are quite far from each other, even in different countries. In this context, jumbo frames would allow max speed even when the latency is that of a typical international link.
Could anyone share with me Internet Exchanges you know that allow jumbo frames (like https://www.gr-ix.gr/specs/ does) and how you notice benefit from it?
Best regards,
Kurt Kraut
Hello folks,

First of all, thank you all for this amazing debate. So many important ideas were exposed here, and I hope we keep going on this. I've seen much opposition to my proposal, but I still remain on the side of jumbo frame adoption for IXPs. I'm pretty confident there is no need for a specific MTU consensus, and not all IXP participants are obligated to raise their interface MTU if the IXP starts allowing jumbo frames.

One of the reasons I'm so surprised by the concerns about compatibility and breaking the internet I've seen here is the offers I get from my IP transit providers: half of them offered me jumbo frame capable ports by default; it wasn't a request. When this subject became important to me and I opened support tickets, half of them replied something like 'You don't need to request it. From our end the max MTU is X'. The lowest X I got was 4400 bytes and the highest 9260. All my Tier-1 providers already provide me jumbo frame IP transit. Even my South American IP transit provider activated my link with 9k MTU by default.

So we have Tier-1 backbones moving jumbo frames around continents. Why, in a controlled L2 environment that usually resides in a single building and is managed by a single operator, is having jumbo frames that concerning?

Best regards,

Kurt Kraut

2016-03-09 19:22 GMT-03:00 Tassos Chatzithomaoglou <achatz@forthnet.gr>:
I must be missing something very obvious here, because i cannot think of any reason why an IXP shouldn't enable the maximum possible MTU on its infrastructure to be available to its customers. Then it's clearly customers' decision on what MTU to use on their devices, as long as:
* It fits inside the IXP's MTU
* It suits any other customer's (exchanging traffic with) MTU
-- Tassos
Kurt Kraut via NANOG wrote on 9/3/16 16:26:

Hi,

I'm trying to convince my local Internet Exchange location (and it is not small, it exceeds 1 terabit per second on a daily basis) to adopt jumbo frames. For IPv6 it is hassle free, Path MTU Discovery arranges the max MTU per connection/destination.

For IPv4, it requires more planning. For instance, two datacenters tend to exchange relevant traffic because of customers with disaster recovery in mind (saving the same content in two different datacenters, two different suppliers). In most cases, these datacenters are quite far from each other, even in different countries. In this context, jumbo frames would allow max speed even when the latency is that of a typical international link.
Could anyone share with me Internet Exchanges you know that allow jumbo frames (like https://www.gr-ix.gr/specs/ does) and how you notice benefit from it?
Best regards,
Kurt Kraut
* nanog@nanog.org (Kurt Kraut via NANOG) [Thu 10 Mar 2016, 00:59 CET]:
I'm pretty confident there is no need for a specific MTU consensus and not all IXP participants are obligated to raise their interface MTU if the IXP starts allowing jumbo frames.
You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
So we have Tier-1 backbones moving jumbo frames around continents, why in a controlled L2 enviroment that usually resides in a single building and managed by a single controller having jumbo frames is that concerning?
Because the L3 devices connected to it aren't controlled by a single entity. -- Niels.
Niels Bakker wrote on 10/3/16 02:44:
* nanog@nanog.org (Kurt Kraut via NANOG) [Thu 10 Mar 2016, 00:59 CET]:
I'm pretty confident there is no need for a specific MTU consensus and not all IXP participants are obligated to raise their interface MTU if the IXP starts allowing jumbo frames.
You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
Isn't that the case for the IXP's current/default MTU? If an IXP currently uses 1500, what effect will it have on its customers if it's increased to 9200 but not announced to them? -- Tassos
Hi, On 3/10/2016 9:23 AM, Tassos Chatzithomaoglou wrote:
Niels Bakker wrote on 10/3/16 02:44:
* nanog@nanog.org (Kurt Kraut via NANOG) [Thu 10 Mar 2016, 00:59 CET]:
I'm pretty confident there is no need for a specific MTU consensus and not all IXP participants are obligated to raise their interface MTU if the IXP starts allowing jumbo frames.
You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
Isn't that the case for IXP's current/default MTU? If an IXP currently uses 1500, what effect will it have to its customers if it's increased to 9200 but not announced to them?
None. Everyone has agreed on 1500. It is near impossible to get close to everyone agreeing on 9200 (or a similar number) and implementing it (at the same time, or in a separate VLAN) (as Nick argues, and I see the problem).

The agreement and actions of the (various) operators of L3 devices connected at the IXP is what matters, and that seems non-trivial. They are not under one control.

Frank
I think that’s the problem in a nutshell…until every vendor agrees on the size of a “jumbo” packet/frame (and as such, allows that size to be set with a non-numerical configuration flag). As is, every vendor has a default that results in a 1500-byte IP MTU, but changing that requires entering a value…which varies from vendor to vendor.

The IEEE *really* should be the ones driving this particular standardization, but it seems that they’ve explicitly decided not to. This is…annoying, to say the least. Have there been any efforts on the IETF side of things to standardize this, at least for IPv4/v6 packets?

-C
On Mar 9, 2016, at 10:38 PM, Frank Habicht <geier@geier.ne.tz> wrote:
Hi,
On 3/10/2016 9:23 AM, Tassos Chatzithomaoglou wrote:
Niels Bakker wrote on 10/3/16 02:44:
* nanog@nanog.org (Kurt Kraut via NANOG) [Thu 10 Mar 2016, 00:59 CET]:
I'm pretty confident there is no need for a specific MTU consensus and not all IXP participants are obligated to raise their interface MTU if the IXP starts allowing jumbo frames.
You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
Isn't that the case for IXP's current/default MTU? If an IXP currently uses 1500, what effect will it have to its customers if it's increased to 9200 but not announced to them?
none. everyone has agreed on 1500. it is near impossible to get close to everyone to agree on 9200 (or similar number) and implement it (at the same time or in a separate VLAN) (Nick argues, and i see the problem). The agreement and actions of the (various) operators of L3 devices connected at the IXP is what matters and seems not trivial. They are not under one control.
Frank
There was one draft a few years ago: https://tools.ietf.org/html/draft-mlevy-ixp-jumboframes-00#section-3.1

On 17/03/2016 20:49, Chris Woodfield wrote:
Have there been any efforts on the IETF side of things to standardize this, at least for IPv4/v6 packets?
Put MTU in BGP announcements? Imagine how much fun we could have if you could make routing decisions based on available path MTU... Regards, Baldur
On Thu, 10 Mar 2016 08:23:30 +0200 Tassos Chatzithomaoglou <achatz@forthnet.gr> wrote:
Niels Bakker wrote on 10/3/16 02:44:
* nanog@nanog.org (Kurt Kraut via NANOG) [Thu 10 Mar 2016, 00:59 CET]:
I'm pretty confident there is no need for a specific MTU consensus and not all IXP participants are obligated to raise their interface MTU if the IXP starts allowing jumbo frames.
You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
Isn't that the case for IXP's current/default MTU? If an IXP currently uses 1500, what effect will it have to its customers if it's increased to 9200 but not announced to them?
None. Until someone actually tries to make use of the higher MTU. Then things start breaking.

Let's say I'm a customer at this IXP. I have 100 peers. I have one peer that likes large MTUs, so I set my L3 MTU to 9000 (or whatever I agree with this peer). Now I have broken connectivity towards my 99 other peers, who are all still at 1500.

So today you need a separate VLAN for jumbos, which some IXPs have. On this VLAN you will only find the peers that actually care about jumbo frames. The majority of IXP participants don't bother to connect to this VLAN, for varying reasons. If the number of interested parties is too low, IXPs may well decide it is not worth the investment of time and resources to set this up, implement monitoring for it, deal with customers messing up their configs, etc.

In order for jumbo frames to be successful on IXPs _on a large scale_ the technology has to change. There needs to be a mechanism to negotiate MTU for each L2 neighbor individually. Something like draft-van-beijnum-multi-mtu-03, which was mentioned before in this thread. With this in place, individual sets of peers could safely use different MTUs on the same VLAN, and IXPs would have a migration path towards supporting larger frame sizes.

-- Kind regards, Martin Pels Network Engineer LeaseWeb Technologies B.V. T: +31 20 316 0232 M: E: m.pels@tech.leaseweb.com W: http://www.leaseweb.com Luttenbergweg 8, 1101 EC Amsterdam, Netherlands
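The per-neighbor negotiation Martin describes boils down to each pair of peers using the smaller of what both sides support. A toy sketch of that selection logic (the multi-mtu draft's actual mechanism lives in ND options; the peer names and values here are illustrative):

```python
# Hypothetical per-neighbor MTU selection on a mixed jumbo/1500 VLAN:
# each pair's effective MTU is the minimum of the two advertisements,
# so jumbo pairs get jumbo and mixed pairs fall back safely to 1500.

peers = {"A": 9000, "B": 1500, "C": 9000, "D": 4470}

def pairwise_mtu(advertised: dict) -> dict:
    names = sorted(advertised)
    return {(a, b): min(advertised[a], advertised[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

mtus = pairwise_mtu(peers)
print(mtus[("A", "C")])  # 9000 -- two jumbo peers talk jumbo
print(mtus[("A", "B")])  # 1500 -- the mixed pair keeps working
```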
Martin Pels wrote on 10/3/2016 4:15 μμ:
On Thu, 10 Mar 2016 08:23:30 +0200 Tassos Chatzithomaoglou <achatz@forthnet.gr> wrote:
Niels Bakker wrote on 10/3/16 02:44:
* nanog@nanog.org (Kurt Kraut via NANOG) [Thu 10 Mar 2016, 00:59 CET]:
I'm pretty confident there is no need for a specific MTU consensus and not all IXP participants are obligated to raise their interface MTU if the IXP starts allowing jumbo frames. You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
Isn't that the case for IXP's current/default MTU? If an IXP currently uses 1500, what effect will it have to its customers if it's increased to 9200 but not announced to them? None. Until someone actually tries to make use of the higher MTU. Then things start breaking.
I can understand the above issue. But as I said, that's the customer's decision. Exactly the same will happen if the customer increases its MTU now.
In order for Jumboframes to be successful on IXPs _on a large scale_ the technology has to change. There needs to be a mechanism to negotiate MTU for each L2 neighbor individually. Something like draft-van-beijnum-multi-mtu-03, which was mentioned before in this thread. With this in place individual sets of peers could safely use different MTUs on the same VLAN, and IXPs would have a migration path towards supporting larger framesizes.
Agreed. But that doesn't forbid IXPs from offering the max MTU now. -- Tassos
On 10 March 2016 at 02:44, Niels Bakker <niels=nanog@bakker.net> wrote:
You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
I think what was meant is that no global consensus is needed; each IXP can decide for itself what the edgeMTU and coreMTU VLAN MTU sizes are in /this/ IXP. Heck, have customers vote on a webpage. I guess the edgeMTU will obviously be 1500, and the coreMTU something else; 4470, 9000 and 9100 seem like reasonable suspects. And to me it really isn't super important what it is, as long as it allows tunneling overhead over the edgeMTU. I would use the coreMTU in any IXP offering it, regardless of the exact MTU in that IXP.

-- ++ytti
On Thu, 10 Mar 2016, Saku Ytti wrote:
On 10 March 2016 at 02:44, Niels Bakker <niels=nanog@bakker.net> wrote:
You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
I think what was meant, no global consensus is needed, each IXP can decide themselves what is edgeMTU and coreMTU VLAN MTU sizes in /this/ IXP. Heck, have customers vote on webpage. I guess edgeMTU obviously will be 1500, coreMTU something else, 4470, 9000 and 9100 seem like
9180 (L3 MTU) is the obvious choice. All major core routing platforms support it, and it's used in at least two "classic" L2 protocols (see my earlier email). -- Mikael Abrahamsson email: swmike@swm.pp.se
On Thu, 10 Mar 2016, Niels Bakker wrote:
You're wrong here. The IXP switch platform cannot send ICMP Packet Too Big messages. That's why everybody must agree on one MTU.
"Someone" should do an inventory of the market to find out how many commonly used platforms limit the MRU to less than 9180 (L3) when the MTU is set to 1500. If most platforms do not limit the MRU, then the impact of MTU mismatch on the peering LAN is actually a lot less than otherwise.

However, I stand by my earlier statement that we need to include MTU/MRU in ND messages, so that this can be negotiated on a LAN where not all devices support a large MTU.

-- Mikael Abrahamsson email: swmike@swm.pp.se
Mikael Abrahamsson wrote:
However, I stand by my earlier statement that we need to include MTU/MRU in ND messages, so that this can be negotiated on a LAN where not all devices support large MTU.
this would introduce a degree of network complexity that is unnecessary and would be prone to problems. It might be nice to have an auto-configurable maximum frame size per broadcast domain, though. Nick
Mikael Abrahamsson wrote on 10/3/16 18:21:
However, I stand by my earlier statement that we need to include MTU/MRU in ND messages, so that this can be negotiated on a LAN where not all devices support large MTU.
Isn't this already supported? https://tools.ietf.org/html/rfc4861#section-4.6.4 -- Tassos
On Thu, 10 Mar 2016, Tassos Chatzithomaoglou wrote:
Mikael Abrahamsson wrote on 10/3/16 18:21:
However, I stand by my earlier statement that we need to include MTU/MRU in ND messages, so that this can be negotiated on a LAN where not all devices support large MTU.
Isn't this already supported? https://tools.ietf.org/html/rfc4861#section-4.6.4
That is in RAs, and it pertains to a prefix. Also, if a device can't use the advertised MTU, it will keep the lower MTU, which might cause MTU mismatch. For instance, if I announce 9000 as the MTU and a device is bridged via wifi, wifi chips generally only support around 2300 bytes of MTU, so you'll have MTU mismatch with MTU blackholing within the same L2 network. This is bad.

-- Mikael Abrahamsson email: swmike@swm.pp.se
On 10/Mar/16 00:22, Tassos Chatzithomaoglou wrote:
I must be missing something very obvious here, because i cannot think of any reason why an IXP shouldn't enable the maximum possible MTU on its infrastructure to be available to its customers. Then it's clearly customers' decision on what MTU to use on their devices, as long as:
* It fits inside IXP's MTU * It suits with any other customer's (exchanging traffic with) MTU
This is a valid point. Mark.
participants (24)

- Aris Lambrianidis
- Baldur Norddahl
- Chris Woodfield
- David Bass
- Frank Habicht
- Grzegorz Janoszka
- Hugo Slabbert
- Job Snijders
- joel jaeggli
- Joel Maslak
- Kurt Erik Lindqvist
- Kurt Kraut
- Mark Andrews
- Mark Tinka
- Martin Pels
- Mikael Abrahamsson
- Mike Hammett
- Nick Hilliard
- Niels Bakker
- Nikolay Shopik
- Saku Ytti
- Stefan Neufeind
- Tassos Chatzithomaoglou
- William Herrin