RE: Jumbo Frames (was Re: MAE-EAST Moving? from Tysons Corner to Reston, VA)
These are important for many reasons. Without these techniques, we can't even do line-rate GigE on "commonplace" servers, let alone have any CPU left over to do more than just send packets.
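A quick packet-rate sketch makes the CPU argument concrete: at a fixed link speed, per-packet work (interrupts, header processing) scales with packets per second, which jumbo frames cut by roughly 6x. The overhead figure below is the standard Ethernet per-frame cost; the 9000-byte jumbo payload is the common convention, not something specified in this thread.

```python
# Per-frame wire overhead on Ethernet:
# preamble+SFD (8) + header (14) + FCS (4) + inter-frame gap (12) = 38 bytes.
OVERHEAD = 38

def packets_per_second(link_bps, payload_bytes):
    """Maximum frame rate for back-to-back frames of a given payload size."""
    frame_bits = (payload_bytes + OVERHEAD) * 8
    return link_bps / frame_bits

for payload in (1500, 9000):
    print(f"GigE @ {payload}-byte frames: "
          f"{packets_per_second(1e9, payload):,.0f} pkt/s")
```

At line rate, 1500-byte frames mean roughly 81,000 frames per second on GigE versus about 14,000 with 9000-byte jumbos, so the per-packet CPU cost drops by the same factor.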
Actually, my testing shows a failure to utilize even 100baseTX fully. Even in a switched FDX environment (no collisions) I can't achieve line rate without bumping the packet size up. Considering that the smallest box is a quad-CPU SMP machine (550 MHz), I don't think that there is a CPU shortage <grin>.
Then your problem probably lies elsewhere. A decent operating system (e.g. FreeBSD) can do line rate on 100baseTX with something along the lines of a Pentium-166. Not exactly a very powerful machine by current standards. (And btw this was measured three years ago...)

Steinar Haug, Nethelp consulting, sthaug@nethelp.no
sthaug@nethelp.no: Monday, June 19, 2000 9:25 AM
Actually, my testing shows a failure to utilize even 100baseTX fully. Even in a switched FDX environment (no collisions) I can't achieve line rate without bumping the packet size up. Considering that the smallest box is a quad-CPU SMP machine (550 MHz), I don't think that there is a CPU shortage <grin>.
Then your problem probably lies elsewhere. A decent operating system (e.g. FreeBSD) can do line rate on 100baseTX with something along the lines of a Pentium-166. Not exactly a very powerful machine by current standards. (And btw this was measured three years ago...)
Steinar, I should have re-caveated, for your benefit. I am not testing with a bazillion-byte file. I am testing with query/response against an RDBMS host. IOW, a typical real-world(tm) practical application. The responses range from 3-50KB, with anomalies out to 100KB. The slow-start algorithm has been identified as the real culprit. Not wanting to carve up all the IP stacks, I bump MTU up to effectively reduce the impact of the slow-start algorithm (which is obsolete in a switched environment anyway, worse than useless). Measurements are taken at the RDBMS host, as well as the client.
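The slow-start cost described here can be sketched with a little arithmetic: with a cold congestion window that doubles each round trip, a short response costs a number of RTTs that shrinks as the MSS grows. A minimal model, assuming an initial window of one segment, no loss, and an unbounded receive window (the MSS values are illustrative, not taken from the thread):

```python
import math

def slow_start_rtts(response_bytes, mss, init_cwnd_segments=1):
    """Round trips for TCP slow start (cwnd doubles each RTT, no loss,
    unbounded receive window) to deliver a response of the given size."""
    segments = math.ceil(response_bytes / mss)
    cwnd = init_cwnd_segments
    rtts = 0
    while segments > 0:
        segments -= cwnd   # send a full window this round trip
        cwnd *= 2          # slow start: window doubles per RTT
        rtts += 1
    return rtts

# Response sizes from the thread: 3-50 KB, anomalies out to 100 KB.
for size in (3_000, 50_000, 100_000):
    std = slow_start_rtts(size, 1460)    # MSS for a 1500-byte MTU
    jumbo = slow_start_rtts(size, 8960)  # MSS for a 9000-byte jumbo MTU
    print(f"{size:>6} B: {std} RTTs @ MSS 1460, {jumbo} RTTs @ MSS 8960")
```

In this model a 50 KB response takes 6 RTTs at a 1500-byte MTU but only 3 at a jumbo MTU, which is exactly the effect of bumping the MTU: fewer segments, so slow start ramps to full rate in fewer round trips.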
On Mon, 19 Jun 2000, Roeland M.J. Meyer wrote:
I should have re-caveated, for your benefit. I am not testing with a bazillion-byte file. I am testing with query/response against an RDBMS host. IOW, a typical real-world(tm) practical application. The responses range from 3-50KB, with anomalies out to 100KB. The slow-start algorithm has been identified as the
Erm... no, then your problem is opening and closing TCP connections all the time. Don't do that. It hurts you in a lot of other ways. It really isn't appropriate to go around saying "you need larger MTUs to fill a 100 meg link, period" when you really mean "in one particular situation where I am opening and closing TCP connections and only sending a very small amount of data over each, you need larger MTUs". I wouldn't be so quick to say slow start is useless, either. Perhaps with small window sizes, but as soon as they get big enough...
Marc Slemko: Monday, June 19, 2000 10:06 AM
On Mon, 19 Jun 2000, Roeland M.J. Meyer wrote:
I should have re-caveated, for your benefit. I am not testing with a bazillion-byte file. I am testing with query/response against an RDBMS host. IOW, a typical real-world(tm) practical application. The responses range from 3-50KB, with anomalies out to 100KB. The slow-start algorithm has been identified as the
Erm... no, then your problem is opening and closing TCP connections all the time. Don't do that.
I don't have much choice there. Each query/response is a new connection. Even SQLnet is limited with batch query optimization.
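Marc's suggestion amounts to connection reuse. A minimal pooling sketch (the `ConnectionPool` class and `fake_connect` factory are hypothetical illustrations, not SQLnet or any real driver API) shows how keeping connections open lets later queries skip both the TCP handshake and a cold slow-start window:

```python
from collections import deque

class ConnectionPool:
    """Hypothetical pool: hand out an idle connection if one exists,
    otherwise open a new one via the supplied factory."""
    def __init__(self, connect):
        self._connect = connect
        self._idle = deque()

    def acquire(self):
        return self._idle.popleft() if self._idle else self._connect()

    def release(self, conn):
        self._idle.append(conn)

# Usage demo with a counting stand-in for a real DB connection.
opened = 0
def fake_connect():
    global opened
    opened += 1
    return object()

pool = ConnectionPool(fake_connect)
for _ in range(100):   # 100 query/response cycles...
    c = pool.acquire()
    pool.release(c)    # ...all over one reused connection
print(opened)          # 1
```

One connection is opened for a hundred queries, so only the first query pays the handshake and slow-start ramp; whether a given client library permits this is a separate question, as the SQLnet remark above suggests.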
It hurts you in a lot of other ways.

Yes, it does. I'm still scraping off the charred back-side meat.

It really isn't appropriate to go around saying "you need larger MTUs to fill a 100 meg link, period" when you really mean "in one particular situation where I am opening and closing TCP connections and only sending a very small amount of data over each, you need larger MTUs".
Hm, I don't remember the "period" and I thought that I'd outlined my case a few messages back.
I wouldn't be so quick to say slow start is useless, either. Perhaps with small window sizes, but as soon as they get big enough...
Here is where you may not have thought it through enough. On a dedicated FDX link, what need is there for slow-start? Only the transmitter and receiver are on the wire, and the other end has a separate transmit circuit to talk back with (the other side of the FDX link). If the switch can't keep up then I need a switch that can. In this case, I happen to know that the switch is fine. I'm feeding CAT5 straight from the switch to the NIC on the server. The other side is similarly connected. Slow-start is a legacy requirement for non-switched networks and still exists for legacy reasons. In switched FDX environments, it would be real nice if I could just turn it off as a configuration option. In fact, there's a lot of stuff that could probably be stripped from a stack for switched FDX environs and modern SMP hosts. Even switched 100baseTX could benefit.
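As an aside: a milder form of the per-link knob wished for here did eventually appear. Modern Linux lets the initial congestion window be raised per route via an `ip route` attribute (a present-day Linux feature, not something available in 2000; the gateway address and interface name below are placeholders):

```shell
# Raise the initial congestion window to 10 segments on the default route,
# so short transfers start several doublings into the slow-start ramp.
# (Linux ip-route attribute; 192.0.2.1 and eth0 are placeholders.)
ip route change default via 192.0.2.1 dev eth0 initcwnd 10
```

This doesn't disable slow start, but it lets small query/response transfers complete in far fewer round trips without touching the MTU.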
participants (4)
- Marc Slemko
- Roeland M.J. Meyer
- Roeland Meyer (E-mail)
- sthaug@nethelp.no