-----Original Message-----
From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Paul Vixie
Sent: Sunday, March 30, 2008 10:35 AM
To: nanog@merit.edu
Subject: Re: latency (was: RE: cooling door)
swmike@swm.pp.se (Mikael Abrahamsson) writes:
Programmers who write client/server applications are starting to notice this, and I know of companies that put latency-inducing applications in their development servers so that programmers are exposed to the same conditions in the development environment as in the real world. For some, this means writing more advanced SQL queries that get everything done in a single query, instead of issuing multiple queries and changing later ones depending on what the first query's result was.
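(To make the single-query point concrete, a minimal sketch using Python's sqlite3 module; sqlite is only a local stand-in for a networked SQL server, and the table and column names are hypothetical. The chatty version needs two round trips because the second query depends on the first; the consolidated version expresses the dependency in SQL so the server resolves it in one round trip.)

    import sqlite3

    # sqlite3 is a stand-in for a remote SQL server; "customers" and
    # "orders" are hypothetical tables for illustration only.
    conn = sqlite3.connect("example.db")
    cur = conn.cursor()
    cur.executescript("""
        CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, email TEXT);
        CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY,
                                           customer_id INTEGER, total REAL);
    """)

    # Chatty version: two round trips, the second query depends on the first.
    row = cur.execute("SELECT id FROM customers WHERE email = ?",
                      ("user@example.com",)).fetchone()
    if row:
        orders = cur.execute("SELECT * FROM orders WHERE customer_id = ?",
                             (row[0],)).fetchall()

    # Consolidated version: one round trip, the dependency lives in the SQL.
    orders = cur.execute(
        "SELECT o.* FROM orders o "
        "JOIN customers c ON c.id = o.customer_id "
        "WHERE c.email = ?",
        ("user@example.com",)).fetchall()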
while i agree that turning one's SQL into transactions that are more like applets (for example, sending over the content for a potential INSERT that may not happen depending on some SELECT, because the end-to-end delay of waiting for the SELECT result costs far more than the bandwidth lost to occasionally sending a useless INSERT) will take better advantage of modern hardware and software architecture (which in this case means streaming), it's also necessary to teach our SQL servers that ZFS "recordsize=128k" means what it says for file system reads and writes. a lot of SQL users who have moved to a streaming model with many transactions have merely seen their bottleneck move from the network into the SQL server.
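(A minimal sketch of that speculative-INSERT pattern, with the same caveats as above: sqlite3 stands in for a remote server, and the "events" table, column names, and values are hypothetical. The row content ships with the statement whether or not it ends up being used; the SELECT that decides is evaluated server-side, so the choice costs a little bandwidth instead of a full round trip.)

    import sqlite3

    conn = sqlite3.connect("example.db")   # stand-in for a remote SQL server
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS events "
                "(user_id INTEGER, kind TEXT, payload TEXT)")

    # One round trip: ship the INSERT's content speculatively and let the
    # server's own SELECT decide whether the row is actually written.  The
    # occasionally wasted payload costs bandwidth; a separate SELECT first
    # would cost a full round-trip time on every call.
    cur.execute(
        "INSERT INTO events (user_id, kind, payload) "
        "SELECT ?, ?, ? "
        "WHERE NOT EXISTS (SELECT 1 FROM events WHERE user_id = ? AND kind = ?)",
        (42, "signup", "big-blob-of-content", 42, "signup"))
    conn.commit()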
I have seen this first-hand: I worked for a company diagnosing issues with their applications from a network perspective, which prompted a major rewrite of the software. Developers often work with their SQL servers, application servers, and clients all on the same L2 switch. They often do not duplicate the environment they are going to deploy the application into, and therefore assume that the "network" will perform the same. So, when there are problems, they blame the network, when the root problem is often the architecture of the application itself and not the "network."

All the servers and client workstations have Gigabit connections to the same L2 switch, and the developers are honestly astonished when there are issues running the same application over a typical enterprise network with clients at different speeds (10/100/1000, full and/or half duplex). Surprisingly, to me, they even expect the same performance out of a WAN.

Application developers today need a "network" guy on their team: one who can help them understand how their proposed application architecture would perform over various customer networks, and who can suggest how the architecture could be modified so the application takes advantage of each network's capabilities.

Mikael seems to complain that developers have to put latency-inducing applications into the development environment. I'd say that those developers are some of the few who actually have a clue, and are doing the right thing.
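(Nothing exotic is needed to do this: on Linux, tc/netem can add delay at the interface, or a trivial delaying TCP proxy can sit between the application and its SQL server in the dev environment. A minimal sketch of the latter, with the port numbers, target address, and delay figure purely illustrative:)

    import asyncio

    DELAY = 0.05                    # 50 ms of added one-way delay (illustrative)
    LISTEN_PORT = 15432             # hypothetical port the dev client connects to
    TARGET = ("127.0.0.1", 5432)    # hypothetical address of the real SQL server

    async def pump(reader, writer):
        # Copy bytes in one direction, delaying each chunk to simulate WAN latency.
        while True:
            data = await reader.read(65536)
            if not data:
                break
            await asyncio.sleep(DELAY)
            writer.write(data)
            await writer.drain()
        writer.close()

    async def handle(client_reader, client_writer):
        server_reader, server_writer = await asyncio.open_connection(*TARGET)
        await asyncio.gather(
            pump(client_reader, server_writer),
            pump(server_reader, client_writer),
        )

    async def main():
        server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())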
Also, protocols such as SMB and NFS that use message blocks over TCP have to be abandoned and replaced with real streaming protocols and large window sizes. Xmodem wasn't a good idea back then, and it's not a good idea now (even though the blocks are now larger than the 128 bytes of 20-30 years ago).
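(The arithmetic behind that: a protocol that sends one block and waits for the acknowledgement is limited to roughly block_size / RTT no matter how fast the link is; 64 KB blocks over a 50 ms round trip top out around 10 Mbit/s even on a gigabit path. A streaming protocol with a large TCP window can keep the pipe full. A minimal sketch of asking for larger socket buffers in Python, with the sizes and peer address illustrative; modern stacks autotune, and sysctls may cap what you actually get:)

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Request large send/receive buffers before connecting so the TCP window
    # can open up on high bandwidth-delay-product paths.  4 MB is illustrative;
    # the OS may clamp it (see net.core.rmem_max / net.core.wmem_max on Linux).
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4 * 1024 * 1024)
    s.connect(("198.51.100.10", 9000))   # hypothetical file-server address/port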
i think xmodem and kermit moved enough total data volume (expressed as a factor of transmission speed) back in their day to deserve an honourable retirement. but i'd agree: if an application is moved to a new environment where everything (DRAM timing, CPU clock, I/O bandwidth, network bandwidth, etc.) is 10X faster, but the application only runs 2X faster, then it's time to rethink more. the culprit, though, will usually not be new network latency.

-- Paul Vixie
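(Those 10X/2X figures are easy to put a number on. By Amdahl's law, if every component got 10X faster but the application only gained 2X, then whatever did not scale was eating close to half the original runtime, and finding that component is where the rethinking starts. A quick check:)

    # If every component is 10x faster but the observed speedup is only 2x,
    # solve 1 / (s + (1 - s) / 10) = 2 for s, the fraction of the original
    # runtime spent on whatever did not get faster.
    speedup, per_component = 2.0, 10.0
    s = (1 / speedup - 1 / per_component) / (1 - 1 / per_component)
    print(f"fraction of runtime that did not scale: {s:.0%}")   # ~44%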
It may be difficult to switch to a streaming protocol if the underlying data sets are block-oriented.

Fred Reimer, CISSP, CCNP, CQS-VPN, CQS-ISS
Senior Network Engineer
Coleman Technologies, Inc.
954-298-1697