On Mon, Jan 5, 2009 at 4:11 PM, Roland Dobbins <rdobbins@cisco.com> wrote:
In my experience, once one has an understanding of the performance envelopes and has built a lab which contains examples of the functional elements of the system (network infrastructure, servers, apps, databases, clients, et al.), one can extrapolate pretty accurately, well out to orders of magnitude.
It's one of those things where the difference between theory and practice is smaller in theory than it is in practice, though... But yeah, sometimes things like load balancers fail, or routers run out of table space, or whatever. And I've had plenty of enterprise customers worry about what will happen to their VPN sites if some neighborhood kid annoys his gamer buddies and gets a few Gbps of traffic aimed at them, knocking down their DSLAM and its upstream feeds.
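To make the extrapolation idea concrete, here's a toy sketch - purely illustrative, none of the numbers, the linear fit, or the derating factor come from Roland's setup or mine. The idea is just: measure a few points of the performance envelope on the scaled-down lab rig, fit a curve, and apply a healthy derating before believing the number an order of magnitude out.

    # Illustrative only: extrapolating a lab-measured performance envelope.
    # All figures below are made up for the example.

    # Lab measurements: (concurrent clients, transactions/sec) on a small rig.
    lab_points = [(50, 1200), (100, 2300), (200, 4400), (400, 8100)]

    def fit_linear(points):
        """Least-squares fit of tps = a * clients + b over the lab points."""
        n = len(points)
        sx = sum(x for x, _ in points)
        sy = sum(y for _, y in points)
        sxx = sum(x * x for x, _ in points)
        sxy = sum(x * y for x, y in points)
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a, b

    a, b = fit_linear(lab_points)

    # Extrapolate an order of magnitude beyond the largest lab run, with a
    # derating factor because production rarely scales perfectly linearly.
    target_clients = 4000
    derating = 0.7  # assumed efficiency loss / headroom at scale
    estimate = (a * target_clients + b) * derating
    print(f"Rough estimate at {target_clients} clients: ~{estimate:.0f} tps")

The fit and the derating factor are the parts you'd argue about in practice; the lab only tells you where the curve starts bending.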
The problem is that many organizations don't do the above prior to freezing the design and initiating deployment.
Back in the mid-90s I had one networking software development customer that had a room with 500 PCs on racks, and some switches that would let them dump groups of 50 of them together with whatever server they were testing. That was a lot more impressive back then, when PCs were full-sized devices that needed keyboards and monitors (grouped on KVMs, at least), as opposed to being 1Us or blades or virtual machines.

----
Thanks; Bill

Note that this isn't my regular email account - it's still experimental so far.
And Google probably logs and indexes everything you send it.