
Hello all, I would appreciate any help with the following.

First, what problems arise in a network that routes without information about the sender? In other words, imagine that the IP header did not contain the source address - what problems would this raise? Are optimal routing, security, billing, traceability, network management, etc., suddenly intractable?

Second, to what degree do you believe Internet traffic exhibits self-similarity, and how does this change with varying levels of traffic aggregation? I have read papers from Bellcore and others suggesting that no matter the level of aggregation (be it a single Internet user or a major NAP), Internet traffic exhibits high degrees of self-similarity - but more recent research does not seem to agree with this.

I am sorry if this is an inappropriate forum for the topics I have raised; however, after reading through this list for some time it is apparent that we have assembled here a good deal of real-world experience, something I believe is requisite for a meaningful discussion of real-world network issues.

Very interested to hear what you have to say,

Nate Boyd
MIT Laboratory for Computer Science

boydn@jacana.lcs.mit.edu said:
> First, what problems arise in a network that routes without information about the sender? In other words, imagine that the IP header did not contain the source address - what problems would this raise? Are optimal routing, security, billing, traceability, network management, etc., suddenly intractable?
With respect to routing: while conventional unicast forwarding doesn't normally use the source address in the packet, multicast forwarding and RSVP packet classification do. While multicast routing isn't particularly pretty no matter what you do, source-specific routing in this case yields a much, much more attractive result than the source-independent alternatives (e.g. a spanning tree) you'd be forced into in the absence of a source address. And while I'm not sure RSVP is all that good an idea, others might think so.

As for the issues other than routing, in a perfect world a field in the header which can be set to any random number by the originator of the packet wouldn't be depended upon for anything important, but the world is less than perfect.
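(To make the multicast point concrete: source-specific multicast forwarding relies on a reverse-path forwarding (RPF) check, which looks up the packet's *source* address in the unicast routing table and accepts the packet only if it arrived on the interface leading back toward that source - a check you simply cannot do without a source address. A minimal Python sketch, with routes and interface names invented purely for illustration:)

    # Minimal sketch of an RPF check; the routing table below is invented.
    import ipaddress

    # hypothetical unicast routing table: prefix -> outgoing interface
    ROUTES = {
        ipaddress.ip_network("18.0.0.0/8"): "eth0",
        ipaddress.ip_network("192.168.0.0/16"): "eth1",
        ipaddress.ip_network("0.0.0.0/0"): "eth2",  # default route
    }

    def route_iface(addr: str) -> str:
        """Longest-prefix match: which interface reaches this address?"""
        dst = ipaddress.ip_address(addr)
        best = max((net for net in ROUTES if dst in net),
                   key=lambda net: net.prefixlen)
        return ROUTES[best]

    def rpf_accept(src: str, arrival_iface: str) -> bool:
        """Accept a multicast packet only if it arrived on the interface
        we would use to route unicast traffic back toward its source."""
        return route_iface(src) == arrival_iface

    print(rpf_accept("18.26.0.1", "eth0"))  # True: on the reverse path
    print(rpf_accept("18.26.0.1", "eth2"))  # False: off-path copy, dropped

(Take away the source address and there is nothing to key this check on, and forwarding degenerates to flooding over a source-independent spanning tree.)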
> Second, to what degree do you believe Internet traffic exhibits self-similarity, and how does this change with varying levels of traffic aggregation? I have read papers from Bellcore and others suggesting that no matter the level of aggregation (be it a single Internet user or a major NAP), Internet traffic exhibits high degrees of self-similarity - but more recent research does not seem to agree with this.
It has been several years since I was in a position to see this, but on routers filling circuits with very highly aggregated traffic you just about couldn't measure anything without finding a result which contradicted the notion of aggregation-independent self-similarity.

We used to find that routers with 9 milliseconds' worth of output queue would consistently fill long-haul T3 circuits to 5-minute-average loads in the mid-90-percent range, with 5-minute-average output queue packet drops staying well below 1%. (Filling circuits at low loss rates with aggregation-independent self-similar traffic should require output queues on the order of the delay-bandwidth product; 9 ms wasn't even close to this.) And if you measured enough data points to draw a plot of output load versus packet loss, and overlaid the M/M/1/K queuing prediction on the same plot, you'd find that assuming Poisson arrivals gave you results that weren't that far off from what the equipment was doing.

If the bandwidth of the circuitry by which end users connected to your network was at least equal to the bandwidth of the network trunks (a situation which wasn't uncommon not that long ago), I could perhaps see why self-similar behaviour might occur. If you're filling big, wide trunks with traffic aggregated from bizillions of 33.6 kbps modems (a situation which is probably more representative of what networks do today), however, expecting self-similar behaviour doesn't make even the least bit of intuitive sense.

Dennis Ferguson
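(To put rough numbers on the 9 ms observation above - the T3 rate, average packet size, and round-trip time below are illustrative assumptions, not figures from the original message. The point they make is that an M/M/1/K queue, i.e. Poisson arrivals into a finite buffer, predicts well under 1% loss at mid-90-percent loads even with a buffer of only about a hundred packets, while a delay-bandwidth-product buffer would be several times larger:)

    # Back-of-the-envelope numbers; rate, packet size, and RTT are assumed.
    RATE = 45e6            # T3 payload rate, bits/s (roughly)
    PKT = 4000             # assumed average packet size, bits (500 bytes)
    RTT = 0.070            # assumed long-haul round-trip time, seconds

    K_9MS = int(0.009 * RATE / PKT)   # packets a 9 ms output queue holds

    def mm1k_loss(rho: float, K: int) -> float:
        """M/M/1/K blocking probability:
        P_K = (1 - rho) * rho**K / (1 - rho**(K + 1)),  for rho != 1."""
        return (1 - rho) * rho**K / (1 - rho**(K + 1))

    print(f"9 ms of T3 buffer ~ {K_9MS} packets")
    for rho in (0.90, 0.95, 0.98):
        print(f"load {rho:.0%}: predicted loss {mm1k_loss(rho, K_9MS):.3%}")
    print(f"delay-bandwidth product ~ {int(RTT * RATE / PKT)} packets")

(Self-similar arrivals at the same loads would need buffering closer to the delay-bandwidth-product figure to achieve comparable loss, which is the contradiction described above.)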

boydn@jacana.lcs.mit.edu said:
> Second, to what degree do you believe Internet traffic exhibits self-similarity, and how does this change with varying levels of traffic aggregation? I have read papers from Bellcore and others suggesting that no matter the level of aggregation (be it a single Internet user or a major NAP), Internet traffic exhibits high degrees of self-similarity - but more recent research does not seem to agree with this.
I think the key here is aggregation and congestion control algorithms. In the case of the Bellcore study, they were looking at high-speed devices on LANs, with aggregation levels of perhaps a few thousand devices (not all active) per line. The current OC3C pipes at the core of major networks are seeing aggregation orders of magnitude greater in terms of concurrent active sessions (I remember hearing numbers approaching 10^6 over a year ago). This, combined with many points of congestion and the end-to-end nature of congestion control, makes any simple model, whether based on statistical independence or on self-similarity, suspect.

just my opinion,
jerry
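(As a toy illustration of why the aggregation level matters: superposing N independent, access-rate-limited on/off sources shrinks the aggregate's relative burstiness roughly like 1/sqrt(N). Note the deliberate caveat - these sources are statistically independent, which is exactly the assumption the self-similarity papers dispute - so this shows only how much smoothing independence would buy at 10^6 sessions, not how real traffic behaves. All parameters are invented:)

    # Toy model: N independent on/off sources, each capped at one unit/slot.
    import numpy as np

    rng = np.random.default_rng(1)
    P_ON, SLOTS = 0.1, 50_000   # each source is "on" with probability 0.1

    for n in (10, 1_000, 100_000):
        load = rng.binomial(n, P_ON, size=SLOTS)  # aggregate units per slot
        print(f"{n:>7} sources: mean {load.mean():9.1f}, "
              f"burstiness (std/mean) {load.std() / load.mean():.4f}")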