Hi All,

I'm a PhD student currently studying intra-domain traffic engineering, and I have two questions on which I would really like to hear opinions from you network operators.

I'm experimenting with a prediction-based intra-domain traffic engineering technique. The technique uses historically observed traffic demand matrices to predict future traffic demands, and computes a routing that minimizes the maximum link utilization (MLU) for those predicted demands. I evaluate the performance of the technique using Abilene traffic traces collected at 5-minute intervals. The results show that when the model correctly predicts the real traffic matrix, the technique achieves close to optimal MLU. However, when the model mispredicts, the technique suffers very high MLU (as high as 140%).

Basically, I have the following two questions:

1. In the traces I have, there are several intervals with a huge, sudden increase of traffic on some links. The prediction model I use cannot predict those 'big spikes'. Do these 'big spikes' really happen in operational networks, or are they merely measurement errors? If they really happen, is there a gradual ramp-up of traffic at a smaller time scale, say on the order of tens of seconds, or do these 'big spikes' really occur very quickly, say within a few seconds?

2. I have the option of trading off average-case performance against a worst-case performance guarantee, but I don't know which one you consider more important. Are ISP networks currently optimized for worst-case or average-case performance? Is the trade-off between the two an appealing idea, or are ISP networks perhaps already doing it?

I really appreciate any feedback on the above two questions, and your help will be acknowledged in any publication about this work.

Thanks,
Edgar
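
P.S. In case it helps to pin down what I mean by MLU: below is a minimal sketch in Python (using networkx; the topology, link capacities, and demands are made up for illustration, and this is not my actual implementation or the Abilene data) of how MLU is computed once a routing is fixed. Here the routing is plain shortest-path; the technique I study instead searches for the routing that minimizes this quantity for the predicted demands.

import networkx as nx

def max_link_utilization(G, demands):
    """MLU of shortest-path routing for a {(src, dst): rate} demand dict."""
    load = {e: 0.0 for e in G.edges}
    for (s, t), rate in demands.items():
        path = nx.shortest_path(G, s, t, weight="weight")
        for u, v in zip(path, path[1:]):  # walk consecutive links on the path
            load[(u, v)] += rate
    # Utilization of a link = load / capacity; MLU is the worst link.
    return max(load[e] / G.edges[e]["capacity"] for e in G.edges)

# Illustrative 3-node example: both demands pile onto link B->C.
G = nx.DiGraph()
G.add_edge("A", "B", weight=1, capacity=10.0)
G.add_edge("B", "C", weight=1, capacity=10.0)
G.add_edge("A", "C", weight=3, capacity=10.0)
demands = {("A", "C"): 8.0, ("B", "C"): 6.0}
print(max_link_utilization(G, demands))  # 1.4, i.e. an MLU of 140%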