Hi NANOG folks,

We have a situation (which has come up in the past) that I'd like some opinions on.

Periodically, we have researchers who develop projects that do things like randomly port-probe off-campus addresses. The most recent instance of this is a group studying "bottlenecks" on the Internet. They probe hosts (again, semi-randomly) on both the commodity Internet and on I2 (Abilene) to look for places where there is traffic congestion. The problem is that many of their "random targets" consider the probes to be either malicious in nature or outright attacks, and as a result we, of course, get complaints.

One suggestion that I received from a co-worker to help mitigate this is to have the researchers run the experiments off of a www host, with a default page that explains the experiment and provides contact info. We also discussed having the researchers contact ISPs and other large providers to see if they can get permission to use addresses in their space as targets, and then providing those ISPs with data from the testing.

How do you view the issue of experiments that probe random sites? Should this be accepted as "reasonable", or should it be disallowed? Something in between? What other suggestions might you have about how such experiments could be run without triggering alarms?

Please send any suggestions directly to me, and once I have some answers I'll post a compilation to the list.

Thanks!

John

John K. Lerchey
Computer and Network Security Coordinator
Computing Services
Carnegie Mellon University