I wonder, if there were a real alert, what the odds are that someone wouldn't hear about it within 1 minute, 5 minutes, etc., even if they didn't personally receive it.
Obviously edge cases are possible (you were deep in a cave with your soccer team), but there must be mathematical modeling of that sort of information dispersion.
It would have to account for other possible channels: word of mouth; Facebook, Twitter, &c. posts, or really any informational source you were using on the internet (e.g., news sites); TV; radio; people screaming in the streets; etc.
You could do this, in principle, but you’d need a whole bunch of assumptions.

What you want is a big graph of all people with weighted edges. The weights are the effective bitrates, or the chance per unit time that a message is sent and received along the edge (that’s going to mean guessing at some plausible numbers for each medium). We don’t have such a giant graph, so we’d have to construct one and claim that, for these purposes, it has the right properties and represents the real world. You do this by saying, for each person, make a “word of mouth” edge to another randomly chosen person with some probability, and so on. There’s more guessing here about those probabilities, but this has been studied quite a bit, at least for real networks where the graph is available (e.g. Twitter and Facebook are favourites among people who research social networks).

Once you’ve got this big graph of all the people and the chance of a message going between any pair of them, you write down a big matrix Q = [q_ij], which tells you that a message at person i has such and such a chance of reaching person j in one time unit. You then pick the people who got the initial message and make a vector x_0 = [x_i] whose entries are 0 if person i didn’t get it and 1/n if they did (n being the number of people who got it).

Now you can say x(t) = exp(tQ) * x_0 and ask the sorts of questions you’re asking: that gives you, at each time, the chance that each person is receiving the message. To answer “what are the chances someone heard about it within one minute”, integrate x(t) dt from 0 to 1 minute, subtract out x_0 (those people already got it), and add up the probabilities that are left.

If Q is very big, this is expensive to compute (matrix exponentials are expensive), but I think you could scale the whole thing down to a representative sample population.
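To make the recipe concrete, here is a minimal sketch in Python (numpy + scipy). Everything numeric is an assumption I made up for illustration: a 50-person population, 3 random word-of-mouth contacts per person at a guessed rate of 0.5 messages/minute, and a weak 0.01/minute broadcast channel reaching everyone. Note that, taken literally, x(t) = exp(tQ) x_0 tracks a single copy of the message random-walking over the graph, so this is a sketch of the formulation as written, not of a true spreading process.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 50  # small "representative sample population"

# rate[i, j]: guessed messages per minute from person i to person j.
rate = np.zeros((n, n))
for i in range(n):
    # "word of mouth" edges to a few randomly chosen other people
    contacts = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
    rate[i, contacts] = 0.5  # assumed word-of-mouth rate
rate += 0.01                 # assumed weak broadcast (TV/radio) rate to everyone
np.fill_diagonal(rate, 0.0)

# Build a generator Q acting on column distributions: Q[j, i] is the
# rate i -> j, and each diagonal entry makes its column sum to zero,
# so exp(tQ) maps probability vectors to probability vectors.
Q = rate.T.copy()
Q[np.diag_indices(n)] = -rate.sum(axis=1)

# Initial vector x_0: the alert lands on one person (n = 1 recipient).
x0 = np.zeros(n)
x0[0] = 1.0

# x(t) = exp(tQ) x_0 at t = 1 minute.
x1 = expm(1.0 * Q) @ x0

print(x1.sum())      # still a probability distribution
print(1.0 - x1[0])   # chance the message has moved beyond the first recipient
```

The matrix exponential here is O(n^3), which is exactly why you would scale the population down to a representative sample rather than running it on the full graph.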
It might be fun to do this a little more seriously than a hastily written mailing list post, but I think it would always rely on a lot of guesses, so it would have to be taken with a very big grain of salt. As well, this is just one way you could model the process, and there are a number of obvious criticisms (memorylessness jumps right out).

Cheers,
-w