Hello,

It might be interesting if some people were to post when they received their first attack packet, and where it came from, if they happened to be logging.

Here is the first packet we logged:

Jan 25 00:29:37 EST 216.66.11.120

--Phil
ISPrime
On Sat, Jan 25, 2003 at 06:58:46AM -0500, Phil Rosenthal wrote:
It might be interesting if some people were to post when they received their first attack packet, and where it came from, if they happened to be logging.
Here is the first packet we logged: Jan 25 00:29:37 EST 216.66.11.120
Interestingly, looking through my logs for UDP 1434, I saw a sequential scan of my subnet like so:

Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.1,1434 PR udp len 20 33 IN
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.2,1434 PR udp len 20 33 IN
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.3,1434 PR udp len 20 33 IN

All from 206.176.210.74, all source port 53 (probably trying to use people's DNS firewall rules to get around being filtered).

After that, I saw nothing until the storm started last night from many different source IPs, which was at Jan 24 21:31:53 PST for me.

-c
* Clayton Fiske (clay@bloomcounty.org) [030125 12:55] writeth:
On Sat, Jan 25, 2003 at 06:58:46AM -0500, Phil Rosenthal wrote:
It might be interesting if some people were to post when they received their first attack packet, and where it came from, if they happened to be logging.
Here is the first packet we logged: Jan 25 00:29:37 EST 216.66.11.120
Interestingly, looking through my logs for UDP 1434, I saw a sequential scan of my subnet like so:
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.1,1434 PR udp len 20 33 IN
I'm not sure that going back that far is going to offer anything conclusive, as it could have been any number of scanners looking for vulnerabilities. Looking at my logs back to the 19th, I have isolated hits on the 19th and 23rd. However, they really started to come in force at 22:29:39 MDT, two seconds after Clayton's. My first attempt came from an IP owned by Level 3 Comm.

Jan 23 02:43:44 c6509-core 10829487: 47w0d: %SEC-6-IPACCESSLOGP: list 130 denied udp 192.41.65.170(48962) -> 166.70.10.63(1434), 1 packet
Jan 24 22:29:39 c6509-core 10966964: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 65.57.250.28(1210) -> 204.228.150.9(1434), 1 packet
Jan 24 22:29:44 border 7577864: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied udp 129.219.122.204(1170) -> 204.228.132.100(1434), 1 packet
Jan 24 22:29:50 border 7577865: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied udp 212.67.198.3(1035) -> 166.70.22.47(1434), 1 packet
Jan 24 22:29:52 xmission-paix 425068: 7w0d: %SEC-6-IPACCESSLOGP: list 100 denied udp 61.103.121.140(3546) -> 166.70.22.87(1434), 1 packet
Jan 24 22:29:52 border 7577868: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied udp 65.57.250.28(1210) -> 204.228.132.18(1434), 1 packet
Jan 24 22:29:55 c6509-core 10966977: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 61.103.121.140(3546) -> 166.70.10.8(1434), 1 packet
Jan 24 22:29:57 c6509-core 10966979: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 12.24.139.231(3315) -> 204.228.140.81(1434), 1 packet
Jan 24 22:29:58 c6509-core 10966980: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 140.115.113.252(3780) -> 207.135.133.228(1434), 1 packet
Jan 24 22:29:59 c6509-core 10966981: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 17.193.12.215(3117) -> 207.135.155.209(1434), 1 packet
Jan 24 22:30:00 border 7577873: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied udp 209.15.147.225(4543) -> 204.228.133.186(1434), 1 packet
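A side note for anyone combing similar router logs: below is a minimal sketch of a script that pulls the earliest udp/1434 deny per source out of %SEC-6-IPACCESSLOGP lines like the ones above. The regex and the command-line usage are assumptions for illustration, not anything the posters above actually ran.

# Minimal sketch: report the earliest udp/1434 deny seen from each source IP
# in a file of Cisco %SEC-6-IPACCESSLOGP syslog lines (format as quoted above).
import re
import sys

# e.g. "Jan 24 22:29:39 c6509-core 10966964: 47w1d: %SEC-6-IPACCESSLOGP:
#       list 130 denied udp 65.57.250.28(1210) -> 204.228.150.9(1434), 1 packet"
DENY = re.compile(
    r'^(?P<ts>\w{3}\s+\d+\s+[\d:]+)\s.*denied udp\s+'
    r'(?P<src>\d+\.\d+\.\d+\.\d+)\(\d+\)\s+->\s+\d+\.\d+\.\d+\.\d+\(1434\)'
)

def earliest_hits(path):
    """Return {source IP: first timestamp seen}, assuming the log file is
    already in chronological order (as syslog normally is)."""
    first = {}
    with open(path) as log:
        for line in log:
            m = DENY.search(line)
            if m and m.group('src') not in first:
                first[m.group('src')] = m.group('ts')
    return first

if __name__ == '__main__':
    # usage: python first1434.py router-syslog.txt
    for src, ts in earliest_hits(sys.argv[1]).items():
        print(ts, src)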
Our first (this is EST):

Jan 25 00:29:44 external.firewall1.oct.nac.net firewalld[109]: deny in eth0 404 udp 20 114 61.103.121.140 66.246.x.x 3546 14 34 (default)

61.103.121.140 = a host somewhere on GBLX

On Sat, 25 Jan 2003, Pete Ashdown wrote:
* Clayton Fiske (clay@bloomcounty.org) [030125 12:55] writeth:
On Sat, Jan 25, 2003 at 06:58:46AM -0500, Phil Rosenthal wrote:
It might be interesting if some people were to post when they received their first attack packet, and where it came from, if they happened to be logging.
Here is the first packet we logged: Jan 25 00:29:37 EST 216.66.11.120
Interestingly, looking through my logs for UDP 1434, I saw a sequential scan of my subnet like so:
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.1,1434 PR udp len 20 33 IN
I'm not sure that going back that far is going to offer anything conclusive, as it could have been any number of scanners looking for vulnerabilities. Looking at my logs back to the 19th, I have isolated hits on the 19th and 23rd. However, they really started to come in force at 22:29:39 MDT, two seconds after Clayton's. My first attempt came from an IP owned by Level 3 Comm.
Jan 23 02:43:44 c6509-core 10829487: 47w0d: %SEC-6-IPACCESSLOGP: list 130 denied udp 192.41.65.170(48962) -> 166.70.10.63(1434), 1 packet
Jan 24 22:29:39 c6509-core 10966964: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 65.57.250.28(1210) -> 204.228.150.9(1434), 1 packet
Jan 24 22:29:44 border 7577864: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied udp 129.219.122.204(1170) -> 204.228.132.100(1434), 1 packet
Jan 24 22:29:50 border 7577865: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied udp 212.67.198.3(1035) -> 166.70.22.47(1434), 1 packet
Jan 24 22:29:52 xmission-paix 425068: 7w0d: %SEC-6-IPACCESSLOGP: list 100 denied udp 61.103.121.140(3546) -> 166.70.22.87(1434), 1 packet
Jan 24 22:29:52 border 7577868: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied udp 65.57.250.28(1210) -> 204.228.132.18(1434), 1 packet
Jan 24 22:29:55 c6509-core 10966977: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 61.103.121.140(3546) -> 166.70.10.8(1434), 1 packet
Jan 24 22:29:57 c6509-core 10966979: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 12.24.139.231(3315) -> 204.228.140.81(1434), 1 packet
Jan 24 22:29:58 c6509-core 10966980: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 140.115.113.252(3780) -> 207.135.133.228(1434), 1 packet
Jan 24 22:29:59 c6509-core 10966981: 47w1d: %SEC-6-IPACCESSLOGP: list 130 denied udp 17.193.12.215(3117) -> 207.135.155.209(1434), 1 packet
Jan 24 22:30:00 border 7577873: 30w2d: %SEC-6-IPACCESSLOGP: list 100 denied udp 209.15.147.225(4543) -> 204.228.133.186(1434), 1 packet
-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben --
-- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
According to Clayton Fiske:
Interestingly, looking through my logs for UDP 1434, I saw a sequential scan of my subnet like so:
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.1,1434 PR udp len 20 33 IN
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.2,1434 PR udp len 20 33 IN
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.3,1434 PR udp len 20 33 IN
All from 206.176.210.74, all source port 53 (probably trying to use people's DNS firewall rules to get around being filtered).
After that, I saw nothing until the storm started last night from many different source IPs, which was at Jan 24 21:31:53 PST for me.
Ditto on the sequential scan well before the actual action, except that mine came on Jan. 19th:

Jan 19 10:59:11 Deny inbound UDP from 67.8.33.179/1 to xxx.xxx.xxx.xxx
...
...

The scan went across several subnets I manage inside 209.67.0.0 serially. My sources were all from 67.8.33.179, all source port 1. The actual worm propagation began to hit my logs at 00:28:16 EST Jan 25.

Cheers.

-travis
Here are the IPs I got at 5:29:40 GMT, the time I got 10 packets / second:

+-----------------+
| source          |
+-----------------+
| 216.069.032.086 |  Kentucky Community and Technical College System
| 066.223.041.231 |  Interland
| 216.066.011.120 |  Hurricane Electric
| 216.098.178.081 |  V-Span, Inc.
+-----------------+

Here is the traffic on port 1434 broken down to seconds around that time (note: I get data from diverse sources, so clock drifts may be an issue):

| 05:29:33 |  7 |
| 05:29:34 |  8 |
| 05:29:35 |  4 |
| 05:29:36 |  8 |
| 05:29:37 |  7 |
| 05:29:38 |  7 |
| 05:29:39 |  5 |
| 05:29:40 | 10 |
| 05:29:41 | 12 |
| 05:29:42 | 14 |
| 05:29:43 | 12 |
| 05:29:44 | 16 |
| 05:29:45 | 18 |
| 05:29:46 | 20 |

On Sat, 25 Jan 2003 17:32:17 -0500 "Travis Pugh" <tdp@discombobulated.net> wrote:
According to Clayton Fiske:
Interestingly, looking through my logs for UDP 1434, I saw a sequential scan of my subnet like so:
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.1,1434 PR udp len 20 33 IN
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.2,1434 PR udp len 20 33 IN
Jan 16 08:15:51 206.176.210.74,53 -> x.x.x.3,1434 PR udp len 20 33 IN
All from 206.176.210.74, all source port 53 (probably trying to use people's DNS firewall rules to get around being filtered).
After that, I saw nothing until the storm started last night from many different source IPs, which was at Jan 24 21:31:53 PST for me.
Ditto on the sequential scan well before the actual action, except that mine came on Jan. 19th:
Jan 19 10:59:11 Deny inbound UDP from 67.8.33.179/1 to xxx.xxx.xxx.xxx ... ...
The scan went across several subnets I manage inside 209.67.0.0 serially. My sources were all from 67.8.33.179, all source port 1. The actual worm propagation began to hit my logs at 00:28:16 EST Jan 25.
Cheers.
-travis
--
--------------------------------------------------------------------
jullrich@euclidian.com
Collaborative Intrusion Detection
join http://www.dshield.org
+-----------------+
| 216.069.032.086 |  Kentucky Community and Technical College System
| 066.223.041.231 |  Interland
| 216.066.011.120 |  Hurricane Electric
| 216.098.178.081 |  V-Span, Inc.
+-----------------+
HE.net seems to be a recurring theme. (I speak no evil of them -- actually, there are some good people over there.)

However, it appears that one of the 'root' boxes of this attack was at HE. This is the third or fourth time I've seen their netblocks mentioned as the source of some of the first packets.

-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben --
-- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
On Sun, 26 Jan 2003, Alex Rubenstein wrote:
+-----------------+
| 216.069.032.086 |  Kentucky Community and Technical College System
| 066.223.041.231 |  Interland
| 216.066.011.120 |  Hurricane Electric
| 216.098.178.081 |  V-Span, Inc.
+-----------------+
HE.net seems to be a recurring theme. (I speak no evil of them -- actually, there are some good people over there.)

However, it appears that one of the 'root' boxes of this attack was at HE. This is the third or fourth time I've seen their netblocks mentioned as the source of some of the first packets.
Looking at the router traffic graphs for the east and west coast, the attack started at the same time, just before 9:30 PST or 12:30 EST. I'm sure the owners of some of the infected boxes would be able to give a better chronology based on when their logs for other services (i.e. HTTP) they might have been running stopped.

After looking at flow stats and figuring out that this wasn't an attack by a single compromised box, we blocked udp port 1434 on several of our core routers. We then went back and contacted customers whose IPs showed up in our flow stats. Some were reachable and coordinated with our support to disconnect their MSSQL servers or otherwise shut down MSSQL. We then went through all our customer aggregation switches looking for ports that had the pattern of the attack, i.e. 25000 pps inbound to our switch, 10 packets outbound on a 100 Mbps port. We shut down about 7 customer ports in New York and about 16 in California. These customers were contacted and the majority of them have patched their machines; a few are still off.

Some Hurricane sites, like our San Jose site, were unaffected (no change from normal traffic levels), indicating any Windows users there had previously patched.

Mike.

+----------------- H U R R I C A N E - E L E C T R I C -----------------+
| Mike Leber             Direct Internet Connections   Voice 510 580 4100 |
| Hurricane Electric      Web Hosting  Colocation        Fax 510 580 4151 |
| mleber@he.net                                        http://www.he.net |
+-----------------------------------------------------------------------+
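For reference, the interim filter Mike describes ("we blocked udp port 1434 on several of our core routers") generally amounted to a one-line deny. A rough IOS sketch, with a placeholder access-list number and interface name (not Hurricane's actual configuration), and with the obvious caveat that it also drops legitimate SQL Server monitor traffic:

! Hypothetical emergency filter for the 1434/udp worm -- placeholder
! list number and interface; apply per interface as appropriate.
access-list 199 deny   udp any any eq 1434
access-list 199 permit ip any any
!
interface GigabitEthernet1/1
 ip access-group 199 in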
Just to add to this. We noticed a sudden burst and terminated ports to customers infected as well. I never noticed anything odd from HE, and we also applied 1434 blocks very quickly. Thankfully, our most infected customer crashed his internal core and took himself offline anyway :).

----- Original Message -----
From: "Mike Leber" <mleber@he.net>
To: "Alex Rubenstein" <alex@nac.net>
Cc: "Johannes Ullrich" <jullrich@euclidian.com>; "Travis Pugh" <tdp@discombobulated.net>; <nanog@merit.edu>
Sent: Saturday, January 25, 2003 10:17 PM
Subject: Re: Tracing where it started
On Sun, 26 Jan 2003, Alex Rubenstein wrote:
+-----------------+
| 216.069.032.086 |  Kentucky Community and Technical College System
| 066.223.041.231 |  Interland
| 216.066.011.120 |  Hurricane Electric
| 216.098.178.081 |  V-Span, Inc.
+-----------------+
HE.net seems to be a recurring theme. (I speak no evil of them -- actually, there are some good people over there.)

However, it appears that one of the 'root' boxes of this attack was at HE. This is the third or fourth time I've seen their netblocks mentioned as the source of some of the first packets.
Looking at the router traffic graphs for the east and west coast the attack started at the same time just before 9:30 PST or 12:30 EST. I'm sure the owners of some of the infected boxes would be able to give a better chronology based on when their logs for other services (i.e. HTTP) they might have been running stopped.
After looking at flow stats and figuring out that this wasn't an attack by a single compromised box, we blocked udp port 1434 on several of our core routers. We then went back and contacted customers whose IPs showed up in our flow stats. Some were reachable and coordinated with our support to disconnect their MSSQL servers or otherwise shut down MSSQL. We then went through all our customer aggregation switches looking for ports that had the pattern of the attack, i.e. 25000 pps inbound to our switch, 10 packets outbound on a 100 Mbps port. We shut down about 7 customer ports in New York and about 16 in California. These customers were contacted and the majority of them have patched their machines; a few are still off.
Some Hurricane sites like our San Jose site were unaffected (no change from normal traffic levels) indicating any Windows users there had previously patched.
Mike.
+----------------- H U R R I C A N E - E L E C T R I C -----------------+
| Mike Leber             Direct Internet Connections   Voice 510 580 4100 |
| Hurricane Electric      Web Hosting  Colocation        Fax 510 580 4151 |
| mleber@he.net                                        http://www.he.net |
+-----------------------------------------------------------------------+
+-----------------+
| 216.069.032.086 |  Kentucky Community and Technical College System
| 066.223.041.231 |  Interland
| 216.066.011.120 |  Hurricane Electric
| 216.098.178.081 |  V-Span, Inc.
+-----------------+

HE.net seems to be a recurring theme. (I speak no evil of them -- actually, there are some good people over there.)
First of all: this worm started so fast that to find its source we have to look into the past, not at the 'flash point'.

HE.net is a very large colo provider, so I am in no way surprised that they show up. Same for Interland.

--
--------------------------------------------------------------------
jullrich@euclidian.com
Collaborative Intrusion Detection
join http://www.dshield.org
Morning all,

In light of the recent attack, and the dramatic impact it had on internet connectivity, I was wondering if any operators (especially of exchange points) would provide information on utilization -- especially any common backplane percentages.

I have received information on router utilizations; some routers, it seems, may have held up better than others. That information is useful. But I am working on some optical exchange point/optical metro designs, and this might have a dramatic impact if one considers things like OBGP, UNI 1.0, ODSI, etc.

A working hypothesis is that this type of attack on a dynamically allocated bandwidth network (such as an optical exchange running OBGP) would have had a drastic effect on resources. All the available spare capacity would likely have been allocated out, so the "bucket" would have run dry -- understanding that exchange points of this type (or metro area dynamic layer 1 transport networks) will manage the total bandwidth needs to always maintain adequate available capacity.

With the rapid onset of an attack such as the one Saturday morning, models I have show not only that the spare capacity would have been used up quickly, but that in a tiered (colored) customer system, the lower service level customers (lead colored, silver, etc.) would have had their capacity confiscated and reallocated to the Platinum and Gold customers. The impact would have been much greater, especially if the "lead" customers were not using their links just for off-hours server backups or for redundant circuits backing production circuits on another network. If they were low-cost IP providers attempting to compete on the lowest cost of service, they would have been drastically affected.

The effect might have caused a cascading type of failure. If enough IP service providers were affected (disconnected) and their peering circuits or metro links disconnected, this traffic would have rerouted and flooded other IXs and private peering links. Even without taking the BGP add/withdraw load into consideration, the traffic levels alone would have had a severe impact on border routers and networks. At least that would be my assessment.

One other consideration is that optical IXs will have a greater impact on the internet, possibly good and bad. With larger circuit sizes of OC48 and OC192 for peering, an attack would have a greater ability to flood more traffic, and a failure of a peering session here would cause a reroute of greater traffic. A possible benefit might be that larger circuit sizes mean an attack might not be able to overwhelm the larger capacities, especially if backbone sizes are the constricting factor, not peering circuits or optical VPN circuits at the optical IX.

Any feedback, devil's advocate position, voodoo or "other" is welcome.

Dave

--
David Diaz
dave@smoton.net [Email]
pagedave@smoton.net [Pager]
www.smoton.net [Peering Site under development]

Smotons (Smart Photons) trump dumb photons
----- Original Message -----

| One other consideration is that optical IXs will have a greater
| impact on the internet, possibly good and bad. With larger circuit
| sizes of OC48 and OC192 for peering, an attack would have a greater
| ability to flood more traffic, and a failure of a peering session here
| would cause a reroute of greater traffic. A possible benefit might be
| that larger circuit sizes mean an attack might not be able
| to overwhelm the larger capacities, especially if backbone sizes are
| the constricting factor, not peering circuits or optical VPN circuits
| at the optical IX.

Although this MS-SQL worm used a lot of bandwidth because of the embedded exploit code, usually worms scan first and try exploiting after. Such a scan requires few bytes, so even a T-3 would carry a lot of host scans per second, and could cause many routers to die on the receiving end because of packets-per-second or new-ARPs-per-second or syslogs-per-second limitations.

I think the worst danger of large circuits would be the uplink capacity; a bunch of infected hosts would easily fill up a T-3 trying to scan for new hosts to attack, limiting the worm propagation speed, but an OC-192 might end up carrying all of the scan traffic and infect more hosts faster.

Rubens
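Rough numbers behind Rubens's point (back-of-the-envelope arithmetic added here for scale, not figures from his post), using the 404-byte IP length that shows up in the firewall logs elsewhere in this thread and ignoring layer 2 framing overhead:

# How many 404-byte Slammer-sized datagrams fit into various link rates.
PACKET_BITS = 404 * 8

link_bps = {
    "T-3":    45_000_000,       # approximate payload rates in bit/s
    "OC-48":  2_488_000_000,
    "OC-192": 9_953_000_000,
}

for name, bps in link_bps.items():
    print(f"{name:7s} ~{bps / PACKET_BITS:>12,.0f} scan packets per second")

# T-3 comes out around 14,000 packets per second; OC-192 around 3 million,
# which is the sense in which a bigger uplink means faster propagation.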
I have received information on router utilizations; some routers, it seems, may have held up better than others. That information is useful. But I am working on some optical exchange point/optical metro designs, and this might have a dramatic impact if one considers things like OBGP, UNI 1.0, ODSI, etc.
A working hypothesis is that this type of attack on a dynamically allocated bandwidth network (such as an optical exchange running OBGP) would have had a drastic effect on resources. All the available spare capacity would likely have been allocated out, so the "bucket" would have run dry -- understanding that exchange points of this type (or metro area dynamic layer 1 transport networks) will manage the total bandwidth needs to always maintain adequate available capacity.
The problem with ONI and MP<whatever>S models is that there is little or no correlation between the topology-aware layers. You will most likely sooner or later end up in a situation where the layers will start to oscillate. So, I would not build an optical IX around this model - if I would build an optical IX at all.

- kurtis -
David Diaz <techlist@smoton.net> writes:
With the rapid onset of an attack such as the one Saturday morning, models I have show not only that the spare capacity would have been used up quickly, but that in a tiered (colored) customer system, the lower service level customers (lead colored, silver, etc.) would have had
Does your model(s) also take into account that people's capital structure may not allow them the luxury of leaving multiple OC-X ports wired up and sitting idle waiting for a surge?

One thing I found somewhat interesting among the "dynamic" allocation of resources type infrastructure was the fact that my capacity planning is on the order of weeks, while the exchanges assume something on the order of minutes. I don't have enough capital sitting around that I can afford to deploy and hook up a bunch of OC-x ports to an exchange and then sit there waiting for them to be used maybe sometime in the future.

So perhaps the thought of an optical exchange running out of resources might be a bit of an overkill at this stage?

/vijay
Actually, I think that was the point of the dynamic provisioning ability. The UNI 1.0 protocol, or the previous ODSI, were to allow the routers to provision their own capacity. The tests done in the real world actually worked, although I still believe they are under NDA.

The point was to provision or reprovision capacity as needed. Without getting into the arguments of whether this is a good idea, the point was to "pay" for what you used when you used it. The biggest technical factor was "how the heck do you bill it."

If a customer goes from their normal OC3 ---> OC12 for 4hrs three times in a month... what do you bill them for? Do you take it down to the DS0/min level and just multiply, or do you do a flat rate or a per upgrade???

The point was you could bump up on the fly as needed, capacity willing, then down. The obvious factor is having enough spare capacity in the bucket. This should not be an issue within the 4 walls of a colo. If it's a beyond-the-4-walls play, then there should be spare capacity available that normally serves as redundancy in the mesh.

The other interesting factor is that now you have sort of a TDMA arrangement going on (very loose analogy here), in that your day can theoretically be divided into 3 time zones.

In the zone:
8am - 4pm  ----- Business users, Financial backbones etc
4pm - 12am ----- Home users, DSL, Cable, Peer to Peer
12am - 8am ----- Remote backup services, foreign users etc

Some of the same capacity can be reused based on peer needs.

This sort of addressed the "how do I design my backbone" argument, where engineers have to decide whether to build for peak load and provide max QoS but also the highest cost backbone, or whether to build for avg sustained utilization. This way you can theoretically get the best of both worlds, as long as the billing goes along with that.

You are right this is a future play. But I thought it was interesting from the perspective of: what if all this technology was enabled today, what effect would the mSQL worm have had? Would some of these technologies have exacerbated the problems we saw? Trying to get better feedback on the future issues; so far some of the offline comments and perspectives have been helpful and insightful, as well as yours...

Dave

At 20:12 +0000 1/30/03, Vijay Gill wrote:
David Diaz <techlist@smoton.net> writes:
With the rapid onset of an attack such as the one Saturday morning, models I have show not only that the spare capacity would have been used up quickly, but that in a tiered (colored) customer system, the lower service level customers (lead colored, silver, etc.) would have had
Does your model(s) also take into account that people's capital structure may not allow them the luxury of leaving multiple OC-X ports wired up and sitting idle waiting for a surge?
One thing I found somewhat interesting among the "dynamic" allocation of resources type infrastructure was the fact that my capacity planning is on the order of weeks, while the exchanges assume something on the order of minutes. I don't have enough capital sitting around that I can afford to deploy and hook up a bunch of OC-x ports to an exchange and then sit there waiting for them to be used maybe sometime in the future.
So perhaps the thought of an optical exchange running out of resources might be a bit of an overkill at this stage?
/vijay
--
David Diaz
dave@smoton.net [Email]
pagedave@smoton.net [Pager]
www.smoton.net [Peering Site under development]

Smotons (Smart Photons) trump dumb photons
David Diaz <techlist@smoton.net> writes:
was to "pay" for what you used when you used it. The biggest technical factor was "how the heck do you bill it."
Actually I'd think the biggest technical factor would be the trained monkey that would sit at the switch and do OIR of line cards on the router as appropriate and reroute patches.
If a customer goes from their normal OC3 ---> OC12 for 4hrs three times in a month... what do you bill them for? Do you take it down to the DS0/min level and just multiple or do you do a flat rate or a per upgrade???
Does this include the monkey cost as the monkey switches the ports around? (well, technically you can get software switchable oc3/oc12 ports, but substitute for 48/192 and go from there)
The point was you could bump up on the fly as needed, capacity willing, then down. The obvious factor is having enough spare capacity in the bucket. This should not be an issue within the 4
And the monkey. I really don't have enough capital sitting around to leave a spare port idle for the 4 hours a day I need it.
This sort of addressed the "how do i design my backbone" argument. Where engineers ahve to decide whether to built for peak load and provide max QoS but also the highest cost backbone; or whether to built for avg sustained utilization. This way you can theoretically get the best of both worlds. As long as the billing goes along with that.
I don't plan to be buying service from anyone who is building to average sustained utilization (sic). My traffic tends to be bursty. /vijay
At 6:54 +0000 1/31/03, Vijay Gill wrote:
David Diaz <techlist@smoton.net> writes:
was to "pay" for what you used when you used it. The biggest technical factor was "how the heck do you bill it."
Actually I'd think the biggest technical factor would be the trained monkey that would sit at the switch and do OIR of line cards on the router as appropriate and reroute patches.
If a customer goes from their normal OC3 ---> OC12 for 4hrs three times in a month... what do you bill them for? Do you take it down to the DS0/min level and just multiple or do you do a flat rate or a per upgrade???
Does this include the monkey cost as the monkey switches the ports around? (well, technically you can get software switchable oc3/oc12 ports, but substitute for 48/192 and go from there)
No monkeys. I was referring to the protocols that people have been working on that automatically "reprovision" on the fly. The very simplistic view is (and this can be within your own network):

Router A ---> Optical box/mesh ---> Router B

Router A determines it needs to upgrade from OC3 to OC12, sends request and AUTH pwd ---> Optical mesh ----> Router B acks, says ok, I do have capacity and your AUTH pwd verified ---> OC12 ---> Optical ---> OC12 ---> Router B

Actually there are different ways to do this. It goes beyond what I was asking here, but I would be happy to expand on it. You can actually on day one have an OC48 handing off 1310nm to the optical switch. The switch could then provision OC3s and OC12s off that. The switches I'm speaking of do virtual concatenation, so they can slice and dice the pipe. Nothing says you have to use the whole thing on day 1. Actually that was sort of the point for a lot of people that were interested. They could have an OC48 and have 2 x OC12s off of that going to two different locations/peers. If peer numbers 3 and 4 show up at the mesh/box, then it's a simple point and click to provision that as soon as they are hot.

Sidenote: As far as monkeys go, you don't need a monkey, since the protocol is theoretically doing it on the fly from layer 3 down to layer 1. Not to mention that CNM (customer network management) exists, which allows customers to actually have READ and WRITE privs on their "owned" circuits. So your own monkeys could do it with point and click. A neat thing from using this as a wholesale carrier is the ability to actually take an OC192, sell an OC48 to a customer, have that customer sell an OC12 off of that, and so on. Everyone would have their own pwd that allows them to view their circuit and those "below" them, but not above, etc. It's off topic but interesting.

My posted comment was concerning whether this technology of layer 3 to layer 1 integration/communication would have exacerbated the mSQL worm, as it might have had more ability to grab larger peering pipes.

One last thought, on the "leaving spare capacity" comment: if you mean that OC48 ports on your router are much more expensive than OC12, and therefore with 20 peers, buying 20 x OC48 ports when you usually use an avg of OC12 on each is cost prohibitive, I can understand that.

1) How do you do it today with those peers, since you don't like the avg sustained model?

2) What if you had 20 x OC12 ports but had 1 spare OC48 port that would dynamically make layer 1 connections to whichever peer needed that capacity at that moment? Forgetting the BGP config issue for the moment on the layer 3 side, would this be an improvement? Basically a hot spare OC48 that could replace any of the OC12s on the fly?

dave
The point was you could bump up on the fly as needed, capacity willing, then down. The obvious factor is having enough spare capacity in the bucket. This should not be an issue within the 4
And the monkey. I really don't have enough capital sitting around to leave a spare port idle for the 4 hours a day I need it.
This sort of addressed the "how do i design my backbone" argument. Where engineers ahve to decide whether to built for peak load and provide max QoS but also the highest cost backbone; or whether to built for avg sustained utilization. This way you can theoretically get the best of both worlds. As long as the billing goes along with that.
I don't plan to be buying service from anyone who is building to average sustained utilization (sic). My traffic tends to be bursty.
/vijay
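For readers following the Router A ---> optical mesh ---> Router B exchange David sketches above, here is a purely illustrative toy model of the grant/deny decision. The class, the method names and the "OC-3 units" bookkeeping are simplifications invented for this sketch, not the actual UNI 1.0 or ODSI message formats.

# Toy model of router-driven capacity upgrades against a shared optical mesh.
class OpticalMesh:
    def __init__(self, spare_units):
        self.spare_units = spare_units        # unallocated capacity, in OC-3s

    def request_upgrade(self, circuit, extra_units, auth_pw, far_end_ack):
        """Grant the bump only if the password checks out, the far-end
        router acks, and the mesh still has spare capacity."""
        if auth_pw != circuit["auth_pw"] or not far_end_ack:
            return False
        if extra_units > self.spare_units:
            return False                      # the "bucket" has run dry
        self.spare_units -= extra_units
        circuit["units"] += extra_units
        return True

    def release(self, circuit, units):
        """Hand capacity back to the pool once the burst is over."""
        units = min(units, circuit["units"])
        circuit["units"] -= units
        self.spare_units += units


mesh = OpticalMesh(spare_units=16)                # one spare OC-48 worth
peer = {"auth_pw": "example-secret", "units": 1}  # an OC-3 today
# OC-3 -> OC-12 needs three more OC-3 units of capacity
print(mesh.request_upgrade(peer, 3, "example-secret", far_end_ack=True))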
My posted comment was concerning whether this technology of layer 3 to layer 1 integration/communication would have exacerbated the mSQL worm, as it might have had more ability to grab larger peering pipes.
Were that to have been the case, it would probably also have been responsible for some op-ex budgets being blown over the weekend, both as a result of capacity that would otherwise have been constrained automagically reprovisioning itself upward (ratcheting up the capacity comes at a price, right?), and as a result of accounting departments arguing over "you used it" versus "an attack caused an automatic system to provision bandwidth that I didn't really want so I don't want to pay for it."

It's not hard to imagine a lot of edge customers infected with the latest flavor-of-the-week worm having conversations with their upstream providers about 95th percentile billing real soon now. Picture this aspect of the 1434/udp worm: it hits late on a Friday, 1/24 (PST), in theory after lots of end-user IT shops have gone home for the weekend. January is a 31-day month - the 5% of samples tossed in a 95th percentile calculation represent a little over 37 hours of usage. Those IT shops have 37 hours to patch their systems, until Sunday (1/26) afternoon, and prevent their bill for January from being defined by 1434/udp worm usage. Oops, Sunday (1/26) was the Super Bowl. Missed the window. Systems get patched Monday (1/27). On Monday (2/3), lots of bills for January usage are going to be calculated. How many surprises will there be, and how much time in February will be devoted to Customer X disputing their January bill with Vendor Y?

Auto-provisioning technology is quite exciting, being able to implement sweeping changes in many powered devices simultaneously with one point'n'click. In the interior of a network, the spending decisions that back the execution of that point'n'click are at least all within one organization. In a customer/vendor relationship, I can easily imagine the vendor wanting the customer to be able to run the dollar meter faster with the greatest of ease (and possibly associate some minimums with those increases, so that the click-down can't follow the click-up too closely), and any billing disputes mercifully only involve two parties.

In an exchange point scenario, though, where two networks presumably have independently agreed to pay money to a third to connect via this optical switch, we now have the case where one can affect the monthly bill of the other by a point'n'click (again, I am making the assumption that the additional value represented by increased capacity will cause additional charges to be incurred - to two parties, now that we're in an exchange point scenario). The kind of policies that the control system now needs to implement undergo a dramatic shift in order to implement the business rules of an exchange point - from network R's perspective:

- network S may have a specific cap on how much additional capacity it can cause network R to buy from exchange point E
- network T may have priority over network S when contending for limited headroom (without E revealing to S that T has priority)
- a total cap for monthly spending of $N with exchange point E may be set, after which all requests for additional capacity will be denied
- (and just for humor value) auto-provisioned capacity can only be added in response to legitimate traffic increases

Billing disputes in the exchange point now involve three parties, and become more complex as a result - this, in theory, results in the technology not reducing op-ex but shifting it from the operations department to the accounting and legal departments.

I get the picture that the control software can organize views hierarchically.
Exchange points aren't organized hierarchically, though (well, the non-bell-shaped ones aren't); they're organized as a mesh. The nice thing about Ethernet-based exchanges is that:

- they allow the structure of the network to mirror the structure of the organization (as networks have a habit of doing) easily
- the use of VLAN tags allows backplane slot capacity to be divided between peers without the hard boundaries per-peer that slicing and dicing SONET imposes, but still within an overall cap that a) sets a boundary on the traffic engineering problem space on the interior side of the connected router and b) can be periodically reviewed
- the business rules that the technology has to implement are relatively clean, easy to understand, free of dependencies between customers of the exchange (beyond their initial agreement to exchange traffic with each other).

Optical switch technology, and the control systems that cause the technology to implement the business rules of an exchange point, have a ways to go before they're ready for prime-time.

Stephen
VP, Eng.
PAIX
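The 37-hour figure in Stephen's message, spelled out. This is the generic arithmetic behind 95th-percentile billing on 5-minute samples, not any particular provider's billing code:

import math

SAMPLES_PER_DAY = 24 * 12                  # one sample every 5 minutes
month_samples = 31 * SAMPLES_PER_DAY       # a 31-day month like January
discarded = int(month_samples * 0.05)
print(discarded, "samples dropped =", round(discarded * 5 / 60, 1), "hours")
# -> 446 samples, about 37.2 hours: roughly Friday night through Sunday
#    afternoon of worm traffic that never hits the bill if patched in time

def billable_rate(samples_mbps):
    """Drop the top 5% of samples; bill at the highest remaining one."""
    ranked = sorted(samples_mbps)
    keep = math.ceil(len(ranked) * 0.95)
    return ranked[keep - 1]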
Stephen Stuart <stuart@tech.org> writes:
Optical switch technology, and the control systems that cause the technology to implement the business rules of an exchange point, have a ways to go before they're ready for prime-time.
We don't know anything we could do with 50ms provisioning without making a disaster (c) smd 2001. /vijay
We don't know anything we could do with 50ms provisioning without making a disaster (c) smd 2001.
indeed. but i sure would like one or two day provisioning, as opposed to 18 months.
The space where that problem exists is within and at the edge of carrier networks. I think we would agree that provisioning of capacity directly between providers under the same modern-day exchange point roof with good old-fashioned direct cross-connects can happen with timeframes similar to or better than the providers' ability to next-day-ship an interface card to take delivery of the upgraded capacity.

Stephen
VP, Eng.
PAIX
From: "Stephen Stuart"
Billing disputes in the exchange point now involve three parties, and become more complex as a result - this, in theory, results in the technology not reducing op-ex but shifting it from the operations department to the accounting and legal departments.
If a proper rule-based system were implemented, wouldn't this account for the issues? For example, implementation of an increase is only allowed by peer E if the traffic has been a gradual increase and X throughput has been met for T amount of time. Peer E would also have specific caps allotted for peers S and T, along with priority in granting the increases.

In the case of the worm, it is important to have a good traffic analyzer to recognize that the increase in bandwidth has been too drastic to constitute a valid need. Of course, traffic patterns do vary a bit in short periods of time, but the average sustained throughput and the average peak do not increase rapidly. What was seen with Sapphire should never be confused with normal traffic, and requests for bandwidth increments should be ignored by any automated system.

Of course, I realize that to implement the necessary rules would add a complexity that could cost large sums of money due to mistakes.

-Jack
Of course, I realize that to implement the necessary rules would add a complexity that could cost large sums of money due to mistakes.
Implementing the automation that can (correctly) implement the necessary rules is an enormous challenge, and it's unclear whether anyone in the marketplace will rise to meet it; the market for those additional (hard to implement) features is small, while the less complex features that just implement the auto-provisioning spend-o-rama between a vendor and its customers in the aforementioned hierarchical manner are easier to implement and have a broader market. Companies trying to make a profit in the software development game are naturally repelled by the former and attracted by the latter, especially when their target is a multi-vendor environment.

Vendors making software to manage their gear and their gear alone often concentrate more on ringing all the bells and tooting all the whistles of their device rather than implementing a plethora of policies (sorry, I had to say it) cleanly and simply. Custom code can fill in the gaps, as we've seen so often in the service provider market. Once in a while, something truly useful escapes into the wild, like rancid, but the really good stuff tends to stay safely within the walls of the organization that developed it because it represents a competitive advantage.

Stephen
VP, Eng.
PAIX
On Fri, 31 Jan 2003, Jack Bates wrote:
If a proper rule-based system were implemented, wouldn't this account for the issues? For example, implementation of an increase is only allowed by peer E if the traffic has been a gradual increase and X throughput has been met for T amount of time. Peer E would also have specific caps allotted for peers S and T, along with priority in granting the increases. In the case of the worm, it is important to have a good traffic analyzer to recognize that the increase in bandwidth has been too drastic to constitute a valid need.
If my regular Saturday morning traffic is 50 Mbps and a worm generates another 100, then 150 Mbps is a valid need, as being limited to my usual 50 Mbps would mean 67% packet loss, TCP sessions go into hibernation, and I end up with 49.9 Mbps of worm traffic.
Of course, traffic patterns do vary a bit in short periods of time, but the average sustained throughput and the average peak do not increase rapidly.

Sometimes they do: Starr report, Mars probe, that kind of thing...

What was seen with Sapphire should never be confused with normal traffic, and requests for bandwidth increments should be ignored by any automated system.
So you're proposing the traffic is inspected very closely, and then either it's rate limited/priority queued or more bandwidth is provisioned automatically? That sure adds a lot of complexity, but I guess this is the only way to do it right.
Of course, I realize that to implement the necessary rules would add a complexity that could cost large sums of money due to mistakes.
Right.
From: "Iljitsch van Beijnum"
If my regular Saturday morning traffic is 50 Mbps and a worm generates another 100, then 150 Mbps is a valid need, as being limited to my usual 50 Mbps would mean 67% packet loss, TCP sessions go into hibernation, and I end up with 49.9 Mbps of worm traffic.
But a ruleset should allow you as a business to make that decision. Do you allow the worm's traffic increase to increase your circuit and cost you money, or do you limit it based on suspected illegitimate traffic?

Of course, traffic patterns do vary a bit in short periods of time, but the average sustained throughput and the average peak do not increase rapidly.
Sometimes they do: Starr report, Mars probe, that kind of thing...
And what do you do to handle traffic bursts now? Do you immediately jump up and scream, "I need a bigger pipe now! Step on it!"? You plan for what your maximum capacity needs to be. The proposed system would still allow for maximum caps, but there are times when that amount of bandwidth is unnecessary for your particular network while another may need it at that time. For planned bursts in throughput, you can increase the amount manually. The automated system, however, should be configurable per peer to allow for what the business wants to spend. If a business doesn't want 100mb surprises, then they should be able to avoid them. On the flip side, if the business does, then they can allot for it.
What was seen with Sapphire should never be confused with normal traffic, and requests for bandwidth increments should be ignored by any automated system.

So you're proposing the traffic is inspected very closely, and then either it's rate limited/priority queued or more bandwidth is provisioned automatically? That sure adds a lot of complexity, but I guess this is the only way to do it right.
Traffic doesn't have to be inspected more closely than it is. It just needs to keep historical records and averages. The system knows what the current utilization is and can quickly calculate the rate of increase. As stated above, it should be the right of each peer to decide what they consider to be an acceptable rate of increase before allowing an automatic upgrade which will cost them money.
Of course, I realize that to implement the necessary rules would add a complexity that could cost large sums of money due to mistakes.
Right.
Automation is rarely a simplistic process when the automation includes increasing expenditures. The factors involved in the automation process would also have to be worked into peering agreements, as both sides of a peering session would have to agree on what they find to be acceptable between them. -Jack
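Below is a minimal sketch of the kind of per-peer rule Jack describes: permit an automatic capacity bump only when utilization is sustained and roughly in line with history, so a Sapphire-style step change is refused. The thresholds, the sample interval and the function name are illustrative assumptions, not anything from a real provisioning product.

# Decide whether an automated, billable capacity upgrade should be granted.
def allow_auto_upgrade(baseline_mbps, last_hour_mbps, current_cap_mbps,
                       util_threshold=0.8, max_growth=1.5):
    """baseline_mbps: trailing samples (say, the previous week);
    last_hour_mbps: the most recent hour of 5-minute samples."""
    if not baseline_mbps or not last_hour_mbps:
        return False
    baseline = sum(baseline_mbps) / len(baseline_mbps)
    # must be genuinely congested for the whole hour, not one brief burst
    if min(last_hour_mbps) < util_threshold * current_cap_mbps:
        return False
    # and the demand must look like growth, not a worm-driven step change
    if max(last_hour_mbps) > max_growth * baseline:
        return False
    return True

# e.g. a peer on an OC-3 (~155 Mbps) whose traffic suddenly pins the link at
# three times its weekly average is refused an automatic upgrade:
print(allow_auto_upgrade([50] * 2016, [150] * 12, current_cap_mbps=155))  # False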
Actually, I think that was the point of the dynamic provisioning ability. The UNI 1.0 protocol, or the previous ODSI, were to allow the routers to provision their own capacity. The tests done in the real world actually worked, although I still believe they are under NDA.
The point was to provision or reprovision capacity as needed. Without getting into the arguments of whether this is a good idea, the point was to "pay" for what you used when you used it. The biggest technical factor was "how the heck do you bill it."
If a customer goes from their normal OC3 ---> OC12 for 4hrs three times in a month... what do you bill them for? Do you take it down to the DS0/min level and just multiply, or do you do a flat rate or a per upgrade???
The point was you could bump up on the fly as needed, capacity willing, then down. The obvious factor is having enough spare capacity in the bucket. This should not be an issue within the 4 walls of a colo. If it's a beyond the 4 walls play then there should be spare capacity available that normally serves as redundancy in the mesh.
The other interesting factor is that now you have sort of a TDMA arrangement going on (very loose analogy here), in that your day can theoretically be divided into 3 time zones.

In the zone:
8am - 4pm  ----- Business users, Financial backbones etc
4pm - 12am ----- Home users, DSL, Cable, Peer to Peer
12am - 8am ----- Remote backup services, foreign users etc
Some of the same capacity can be reused based on peer needs.
This sort of addressed the "how do I design my backbone" argument, where engineers have to decide whether to build for peak load and provide max QoS but also the highest cost backbone, or whether to build for avg sustained utilization. This way you can theoretically get the best of both worlds, as long as the billing goes along with that.
You are right this is a future play. But I thought it was interesting from the perspective of: what if all this technology was enabled today, what effect would the mSQL worm have had? Would some of these technologies have exacerbated the problems we saw? Trying to get better feedback on the future issues; so far some of the offline comments and perspectives have been helpful and insightful, as well as yours...
Well, the problem with optical bandwidth on demand is that you will have to pay for the network even when it isn't being used. Basically you have three billing principles: pay per usage, pay for the service, or a mix of the two. With all the models you still need to distribute the cost over bandwidth, and in the worst case this will end up being higher per transferred data.

- kurtis -
Well, the feedback onlist and extensive offlist was great. The respondents seem to feel that because of the rapid onset of the attack, a dynamically allocated optical exchange might have exacerbated the problem. But this is also the benefit: it allows flexible bandwidth with a nonblocking backplane, so backbones with a critical event such as a webcast have the capacity they need when they need it. A common shared backplane architecture might provide a natural bottleneck. One can also see this as a possible growth problem the rest of the time.

Respondents strayed away from the specific subject of the dynamics of the optical exchange under an mSQL-type attack and went into the pros and cons. The number one topic: billing.

Billing was also the biggest challenge in implementation of the technology. Once the ability was there, and the real world tests showed this technology was actually functional, no one was exactly sure of the business algorithm to charge by. Most commentators were concerned about losing billing control -- that a peer (possibly under attack) may actually cause fees to be assessed to your own backbone. It must be understood that your network must give approval for this to happen. And if you have CNM (customer network management) enabled, and even running on a screen in your NOC, you are aware immediately when this happens. Without that, you have your specific peer locked down to whatever size pipe you have chosen.

On the billing, it might be flat rate with the ability to "burst" to a higher sized capacity. Perhaps this is a flat rate charge, or would allow you to burst a certain amount of hours, etc. No one has gotten a clear picture. The simplest answer is probably to do, as was mentioned, a similar scheme as in IP: bill to the 95th percentile. It seems fair. Use a multiplier of DS0s per hour x $ and go with that. You might even lock it down so that at a certain $ figure, no more bursting is allowed. I do not like that kind of billing-driven network control, but it would seem that CFOs would demand some kind of ceiling limit.

As far as oscillation between protection schemes in different layers goes, this has been a problem with things like an IP over ATM network. It should not be a problem, and there has been a lot of testing. It is true the possibility for thrashing is there, but probably not at sub-50ms layers. We have that now over sonet private peering circuits. But even in a metro-wide optical exchange scheme, the two farthest points on the mesh being ~100 miles, reroute time was 16ms. Those are the real world tests when we were testing the network as we were breaking routes.

There were some discussions of rule sets. No conclusions. Filters should probably be left to the backbones, with very little control at the optical layer (IX). The only rule sets might be tied to service levels or billing.

David

At 9:11 +0100 2/4/03, Kurt Erik Lindqvist wrote:
Actually, I think that was the point of the dynamic provisioning ability. The UNI 1.0 protocol, or the previous ODSI, were to allow the routers to provision their own capacity. The tests done in the real world actually worked, although I still believe they are under NDA.
The point was to provision or reprovision capacity as needed. Without getting into the arguments of whether this is a good idea, the point was to "pay" for what you used when you used it. The biggest technical factor was "how the heck do you bill it."
If a customer goes from their normal OC3 ---> OC12 for 4hrs three times in a month... what do you bill them for? Do you take it down to the DS0/min level and just multiply, or do you do a flat rate or a per upgrade???
The point was you could bump up on the fly as needed, capacity willing, then down. The obvious factor is having enough spare capacity in the bucket. This should not be an issue within the 4 walls of a colo. If it's a beyond the 4 walls play then there should be spare capacity available that normally serves as redundancy in the mesh.
The other interesting factor is that now you have sort of a TDMA arrangement going on (very loose analogy here), in that your day can theoretically be divided into 3 time zones.

In the zone:
8am - 4pm  ----- Business users, Financial backbones etc
4pm - 12am ----- Home users, DSL, Cable, Peer to Peer
12am - 8am ----- Remote backup services, foreign users etc
Some of the same capacity can be reused based on peer needs.
This sort of addressed the "how do I design my backbone" argument, where engineers have to decide whether to build for peak load and provide max QoS but also the highest cost backbone, or whether to build for avg sustained utilization. This way you can theoretically get the best of both worlds, as long as the billing goes along with that.
You are right this is a future play. But I thought it was interesting from the perspective of: what if all this technology was enabled today, what effect would the mSQL worm have had? Would some of these technologies have exacerbated the problems we saw? Trying to get better feedback on the future issues; so far some of the offline comments and perspectives have been helpful and insightful, as well as yours...
Well, the problem with optical bandwidth on demand is that you will have to pay for the network even when it isn't being used. Basically you have three billing principles: pay per usage, pay for the service, or a mix of the two. With all the models you still need to distribute the cost over bandwidth, and in the worst case this will end up being higher per transferred data.
- kurtis -
--
David Diaz
dave@smoton.net [Email]
pagedave@smoton.net [Pager]
www.smoton.net [Peering Site under development]

Smotons (Smart Photons) trump dumb photons
Here are the first ten minutes of packets that one of my firewalls intercepted (PST times):

Jan 24 21:32:19: UDP Drop SRC=211.205.179.133 LEN=404 TOS=0x00 PREC=0x00 TTL=115 ID=22340 PROTO=UDP SPT=1739 DPT=1434 LEN=384
Jan 24 21:32:54: UDP Drop SRC=128.122.40.59 LEN=404 TOS=0x00 PREC=0x00 TTL=108 ID=1366 PROTO=UDP SPT=1086 DPT=1434 LEN=384
Jan 24 21:33:11: UDP Drop SRC=141.142.65.14 LEN=404 TOS=0x00 PREC=0x00 TTL=113 ID=28703 PROTO=UDP SPT=1896 DPT=1434 LEN=384
Jan 24 21:38:54: UDP Drop SRC=211.57.70.131 LEN=404 TOS=0x00 PREC=0x00 TTL=102 ID=9940 PROTO=UDP SPT=1654 DPT=1434 LEN=384
Jan 24 21:39:34: UDP Drop SRC=202.96.108.140 LEN=404 TOS=0x00 PREC=0x00 TTL=108 ID=17122 PROTO=UDP SPT=4742 DPT=1434 LEN=384
Jan 24 21:41:40: UDP Drop SRC=200.162.192.22 LEN=404 TOS=0x00 PREC=0x00 TTL=108 ID=21153 PROTO=UDP SPT=3121 DPT=1434 LEN=384
Jan 24 21:41:51: UDP Drop SRC=64.70.191.74 LEN=404 TOS=0x00 PREC=0x00 TTL=109 ID=46498 PROTO=UDP SPT=1046 DPT=1434 LEN=384
Jan 24 21:42:06: UDP Drop SRC=129.242.210.240 LEN=404 TOS=0x00 PREC=0x00 TTL=107 ID=2336 PROTO=UDP SPT=1574 DPT=1434 LEN=384

I checked, and none of these source addresses had sent any visible probes into my network within the prior month.

The really weird thing is that while I was interactively watching router logs I saw a bunch of packets where neither the SRC nor DST were within my network. I looked up the MAC address of the packets, and they seemed to be coming from a client colocated box (apparently un-firewalled Linux). I wonder if there was a worm that spread previous to the attack to seed/start the attack by sending spoofed attack packets to a large list of known vulnerable servers. It does make sense though that the origin packets would have all been spoofed. Unfortunately I can't find any items like that in my log files.

-Steve

On Sun, Jan 26, 2003 at 12:09:33AM -0500, Alex Rubenstein eloquently stated:
+-----------------+
| 216.069.032.086 |  Kentucky Community and Technical College System
| 066.223.041.231 |  Interland
| 216.066.011.120 |  Hurricane Electric
| 216.098.178.081 |  V-Span, Inc.
+-----------------+

HE.net seems to be a recurring theme. (I speak no evil of them -- actually, there are some good people over there.)

However, it appears that one of the 'root' boxes of this attack was at HE. This is the third or fourth time I've seen their netblocks mentioned as the source of some of the first packets.
-- Alex Rubenstein, AR97, K2AHR, alex@nac.net, latency, Al Reuben --
-- Net Access Corporation, 800-NET-ME-36, http://www.nac.net --
--
Stephen Milton - Vice President             (425) 881-8769 x102
ISOMEDIA.COM - Premium Internet Services    (425) 869-9437 Fax
milton@isomedia.com                         http://www.isomedia.com
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Saturday 25 January 2003 17:32, Travis Pugh wrote:

[snip]
Ditto on the sequential scan well before the actual action, except that mine came on Jan. 19th:
Jan 19 10:59:11 Deny inbound UDP from 67.8.33.179/1 to xxx.xxx.xxx.xxx
I have a similar packet (but only one) from the same host (time is ntp sync'd EST).

Jan 20 12:55:47 firewall kernel: Packet log: input - ppp0 PROTO=17 67.8.33.179:1 65.83.153.253:1434 L=29 S=0x00 I=20300 F=0x0000 T=110 (#23)
The scan went across several subnets I manage inside 209.67.0.0 serially. My sources were all from 67.8.33.179, all source port 1. The actual worm propagation began to hit my logs at 00:28:16 EST Jan 25.
My first worm packet -

Jan 25 00:32:52 firewall kernel: Packet log: input - ppp0 PROTO=17 131.128.163.118:1631 65.83.153.253:1434 L=404 S=0x00 I=2610 F=0x0000 T=113 (#23)

and continued until

Jan 25 11:48:44 firewall kernel: Packet log: input - ppp0 PROTO=17 151.99.167.133:30725 65.83.153.253:1434 L=404 S=0x00 I=2 F=0x0000 T=111 (#23)

when BS.N apparently shut down 1434.

- --
Redundancy? You can say that again!

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.1 (GNU/Linux)
Comment: Brian Coyle, GCIA  http://www.giac.org/GCIA.php

iD8DBQE+Mz9gER3MuHUncBsRAuG3AJ0Xzd+QiDeX6LKHX4frfRF40xJK8gCfUgXw
g7uoFXH2N72uwLudo2OuvpI=
=Kw/8
-----END PGP SIGNATURE-----
On Sat, 25 Jan 2003, Brian Coyle wrote:
I have a similar packet (but only one) from the same host (time is ntp sync'd EST).
Jan 20 12:55:47 firewall kernel: Packet log: input - ppp0 PROTO=17 67.8.33.179:1 65.83.153.253:1434 L=29 S=0x00 I=20300 F=0x0000 T=110 (#23)
That's a busy machine apparently:

Jan 19 01:13:16 gw ipmon[32123]: 01:13:15.993484 ed0 @0:20 b 67.8.33.179,1 -> 66.92.x.x,1434 PR udp len 20 29 IN

(also EST, NTP synced)

C
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Saturday 25 January 2003 22:30, Charles Sprickman wrote:
On Sat, 25 Jan 2003, Brian Coyle wrote:
I have a similar packet (but only one) from the same host (time is ntp sync'd EST).
Jan 20 12:55:47 firewall kernel: Packet log: input - ppp0 PROTO=17 67.8.33.179:1 65.83.153.253:1434 L=29 S=0x00 I=20300 F=0x0000 T=110 (#23)
That's a busy machine apparently:
Jan 19 01:13:16 gw ipmon[32123]: 01:13:15.993484 ed0 @0:20 b 67.8.33.179,1 -> 66.92.x.x,1434 PR udp len 20 29 IN
(also EST, NTP synced)
Additional correlations are being reported over on the intrusions@incidents.org list...

http://www.sans.org/intrusions/

- --
42

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.2.1 (GNU/Linux)
Comment: Brian Coyle, GCIA  http://www.giac.org/GCIA.php

iD8DBQE+M1x6ER3MuHUncBsRAhiUAJ4+8RCpTicU4VWZzkXlR8grUjOBrQCfZHP9
VzmEQod+qeXiL50M/llrZvA=
=LuxR
-----END PGP SIGNATURE-----
PR> Date: Sat, 25 Jan 2003 06:58:46 -0500
PR> From: Phil Rosenthal

PR> It might be interesting if some people were to post when they
PR> received their first attack packet, and where it came from,
PR> if they happened to be logging.

I agree, except such high flow rates make even millisecond-scale time skew a huge issue...

Eddy
--
Brotsman & Dreger, Inc. - EverQuick Internet Division
Bandwidth, consulting, e-commerce, hosting, and network building
Phone: +1 (785) 865-5885 Lawrence and [inter]national
Phone: +1 (316) 794-8922 Wichita
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Date: Mon, 21 May 2001 11:23:58 +0000 (GMT)
From: A Trap <blacklist@brics.com>
To: blacklist@brics.com
Subject: Please ignore this portion of my mail signature.

These last few lines are a trap for address-harvesting spambots. Do NOT send mail to <blacklist@brics.com>, or you are likely to be blocked.
Here is what we saw at MIT (names are subnets). These are the times when the flooding started to cause us problems.

sloan      00:31:36
oc1-t1     00:32:07
nox-link   00:32:37
extr2-bb   00:33:13

All are EST. The numbers are accurate to *at best* a minute because of the delay before the NOC is scheduled to test them.

-Jeff
participants (21)

- Alex Rubenstein
- Brian Coyle
- Charles Sprickman
- Clayton Fiske
- David Diaz
- E.B. Dreger
- Iljitsch van Beijnum
- Jack Bates
- Jeffrey I. Schiller
- Johannes Ullrich
- Kurt Erik Lindqvist
- Mike Leber
- Pete Ashdown
- Phil Rosenthal
- Randy Bush
- Rubens Kuhl Jr.
- Scott Granados
- Stephen Milton
- Stephen Stuart
- Travis Pugh
- Vijay Gill