Scalability issues in the Internet routing system
I guess it's time to have a look at the actual scalability issues we face in the Internet routing system. Maybe the area of action becomes a bit more clear with such an assessment.

In the current Internet routing system we face two distinctive scalability issues:

1. The number of prefixes*paths in the routing table and interdomain routing system (BGP)

This problem scales with the number of prefixes and available paths to a particular router/network, in addition to constant churn in the reachability state. The required capacity for a router's control plane is:

  capacity = prefix * path * churnfactor / second

I think it is safe, even with projected AS and IP uptake, to assume Moore's law can cope with this.

2. The number of longest match prefixes in the forwarding table

This problem scales with the number of prefixes and the number of packets per second the router has to process under full or expected load. The required capacity for a router's forwarding plane is:

  capacity = prefixes * packets / second

This one is much harder to cope with as both the number of prefixes and the link speeds are rising. Thus the problem is multiplicative to quadratic. Here I think Moore's law doesn't cope with the increase in projected growth in longest prefix match prefixes and link speed. Doing longest prefix matches in hardware is relatively complex. Even more so for the additional bits in IPv6. Doing perfect matches in hardware is much easier though...

-- Andre
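[To make the two formulas above a bit more concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (prefix count, path count, churn factor, packet rate) is an illustrative assumption, not a measurement from any real network:]

# Back-of-envelope sketch of the two capacity formulas above.
# All figures are assumed illustrative values, not measurements.

prefixes     = 170_000      # assumed DFZ prefix count
avg_paths    = 4            # assumed average number of paths per prefix
churn_factor = 0.01         # assumed share of prefix*path state churning per second

# Control plane: updates per second the RIB processor has to digest.
control_plane = prefixes * avg_paths * churn_factor
print("control plane: ~%.0f updates/sec" % control_plane)

# Forwarding plane: one longest-prefix-match lookup per packet,
# against a table of `prefixes` entries.
line_rate_pps = 30_000_000  # assumed ~30 Mpps line card
print("forwarding plane: %d lookups/sec over %d prefixes" % (line_rate_pps, prefixes))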
On Oct 18, 2005, at 11:30 AM, Andre Oppermann wrote:
1. The number of prefixes*paths in the routing table and interdomain routing system (BGP)
This problem scales with the number of prefixes and available paths to a particular router/network, in addition to constant churn in the reachability state. The required capacity for a router's control plane is:
capacity = prefix * path * churnfactor / second
I think it is safe, even with projected AS and IP uptake, to assume Moore's law can cope with this.
Especially since this does not have to be done in real time. BGP updates can take many seconds to process without end users thinking anything is amiss.
2. The number of longest match prefixes in the forwarding table
This problem scales with the number of prefixes and the number of packets per second the router has to process under full or expected load. The required capacity for a routers forwarding plane is:
capacity = prefixes * packets / second
This one is much harder to cope with as the number of prefixes and the link speeds are rising. Thus the problem is multiplicative to quadratic.
Here I think Moore's law doesn't cope with the increase in projected growth in longest prefix match prefixes and link speed. Doing longest prefix matches in hardware is relatively complex. Even more so for the additional bits in IPv6. Doing perfect matches in hardware is much easier though...
You are mistaken in one of your assumptions. The FIB is generated asynchronously to packets being forwarded, and usually not even by the same processor (at least for routers "in the core"). Therefore things like pps / link speed are orthogonal to longest match. (Unless you are claiming the number of new prefixes is related to link speed. But I don't think anyone considers a link which has nothing but BGP updates on it a realistic or useful metric.) -- TTFN, patrick
Patrick W. Gilmore wrote:
On Oct 18, 2005, at 11:30 AM, Andre Oppermann wrote:
2. The number of longest match prefixes in the forwarding table
This problem scales with the number of prefixes and the number of packets per second the router has to process under full or expected load. The required capacity for a routers forwarding plane is:
capacity = prefixes * packets / second
This one is much harder to cope with as the number of prefixes and the link speeds are rising. Thus the problem is multiplicative to quadratic.
Here I think Moore's law doesn't cope with the increase in projected growth in longest prefix match prefixes and link speed. Doing longest prefix matches in hardware is relatively complex. Even more so for the additional bits in IPv6. Doing perfect matches in hardware is much easier though...
You are mistaken in one of your assumptions. The FIB is generated asynchronously to packets being forwarded, and usually not even by the same processor (at least for routers "in the core"). Therefore things like pps / link speed are orthogonal to longest match. (Unless you are claiming the number of new prefixes is related to link speed. But I don't think anyone considers a link which has nothing but BGP updates on it a realistic or useful metric.)
I'm not talking about BGP here but the actual forwarding on the line card or wherever it happens for any particular architecture. The ASIC thingie. It has to do longest-match lookups for every packet that comes in to figure out the egress interface. -- Andre
On Oct 18, 2005, at 12:46 PM, Andre Oppermann wrote:
I'm not talking about BGP here but the actual forwarding on the line card or wherever it happens for any particular architecture. The ASIC thingie. It has to do longest-match lookups for every packet that comes in to figure out the egress interface.
Depends on the way the FIB is engineered. -- TTFN, patrick
At 11:30 AM 10/18/2005, Andre Oppermann wrote:
I guess it's time to have a look at the actual scalability issues we face in the Internet routing system. Maybe the area of action becomes a bit more clear with such an assessment.
In the current Internet routing system we face two distinctive scalability issues:
1. The number of prefixes*paths in the routing table and interdomain routing system (BGP)
This problem scales with the number of prefixes and available paths to a particular router/network, in addition to constant churn in the reachability state. The required capacity for a router's control plane is:
capacity = prefix * path * churnfactor / second
I think it is safe, even with projected AS and IP uptake, to assume Moore's law can cope with this.
Moore will keep up reasonably with both the CPU needed to keep BGP perking, and with memory requirements for the RIB, as well as other non-data-path functions of routers.
2. The number of longest match prefixes in the forwarding table
This problem scales with the number of prefixes and the number of packets per second the router has to process under full or expected load. The required capacity for a routers forwarding plane is:
capacity = prefixes * packets / second
This one is much harder to cope with as the number of prefixes and the link speeds are rising. Thus the problem is multiplicative to quadratic.
Here I think Moore's law doesn't cope with the increase in projected growth in longest prefix match prefixes and link speed. Doing longest prefix matches in hardware is relatively complex. Even more so for the additional bits in IPv6. Doing perfect matches in hardware is much easier though...
Several items regarding FIB lookup:

1) The design of the FIB need not be the same as the RIB. There is plenty of room for creativity in router design in this space. Specifically, the FIB could be dramatically reduced in size via aggregation. The number of egress points (real or virtual) and/or policies within a router is likely FAR smaller than the total number of routes. It's unclear if any significant effort has been put into this.

2) Nothing says the design of the FIB lookup hardware has to be longest match. Other designs are quite possible. Again, some creativity in design could go a long way. The end result must match that which would be provided by longest-match lookup, but that doesn't mean the ASIC/FPGA or general purpose CPUs on the line card actually have to implement the mechanism in that fashion.

3) Don't discount novel uses of commodity components. There are fast CPU chips available today that may be appropriate to embed on line cards with a bit of firmware, and may be a lot more cost effective and sufficiently fast compared to custom ASICs of a few years ago. The definition of what's hardware and what's software on line cards need not be entirely defined by whether the design is executed entirely by a hardware engineer or a software engineer.

Finally, don't discount the value and performance of software-based routers. MPLS was first "sold" as a way to deal with core routers not handling Gigabit links. The idea was to get the edge routers to take over. Present CPU technology, especially with good embedded systems software design, is quite capable of performing the functions needed for edge routers in many circumstances. It may well make sense to consider a mix of router types based on port count and speed at the edges, and/or chassis routers with line cards that use general purpose CPUs for forwarding engines instead of ASICs for lower-volume sites. If we actually wind up with the core of most backbones running MPLS after all, well, we've got the technology so use it. Inter-AS routers for backbones will likely need to continue to be large, power-hungry boxes so that policy can be separately applied on the borders.

I should point out that none of this really is about scalability of the routing system of the Internet, it's all about hardware and software design to allow the present system to scale. Looking at completely different and more scalable routing would require finding a better way to do things than the present BGP approach.
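[As a rough illustration of the aggregation idea in item 1) above, here is a minimal sketch that prunes any prefix whose closest covering prefix already points at the same next hop. The prefixes and interface names are made-up toy data, and real compression schemes (ORTC-style and similar) go considerably further than this:]

import ipaddress

# Toy FIB: prefix -> next hop (illustrative data only).
fib = {
    ipaddress.ip_network("10.0.0.0/8"):   "eth0",
    ipaddress.ip_network("10.1.0.0/16"):  "eth0",   # redundant: covered by 10/8 with same next hop
    ipaddress.ip_network("10.2.0.0/16"):  "eth1",   # must stay: different next hop
    ipaddress.ip_network("192.0.2.0/24"): "eth2",
}

def covering_next_hop(net, table):
    """Next hop of the longest less-specific prefix covering `net`, if any."""
    best = None
    for other, nh in table.items():
        if other != net and net.subnet_of(other):
            if best is None or other.prefixlen > best[0].prefixlen:
                best = (other, nh)
    return best[1] if best else None

# Drop entries that forward exactly as their covering prefix already would.
compressed = {net: nh for net, nh in fib.items()
              if covering_next_hop(net, fib) != nh}

print(len(fib), "->", len(compressed), "entries")   # 4 -> 3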
Daniel Senie wrote: [many interesting hw design approaches that you can buy already deleted]
I should point out that none of this really is about scalability of the routing system of the Internet, it's all about hardware and software design to allow the present system to scale. Looking at completely different and more scalable routing would require finding a better way to do things than the present BGP approach.
I disagree. By your description there is no problem scaling the current model to much bigger numbers of prefixes and paths. Then why not simply do it??? Apparently there ain't a problem then? For routing you have two ways: the BGP (DFZ) way or the aggregation (by whatever arbitrary level/layer divider du jour) way. If you have a third, different one (not just a modification or merge of partial aspects of the former two) you may be eligible for a Nobel prize in mathematics in a few decades. They run behind a bit, you know... -- Andre
Daniel,
I think it is safe, even with projected AS and IP uptake, to assume Moore's law can cope with this.
Moore will keep up reasonably with both the CPU needed to keep BGP perking, and with memory requirements for the RIB, as well as other non-data-path functions of routers.
That's only true if the rate of prefix addition can be constrained to be below the rate dictated by Moore's law, which is the entire point of the discussion. There is nothing today that acts as a pressure against bloat except portions of the net periodically blowing up as their hardware capacity is exceeded.
Several items regarding FIB lookup:
1) The design of the FIB need not be the same as the RIB. There is plenty of room for creativity in router design in this space. Specifically, the FIB could be dramatically reduced in size via aggregation. The number of egress points (real or virtual) and/or policies within a router is likely FAR smaller than the total number of routes. It's unclear if any significant effort has been put into this.
In fact, there has been. In a previous life, we actually had some FIB pre-processing that did a great deal of aggregation prior to pushing the FIB down into hardware. We found that it was workable, but consumed extensive CPU resources to keep up with the churn. Thus, this turns out to be a tool to push the problem from the FIB back up to the CPU. Previous comments still apply, and this just increases the CPU burn rate.
2) Nothing says the design of the FIB lookup hardware has to be longest match. Other designs are quite possible.
Longest match is fundamental in the workings of all of the classless protocols today. Changing this means changing almost every protocol.
3) Don't discount novel uses of commodity components. There are fast CPU chips available today that may be appropriate to embed on line cards with a bit of firmware, and may be a lot more cost effective and sufficiently fast compared to custom ASICs of a few years ago. The definition of what's hardware and what's software on line cards need not be entirely defined by whether the design is executed entirely by a hardware engineer or a software engineer.
This has been tried as well. Tony
One question - which percent of the routing table of any particular router is REALLY used, say, during 1 week? I have a strong impression that the answer will not be more than 20% even in the biggest backbones, and will (more likely) be below 1% in the rest of the world. Which leaves a huge space for optimization.
On Oct 23, 2005, at 11:33 PM, Alexei Roudnev wrote:
One question - which percent of routing table of any particular router is REALLY used, say, during 1 week?
I have a strong impression that the answer will not be more than 20% even in the biggest backbones, and will (more likely) be below 1% in the rest of the world. Which leaves a huge space for optimization.
As of the last time that I looked at it (admittedly quite awhile ago), something like 80% of the forwarding table had at least one hit per minute. This may well have changed given the number of traffic engineering prefixes that are circulating. Tony
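[The measurement Tony describes is easy to sketch offline: take a prefix table and a sample of observed destination addresses, and count how many prefixes get at least one hit in the window. A minimal Python sketch with made-up toy data, not a claim about any real network:]

import ipaddress

# Toy FIB and a handful of sampled destination addresses (illustrative only).
fib = sorted((ipaddress.ip_network(p) for p in
              ("10.0.0.0/8", "10.1.0.0/16", "192.0.2.0/24", "198.51.100.0/24")),
             key=lambda n: n.prefixlen, reverse=True)    # most specific first

samples = [ipaddress.ip_address(a) for a in ("10.1.2.3", "10.200.0.1", "192.0.2.55")]

hit = set()
for dst in samples:
    for net in fib:                 # linear scan = longest match (fine at toy scale)
        if dst in net:
            hit.add(net)
            break

print("%d of %d prefixes hit (%.0f%%)" % (len(hit), len(fib), 100.0 * len(hit) / len(fib)))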
As of the last time that I looked at it (admittedly quite awhile ago), something like 80% of the forwarding table had at least one hit per minute. This may well have changed given the number of traffic engineering prefixes that are circulating.
Tony
Yea, but that's just me pinging everything and google and yahoo fighting over who has the most complete list of x rated sites.
On Mon, 24 Oct 2005, Blaine Christian wrote:
As of the last time that I looked at it (admittedly quite awhile ago), something like 80% of the forwarding table had at least one hit per minute. This may well have changed given the number of traffic engineering prefixes that are circulating.
Tony
Yea, but that's just me pinging everything and google and yahoo fighting over who has the most complete list of x rated sites.
and this probably depends greatly on the network, user-population, business involved. Is it even a metric worth tracking?
On Tue, 25 Oct 2005 16:28:05 -0000, "Christopher L. Morrow" said:
On Mon, 24 Oct 2005, Blaine Christian wrote:
Yea, but that's just me pinging everything and google and yahoo fighting over who has the most complete list of x rated sites.
and this probably depends greatly on the network, user-population, business involved. Is it even a metric worth tracking?
It's a fight for eyeballs, isn't it? Routing table hits caused by spidering from search engines will give a good indication of what percent of the address space the spiders are covering. Of course, you need views from a number of places, and some adjusting for the fact that the webservers are usually clumped in very small pockets of address space.

On the other hand, if it can be established that 80% of the routing table is hit every <N minutes> (which would tend to argue against caching a very small subset), but the vast majority of the routing table hits are just spiders, that may mean that a cache miss isn't as important as we thought...

Anybody got actual measured numbers on how much of the hits are just spiders and Microsoft malware scanning for vulnerable hosts?
On Tue, 25 Oct 2005 Valdis.Kletnieks@vt.edu wrote:
On Tue, 25 Oct 2005 16:28:05 -0000, "Christopher L. Morrow" said:
On Mon, 24 Oct 2005, Blaine Christian wrote:
Yea, but that's just me pinging everything and google and yahoo fighting over who has the most complete list of x rated sites.
and this probably depends greatly on the network, user-population, business involved. Is it even a metric worth tracking?
It's a fight for eyeballs, isn't it? Routing table hits caused by spidering from search engines will give a good indication of what percent of the
oops, I should have not replied to Blaine/Tony but directly to tony's message :( The real question was: "If the percentage of hits is dependent on 'user population', 'business', 'network' is it even worth metricing for the purpose of design of the device/protocol?" Unless of course you want a 'sport' and 'offroad' switch on your router :)
Assume you have determined that a percentage (20%, 80%, whatever) of the routing table is really used for a fixed time period. If you design a forwarding system that can do some packets per second for those most-used routes, all you need to DDoS it is a zombie network that sends packets to all the other destinations... rate-limiting and dampening would probably come into place, and a new arms race would start, killing operators' ability to quickly renumber sites or entire networks and creating new troubleshooting issues for network operators.

Isn't it just simpler to forward at line rate? IP lookups are fast nowadays, due to algorithmic and architecture improvements... even packet classification (which is the n-tuple version of the IP lookup problem) is not that hard anymore. Algorithms can be updated on software-based routers, and performance gains far exceed Moore's Law and projected prefix growth rates... and routers that cannot cope with that can always be changed to handle IGP-only routes and a default gateway to a router that can keep up with full routing. (Actually, hardware-based routers based on limited-size CAMs are more vulnerable to obsolescence by routing table growth than software ones.)

Let's celebrate the death of "ip route-cache", not hellraise this fragility.

Rubens

On 10/24/05, Alexei Roudnev <alex@relcom.net> wrote:
One question - which percent of routing table of any particular router is REALLY used, say, during 1 week?
I have a strong impression that the answer will not be more than 20% even in the biggest backbones, and will (more likely) be below 1% in the rest of the world. Which leaves a huge space for optimization.
Vice versa. A DDoS attack will never work that way, because such a router will (de facto) prioritize long-established streams over new and random ones, so it will not notice the DDoS attack at all - just some DDoS packets will be delayed or lost.

You do not need to forward 100% of packets at line card rate; forwarding 95% of packets at card rate and handling the rest (with possible delays) through the central CPU can work well enough. It is all about tricks and optimizations - fast routing is not rocket science and can be optimized in many ways. For now it has not been necessary; when it becomes necessary, it will be done within half a year.
Alexei Roudnev wrote:
You do not need to forward 100% of packets at line card rate; forwarding 95% of packets at card rate and handling the rest (with possible delays) through the central CPU can work well enough.
heh. in the words of Randy, "i encourage my competitors to build a router this way".

reality is that any "big, fast" router is forwarding in hardware - typically an ASIC or some form of programmable processor. the lines here are getting blurry again .. Moore's Law means that packet-forwarding can pretty much be back "in software" in something which almost resembles a general-purpose processor - or maybe more than a few of them working in parallel (ref: <http://www-03.ibm.com/chips/news/2004/0609_cisco.html>).

if you've built something to be 'big' and 'fast' it's likely that you're also forwarding in some kind of 'distributed' manner (as opposed to 'centralized'). as such - if you're building forwarding hardware capable of (say) 25M PPS and line-rate is 30M PPS, it generally isn't that much of a jump to build it for 30M PPS instead.

i don't disagree that interfaces / backbones / networks are getting faster - but i don't think it's yet a case of "Moore's law" becoming a problem - all that happens is one architects a system far more modular than before - e.g. ingress forwarding separate from egress forwarding.

likewise, "FIB table growth" isn't yet a problem either - generally that just means "put in more SRAM" or "put in more TCAM space".

IPv6 may change the equations around .. but we'll see ..

cheers,

lincoln.
likewise, "FIB table growth" isn't yet a problem either - generally that just means "put in more SRAM" or "put in more TCAM space".
IPv6 may change the equations around .. but we'll see ..
IPv6 will someday account for as many networks as IPv4 would have by then, and IPv6 prefixes are twice as large as IPv4 (a 64-bit prefix vs. 32 bits of prefix+address; the remaining 64 bits of an IPv6 address are strictly local). So despite a 3x cost increase on memory structures (one 32-bit table for IPv4, two for IPv6) and a 2x increase on the lookup engine (two engines would be used for IPv6, one for IPv4), the same technology that can run IPv4 can do IPv6 at the same speed. As this is not a usual demand today, even hardware routers limit the forwarding table to the sum of IPv4 and IPv6 prefixes, and forward IPv6 at half the rate of IPv4.

Rubens
On Wed, Oct 26, 2005 at 12:10:39PM -0200, Rubens Kuhl Jr. wrote:
likewise, "FIB table growth" isn't yet a problem either - generally that just means "put in more SRAM" or "put in more TCAM space".
IPv6 may change the equations around .. but we'll see ..
IPv6 will someday account for as many networks as IPv4 would have by then, and IPv6 prefixes are twice as large as IPv4 (a 64-bit prefix vs. 32 bits of prefix+address; the remaining 64 bits of an IPv6 address are strictly local). So despite a 3x cost increase on memory structures (one 32-bit table for IPv4, two for IPv6) and a 2x increase on the lookup engine (two engines would be used for IPv6, one for IPv4), the same technology that can run IPv4 can do IPv6 at the same speed. As this is not a usual demand today, even hardware routers limit the forwarding table to the sum of IPv4 and IPv6 prefixes, and forward IPv6 at half the rate of IPv4.
s/64/128/ ...and total, complete, non-sense. please educate yourself more on reality of inet6 unicast forwarding before speculating. Thank you. James
IPv6 will someday account for as many networks as IPv4 would have by then, and IPv6 prefixes are twice as large as IPv4 (a 64-bit prefix vs. 32 bits of prefix+address; the remaining 64 bits of an IPv6 address are strictly local). So despite a 3x cost increase on memory structures (one 32-bit table for IPv4, two for IPv6) and a 2x increase on the lookup engine (two engines would be used for IPv6, one for IPv4), the same technology that can run IPv4 can do IPv6 at the same speed. As this is not a usual demand today, even hardware routers limit the forwarding table to the sum of IPv4 and IPv6 prefixes, and forward IPv6 at half the rate of IPv4.
s/64/128/
...and total, complete, non-sense. please educate yourself more on reality of inet6 unicast forwarding before speculating. Thank you.
From RFC 3513(Internet Protocol Version 6 (IPv6) Addressing Architecture): "For all unicast addresses, except those that start with binary value 000, Interface IDs are required to be 64 bits long and to be constructed in Modified EUI-64 format."
If the Interface ID is 64 bits large, the prefix would be 64 bits max, wouldn't it? Usually it will be somewhere between 32 bits and 64 bits.

As for 000 addresses:

  "Unassigned (see Note 1 below)   0000 0000   1/256
   Unassigned                      0000 0001   1/256
   Reserved for NSAP Allocation    0000 001    1/128   [RFC1888]
   Unassigned                      0000 01     1/64
   Unassigned                      0000 1      1/32
   Unassigned                      0001        1/16

   1. The "unspecified address", the "loopback address", and the IPv6
      Addresses with Embedded IPv4 Addresses are assigned out of the
      0000 0000 binary prefix space."

Embedded IPv4 can be forwarded using an IPv4 lookup, and all other 000 cases can be handled in the slow path as exceptions. IANA assignment starts at 001 and shouldn't get to any of the 000 sections.

One interesting note though is Pekka Savola's RFC3627: "Even though having prefix length longer than /64 is forbidden by [ADDRARCH] section 2.4 for non-000/3 unicast prefixes, using /127 prefix length has gained a lot of operational popularity;"

Are you arguing in the popularity sense? Is RFC 3513 that far apart from reality? An October 2005 (this month) article I found (http://www.usipv6.com/6sense/2005/oct/05.htm) says "Just as a reminder, IPv6 uses a 128-bit address, and current IPv6 unicast addressing uses the first 64 bits of this to actually describe the location of a node, with the remaining 64 bits being used as an endpoint identifier, not used for routing.", same as RFC 3513.

Limiting prefix length to 64 bits is a good thing; it would be even better to guarantee that prefixes are always 32 bits or longer, in order to use exact match search on the first 32 bits of the address, and longest prefix match only on the remaining 32 bits of the prefix identifier.

Rubens
One interesting note though is Pekka Savola's RFC3627: "Even though having prefix length longer than /64 is forbidden by [ADDRARCH] section 2.4 for non-000/3 unicast prefixes, using /127 prefix length has gained a lot of operational popularity;"
Are you arguing in the popularity sense? Is RFC 3513 that far apart from reality? An October 2005 (this month) article I found (http://www.usipv6.com/6sense/2005/oct/05.htm) says "Just as a reminder, IPv6 uses a 128-bit address, and current IPv6 unicast addressing uses the first 64 bits of this to actually describe the location of a node, with the remaining 64 bits being used as an endpoint identifier, not used for routing.", same as RFC 3513.
I'd have to say that RFC 3513 is out of touch with reality here, yes. As far as I know current routers with hardware based forwarding look at the full 128 bits - certainly our Juniper routers do.
Limiting prefix length to 64 bits is a good thing; it would be even better to guarantee that prefixes are always 32 bits or longer, in order to use exact match search on the first 32 bits of the address, and longest prefix match only on the remaining 32 bits of the prefix identifier.
Longer prefixes than 64 bits are already in use today (as an example, we use /124 for point to point links). It would be rather hard for a router vendor to introduce a new family of routers which completely broke backwards compatibility here, just in order to be "RFC 3513 compliant". Steinar Haug, Nethelp consulting, sthaug@nethelp.no
sthaug@nethelp.no (sthaug@nethelp.no) wrote:
I'd have to say that RFC 3513 is out of touch with reality here, yes. As far as I know current routers with hardware based forwarding look at the full 128 bits - certainly our Juniper routers do.
Ours do as well, but essentially that's because they are internal to our network. Nobody would need that in the shared DFZ part; there I agree with Rubens.

So although you would need the longer prefixes (right up to /128) in your routing core, you would not necessarily have to have them in your edge routers (as long as they don't directly connect to your core, like Cisco keeps telling us we should do).

Dunno whether that's a feasible approach, probably not (big transit providers essentially pushing transit through the core), but if possible it would lighten the routing burden a lot.

Yours, Elmar.
I'd have to say that RFC 3513 is out of touch with reality here, yes. As far as I know current routers with hardware based forwarding look at the full 128 bits - certainly our Juniper routers do.
Ours do as well, but essentially, that's because they are internal to our network. Nobody would need that in the shared DFZ part, there I agree with Rubens.
I agree about that part too.
So although you would need the longer prefixes (right up to /128) in your routing core, you would not necessarily have to have them in your edge routers (as long as they don't directly connect to your core, like Cisco keeps telling us we should do).
That's just it - even if you don't need to exchange longer than /64 prefixes with other providers, your routers still need to handle the longer prefixes in hardware (assuming you're using boxes with hardware based forwarding). Steinar Haug, Nethelp consulting, sthaug@nethelp.no
I think that to be "technically correct" is appropriate to say that we can have almost 2^64 networks (having reserved space doesn't mean that we can't use it in the future), and each network can accommodate up to 2^64 nodes. But is also true that it seems difficult in reality to reach that number of nodes. So it is so much inaccurate to say that IPv4 has 2^32 addresses than to say that IPv4 has 2^128, even if theoretically both figures are correct, because practical issues. Regards, Jordi
De: "Rubens Kuhl Jr." <rubensk@gmail.com> Responder a: <owner-nanog@merit.edu> Fecha: Thu, 27 Oct 2005 00:04:58 -0200 Para: James <james@towardex.com> CC: Lincoln Dale <ltd@interlink.com.au>, <nanog@nanog.org> Asunto: Re: Scalability issues in the Internet routing system
Forwarding is in line cards not because of CPU issues, but because of BUS issues. It means that the card can easily be software-based. Anyway, as I said, it is only a small, minor engineering question - how to forward with 2,000,000 routes. If the Internet requires such a router, it will be created easily. Today we need 160,000 routes - and it works (line cards, software, etc. - it DOES work).
On Wed, 26 Oct 2005 08:53:50 PDT, Alexei Roudnev said:
Anyway, as I said, it is only a small, minor engineering question - how to forward with 2,000,000 routes. If the Internet requires such a router, it will be created easily. Today we need 160,000 routes - and it works (line cards, software, etc. - it DOES work).
Forwarding packets is only half the story. Building a routing table is the other half. Route flaps. Even if you have an algorithm that's O(n), 2M routes will take 12.5 times as long to crunch as 160K. If your routing protocol is O(n**2) on number of routes, that's about 150 times as much. Such a router is probably buildable. I'm not at all convinced that it's "easy" to do so at a price point acceptable for most sites that currently have full routing tables.
On Oct 26, 2005, at 12:12 PM, Valdis.Kletnieks@vt.edu wrote:
On Wed, 26 Oct 2005 08:53:50 PDT, Alexei Roudnev said:
Anyway, as I said, it is only a small, minor engineering question - how to forward with 2,000,000 routes. If the Internet requires such a router, it will be created easily. Today we need 160,000 routes - and it works (line cards, software, etc. - it DOES work).
Forwarding packets is only half the story. Building a routing table is the other half.
Route flaps. Even if you have an algorithm that's O(n), 2M routes will take 12.5 times as long to crunch as 160K. If your routing protocol is O (n**2) on number of routes, that's about 150 times as much.
Such a router is probably buildable. I'm not at all convinced that it's "easy" to do so at a price point acceptable for most sites that currently have full routing tables.
There are definitely performance challenges to overcome. Of course, most route processors are underpowered compared to the existing state of the art for processors, so there is some wiggle room. With both Cisco and Juniper we have a nice period of hang time as "brand new" routes get installed. Both vendors are playing with layers of abstraction to improve things once up and operational, but increasing the amount of time to bring a device "online" is a factor which influences purchasing decisions as well.

It does seem appropriate to consider Gigabit-sized routing/forwarding table interconnects and working on TCP performance optimization for BGP specifically, if any improvement remains. Combine those things with a chunky CPU and you are left with pushing data as fast as possible into the forwarding plane (need speedy ASIC table updates here).

Another thing, it would be interesting to hear of any work on breaking the "router code" into multiple threads. Being able to truly take advantage of multiple processors when receiving 2M updates would be the cat's pajamas. Has anyone seen this? I suppose MBGP could be rather straightforward, as opposed to one big table, in a multi-processor implementation.

Regards, Blaine
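[Purely as a toy illustration of the per-table sharding Blaine muses about, one could imagine splitting update digestion across worker processes by address family or table. Nothing here resembles a real BGP implementation; the table names and update stream are made up:]

from collections import defaultdict
from concurrent.futures import ProcessPoolExecutor

# Hypothetical update stream: (address-family/table, prefix) pairs.
updates = [
    ("ipv4-unicast", "10.0.0.0/8"),
    ("ipv6-unicast", "2001:db8::/32"),
    ("ipv4-unicast", "192.0.2.0/24"),
    ("vpnv4",        "10.1.0.0/16"),
]

def digest(item):
    """Stand-in for best-path selection over one table's worth of updates."""
    table, prefixes = item
    return table, len(prefixes)

if __name__ == "__main__":
    by_table = defaultdict(list)
    for table, prefix in updates:        # shard by table, not one monolithic queue
        by_table[table].append(prefix)

    with ProcessPoolExecutor() as pool:  # independent tables crunched in parallel
        for table, n in pool.map(digest, by_table.items()):
            print(table, ":", n, "updates digested")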
Blaine Christian wrote:
It does seem appropriate to consider Gigabit sized routing/forwarding table interconnects and working on TCP performance optimization for BGP specifically, if any improvement remains. Combine those things with a chunky CPU and you are left with pushing data as fast as possible into the forwarding plane (need speedy ASIC table updates here).
I guess you got something wrong here. Neither BGP nor TCP (never has been) are a bottleneck regarding the subject of this discussion.
Another thing, it would be interesting to hear of any work on breaking the "router code" into multiple threads. Being able to truly take advantage of multiple processors when receiving 2M updates would be the cats pajamas. Has anyone seen this? I suppose MBGP could be rather straightforward, as opposed to one big table, in a multi-processor implementation.
You may want to read this thread from the beginning. The problem is not the routing plane or routing protocol but the forwarding plane or ASICs or whatever. Both have very different scaling properties. The forwarding plane is at a disadvantage here because at the same time it faces growth in table size and less time to perform a lookup. With current CPUs you can handle a 2M prefix DFZ quite well without killing the budget. For the forwarding hardware this ain't the case, unfortunately. -- Andre
On Wed, 26 Oct 2005, Andre Oppermann wrote:
Blaine Christian wrote:
It does seem appropriate to consider Gigabit sized routing/forwarding table interconnects and working on TCP performance optimization for BGP specifically, if any improvement remains. Combine those things with a chunky CPU and you are left with pushing data as fast as possible into the forwarding plane (need speedy ASIC table updates here).
I guess you got something wrong here. Neither BGP nor TCP (never has been) are a bottleneck regarding the subject of this discussion.
i think he's describing initial table gather/flood and later massage of that into FIB on cards ... which relates to his earlier comment about 'people still care about how fast initial convergence happens' (which is true)
Another thing, it would be interesting to hear of any work on breaking the "router code" into multiple threads. Being able to truly take advantage of multiple processors when receiving 2M updates would be the cats pajamas. Has anyone seen this? I suppose MBGP could be rather straightforward, as opposed to one big table, in a multi-processor implementation.
You may want to read this thread from the beginning. The problem is not the routing plane or routing protocol but the forwarding plane or ASIC's
it's actually both... convergence is very, very important. Some of the conversation (which I admit I have only watched spottily) has covered this too.
or whatever. Both have very different scaling properties. The forwarding plane is at a disadvantage here because at the same time it faces growth in table size and less time to perform a lookup. With current CPUs you can handle a 2M prefix DFZ quite well without killing the budget. For the
really? are you sure about that? are you referring to linecard CPU or RIB->FIB creation CPU? (be it monolithic design or distributed design)
forwarding hardware this ain't the case, unfortunately.
this could be... I'm not sure I've seen a vendor propose the cost differentials though.
Another thing, it would be interesting to hear of any work on breaking the "router code" into multiple threads. Being able to truly take advantage of multiple processors when receiving 2M updates would be the cats pajamas. Has anyone seen this? I suppose MBGP could be rather straightforward, as opposed to one big table, in a multi-processor implementation.
You may want to read this thread from the beginning. The problem is not the routing plane or routing protocol but the forwarding plane or ASICs or whatever. Both have very different scaling properties. The forwarding plane is at a disadvantage here because at the same time it faces growth in table size and less time to perform a lookup. With current CPUs you can handle a 2M prefix DFZ quite well without killing the budget. For the forwarding hardware this ain't the case, unfortunately.
Hi Andre...

I hear what you are saying but don't agree with the above statement. The problem is with the system as a whole, and I believe that was the point Valdis, and others, were making as well. The forwarding plane is only one part of the puzzle. How do you get the updates into the forwarding plane? How do you get the updates into the router in the first place, and how fast can you do that? I have seen at least one case where the issue did not appear to be the ASICs but getting the information into them rapidly. If you go and create a new ASIC without taking into account the manner in which you get the data into it, you probably won't sell many routers <grin>.

BTW, I do agree that spinning new ASICs is a non-trivial task and is certainly the task you want to get started quickly when building a new system.

I did read your comment on BGP lending itself to SMP. Can you elaborate on where you might have seen this? It has been a pretty monolithic implementation for as long as I can remember. In fact, that was why I asked the question, to see if anyone had actually observed a functioning multi-processor implementation of the BGP process.

Regards, Blaine
Blaine Christian wrote:
Another thing, it would be interesting to hear of any work on breaking the "router code" into multiple threads. Being able to truly take advantage of multiple processors when receiving 2M updates would be the cat's pajamas. Has anyone seen this? I suppose MBGP could be rather straightforward, as opposed to one big table, in a multi-processor implementation.
You may want to read this thread from the beginning. The problem is not the routing plane or routing protocol but the forwarding plane or ASICs or whatever. Both have very different scaling properties. The forwarding plane is at a disadvantage here because at the same time it faces growth in table size and less time to perform a lookup. With current CPUs you can handle a 2M prefix DFZ quite well without killing the budget. For the forwarding hardware this ain't the case, unfortunately.
Hi Andre...
I hear what you are saying but don't agree with the above statement. The problem is with the system as a whole and I believe that was the point Vladis, and others, were making as well. The forwarding plane is only one part of the puzzle. How do you get the updates into the forwarding plane? How do you get the updates into the router in the first place and how fast can you do that? I have seen at least one case where the issue did not appear to be the ASICs but getting the information into them rapidly. If you go and create a new ASIC without taking into account the manner in which you get the data into it you probably won't sell many routers <grin>.
Sure, if you have a bottleneck at FIB insertion you fail much earlier. I'd say if that happens it's an engineering oversight or a design tradeoff. However, I don't think this is the choke point in the entire routing table size equation. Depending on the type of prefix churn you don't have that many transactions reaching the FIB. Most far-away churn doesn't change the next hop, for example. Local churn, when direct neighbors flap, mostly just changes the nexthop (egress interface). In a high-performance ASIC/TCAM (or whatever) FIB, a nexthop change can be done quite trivially. A prefix drop can be handled by marking it invalid and garbage collecting it later. Prefix insertions may either salvage an invalidated prefix or have to be re-inserted. The insertion time depends on the algorithms of the FIB table implementation. For all practical purposes a FIB can be designed to be quite speedy in this regard without busting the budget. The link speed between two DFZ routers has seldom been the limit for initial routing table exchanges. Neither has TCP. It is mostly dominated by the algorithm choice and the CPU of the RIB processor on both ends.
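To make the FIB-transaction point above concrete, here is a minimal C sketch of such an entry lifecycle. The structure and names (fib_entry, fib_announce, and so on) are invented for illustration and are not any vendor's or OpenBGPd's actual data structures: a neighbor flap only rewrites the nexthop field, a withdrawal just marks the entry invalid for later garbage collection, and a re-announcement can salvage the invalidated slot.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical FIB entry: prefix, mask length, egress nexthop, valid flag. */
struct fib_entry {
    uint32_t prefix;
    uint8_t  plen;
    uint32_t nexthop;
    int      valid;
};

#define FIB_SIZE 4
static struct fib_entry fib[FIB_SIZE];

/* Local churn: a neighbor flap usually just rewrites the nexthop in place. */
static void fib_change_nexthop(struct fib_entry *e, uint32_t nh)
{
    e->nexthop = nh;            /* single word write, cheap in hardware too */
}

/* Far-away churn that withdraws a prefix: mark invalid, garbage-collect later. */
static void fib_withdraw(struct fib_entry *e)
{
    e->valid = 0;
}

/* A re-announcement may salvage the invalidated slot instead of re-inserting. */
static int fib_announce(uint32_t prefix, uint8_t plen, uint32_t nh)
{
    for (int i = 0; i < FIB_SIZE; i++) {
        if (fib[i].prefix == prefix && fib[i].plen == plen) {
            fib[i].nexthop = nh;
            fib[i].valid = 1;   /* salvaged, no structural change */
            return i;
        }
    }
    for (int i = 0; i < FIB_SIZE; i++) {
        if (!fib[i].valid) {    /* reuse a garbage slot */
            fib[i] = (struct fib_entry){ prefix, plen, nh, 1 };
            return i;
        }
    }
    return -1;                  /* full: a real FIB would restructure here */
}

int main(void)
{
    int i = fib_announce(0xC0A80000, 16, 1);   /* 192.168.0.0/16 via interface 1 */
    fib_change_nexthop(&fib[i], 2);            /* neighbor flap: new egress */
    fib_withdraw(&fib[i]);                     /* withdraw, GC later */
    fib_announce(0xC0A80000, 16, 3);           /* salvage the same slot */
    printf("slot %d nexthop %u valid %d\n", i, (unsigned)fib[i].nexthop, fib[i].valid);
    return 0;
}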
BTW, I do agree that spinning new ASICs is a non-trivial task and is certainly the task you want to get started quickly when building a new system.
It is non-trivial for its prefix storage size and ultra-fast lookup times. Longest prefix match is probably the most difficult thing to scale properly, as a search must always be done over a number of overlapping prefixes. To scale this much better and remove the bottleneck you may drop the 'overlapping' part or the 'longest-match' part, and the world suddenly looks much brighter. This is the crucial thing that got forgotten during the IPng design phase which brought us IPv6. So far we have learned that limiting the number of IPv[46] prefixes in the DFZ is not an option for commercial and socio-technical reasons. That leaves only the other option of changing the routing lookup to something with better scaling properties.
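The difference being described can be seen in a tiny C sketch: longest-prefix match has to consider every covering (overlapping) prefix and keep the longest one, while a perfect (exact) match is a single-key comparison that maps naturally onto a hash or CAM. The linear scans below are illustrative only; real forwarding hardware uses TCAMs or compressed tries.

#include <stdint.h>
#include <stdio.h>

struct route { uint32_t prefix; uint8_t plen; int nexthop; };

/* Overlapping prefixes: 10.0.0.0/8 and the more specific 10.1.0.0/16. */
static const struct route table[] = {
    { 0x0A000000,  8, 1 },
    { 0x0A010000, 16, 2 },
};
#define N (sizeof(table) / sizeof(table[0]))

static uint32_t mask(uint8_t plen) { return plen ? 0xFFFFFFFFu << (32 - plen) : 0; }

/* Longest-prefix match: every lookup must consider all covering prefixes
 * and keep the longest one -- this is the part that is hard to scale. */
static int lpm_lookup(uint32_t dst)
{
    int best = -1, best_len = -1;
    for (size_t i = 0; i < N; i++)
        if ((dst & mask(table[i].plen)) == table[i].prefix &&
            table[i].plen > best_len) {
            best = table[i].nexthop;
            best_len = table[i].plen;
        }
    return best;
}

/* Perfect (exact) match: one key, no "longest" tie-breaking -- trivially
 * turned into a hash or CAM lookup in hardware. */
static int exact_lookup(uint32_t key)
{
    for (size_t i = 0; i < N; i++)
        if (table[i].prefix == key)
            return table[i].nexthop;
    return -1;
}

int main(void)
{
    printf("LPM   10.1.2.3 -> nexthop %d\n", lpm_lookup(0x0A010203));  /* 2 */
    printf("LPM   10.9.9.9 -> nexthop %d\n", lpm_lookup(0x0A090909));  /* 1 */
    printf("exact 10.1.0.0 -> nexthop %d\n", exact_lookup(0x0A010000));
    return 0;
}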
I did read your comment on BGP lending itself to SMP. Can you elaborate on where you might have seen this? It has been a pretty monolithic implementation for as long as I can remember. In fact, that was why I asked the question, to see if anyone had actually observed a functioning multi-processor implementation of the BGP process.
I can make the SMP statement with some authority as I have done the internal design of the OpenBGPd RDE and my co-worker Claudio has implemented it. Given proper locking of the RIB, a number of CPUs can crunch on it and handle neighbor communication independently of each other. If you look at Oracle databases, they manage to scale performance by a factor of 1.9-1.97 per CPU. There is no reason to believe we can't do this with the BGP 'database'. -- Andre
I did read your comment on BGP lending itself to SMP. Can you elaborate on where you might have seen this? It has been a pretty monolithic implementation for as long as I can remember. In fact, that was why I asked the question, to see if anyone had actually observed a functioning multi-processor implementation of the BGP process.
I can make the SMP statement with some authority as I have done the internal design of the OpenBGPd RDE and my co-worker Claudio has implemented it. Given proper locking of the RIB, a number of CPUs can crunch on it and handle neighbor communication independently of each other. If you look at Oracle databases, they manage to scale performance by a factor of 1.9-1.97 per CPU. There is no reason to believe we can't do this with the BGP 'database'.
Neat! So you were thinking you would leave the actual route selection process monolithic and create separate processes per peer? I have seen folks doing something similar with separate MBGP routing instances. Had not heard of anyone attempting this for a "global" routing table with separate threads per neighbor (as opposed to per table). What do you do if you have one neighbor who wants to send you all 2M routes though? I am thinking of route reflectors specifically but also confederation EIBGP sessions. I think you hit the nail on the head regarding record locking. This is the thing that is going to bite you if anything will. I have heard none of the usual suspects speak up so I suspect that either this thread is now being ignored or no one has heard of an implementation like the one you just described.
Blaine Christian wrote:
I did read your comment on BGP lending itself to SMP. Can you elaborate on where you might have seen this? It has been a pretty monolithic implementation for as long as I can remember. In fact, that was why I asked the question, to see if anyone had actually observed a functioning multi-processor implementation of the BGP process.
I can make the SMP statement with some authority as I have done the internal design of the OpenBGPd RDE and my co-worker Claudio has implemented it. Given proper locking of the RIB, a number of CPUs can crunch on it and handle neighbor communication independently of each other. If you look at Oracle databases, they manage to scale performance by a factor of 1.9-1.97 per CPU. There is no reason to believe we can't do this with the BGP 'database'.
Neat! So you were thinking you would leave the actual route selection process monolithic and create separate processes per peer? I have seen folks doing something similar with separate MBGP routing instances. Had not heard of anyone attempting this for a "global" routing table with separate threads per neighbor (as opposed to per table). What do you do if you have one neighbor who wants to send you all 2M routes though? I am thinking of route reflectors specifically but also confederation EIBGP sessions. I think you hit the nail on the head regarding record locking. This is the thing that is going to bite you if anything will. I have heard none of the usual suspects speak up so I suspect that either this thread is now being ignored or no one has heard of an implementation like the one you just described.
There is no 'global' route (actually path) selection in BGP. Everything is done per prefix+path. In the RIB you can just lock the prefix, insert the new path and recalculate which one wins. Then issue the update to the FIB, if any. Work done. Statistically there is very little contention on the prefix and path records. For contention, two updates for the same prefix would have to arrive at the same time from two different peers handled by different CPUs. I'd guess the SMP scaling factor for BGP is around 1.98. The 0.02 is lost to locking overhead and negative caching effects. Real serialization happens only at the FIB change queue. However, serializing queues can be handled very efficiently on SMP too. -- Andre
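A minimal POSIX-threads sketch of the per-prefix locking scheme described above. This is an illustration of the idea only; the names and the trivial "preference" tie-break are invented here, and it is not OpenBGPd's actual code. Each worker locks just the prefix it is updating, recomputes the winner, and only winning changes reach a serialized FIB change queue.

#include <pthread.h>
#include <stdio.h>

#define MAX_PEERS 4

/* One RIB entry: per-prefix lock, one path slot per peer, current best peer. */
struct rib_prefix {
    pthread_mutex_t lock;
    int path_pref[MAX_PEERS];   /* 0 = no path learned from that peer */
    int best_peer;
};

/* Serialized FIB change queue: the only point of real serialization. */
static pthread_mutex_t fibq_lock = PTHREAD_MUTEX_INITIALIZER;
static int fibq_changes;

static void fib_enqueue(int prefix_id, int best_peer)
{
    pthread_mutex_lock(&fibq_lock);
    fibq_changes++;             /* stand-in for pushing a real change record */
    pthread_mutex_unlock(&fibq_lock);
    (void)prefix_id; (void)best_peer;
}

/* Per-peer worker path: lock just this prefix, update the path, re-decide. */
static void rib_update(struct rib_prefix *p, int prefix_id, int peer, int pref)
{
    pthread_mutex_lock(&p->lock);
    p->path_pref[peer] = pref;
    int best = -1, best_pref = -1;
    for (int i = 0; i < MAX_PEERS; i++)
        if (p->path_pref[i] > best_pref) { best = i; best_pref = p->path_pref[i]; }
    int changed = (best != p->best_peer);
    p->best_peer = best;
    pthread_mutex_unlock(&p->lock);
    if (changed)
        fib_enqueue(prefix_id, best);   /* only winners reach the FIB queue */
}

struct job { struct rib_prefix *p; int peer; };
static struct rib_prefix pfx = { PTHREAD_MUTEX_INITIALIZER, {0}, -1 };

static void *peer_thread(void *arg)
{
    struct job *j = arg;
    rib_update(j->p, 0, j->peer, 100 + j->peer);  /* made-up path attributes */
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    struct job jobs[2] = { { &pfx, 0 }, { &pfx, 1 } };
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, peer_thread, &jobs[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    printf("best peer %d, FIB changes %d\n", pfx.best_peer, fibq_changes);
    return 0;
}

Build with something like cc -pthread. With per-prefix locks the only globally contended structure is the FIB change queue, which is why the serialization cost can stay small.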
Neat! So you were thinking you would leave the actual route selection process monolithic and create separate processes per peer? I have seen folks doing something similar with separate MBGP routing instances. Had not heard of anyone attempting this for a "global" routing table with separate threads per neighbor (as opposed to per table). What do you do if you have one neighbor who wants to send you all 2M routes though? I am thinking of route reflectors specifically but also confederation EIBGP sessions.
I think you hit the nail on the head regarding record locking. This is the thing that is going to bite you if anything will. I have heard none of the usual suspects speak up so I suspect that either this thread is now being ignored or no one has heard of an implementation like the one you just described.
There is no 'global' route (actually path) selection in BGP. Everything is done per prefix+path. In the RIB you can just lock the prefix, insert the new path and recalculate which one wins. Then issue the update to the FIB, if any. Work done. Statistically there is very little contention on the prefix and path records. For contention, two updates for the same prefix would have to arrive at the same time from two different peers handled by different CPUs. I'd guess the SMP scaling factor for BGP is around 1.98. The 0.02 is lost to locking overhead and negative caching effects. Real serialization happens only at the FIB change queue. However, serializing queues can be handled very efficiently on SMP too.
Hey Andre, if you are intending to break the BGP process into per-neighbor threads, this does not sound like it would have a beneficial impact on a single neighbor with the majority of the routes (thinking specifically of EIBGP and/or Route Reflectors). Was your idea specifically related to per-neighbor processing, or were you thinking you could break the BGP process itself into chunks?
Another thing, it would be interesting to hear of any work on breaking the "router code" into multiple threads. Being able to truly take advantage of multiple processors when receiving 2M updates would be the cat's pajamas. Has anyone seen this? I suppose MBGP could be rather straightforward, as opposed to one big table, in a multi-processor implementation.
Why bother multithreading when you can just use multiple CPUs? :-)

Nowadays, a CPU is not a chip, it is a core. "Core" is the name for a section of a chip which functions as a CPU. Cores are actually software written in a language such as VHDL (VHSIC Hardware Description Language). VHSIC stands for Very High Speed Integrated Circuit. The core is "compiled" into hardware on either an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). An FPGA can be reconfigured by software at any time; for instance, you could reprogram an FPGA to do route lookups for a specific set of prefixes and change the hardware whenever the prefix list changes.

Most ASICs nowadays are actually hybrid chips because they contain an FPGA section. Now, back to cores. Since the CPU core is simply software, it is possible to install multiple copies of the core on an FPGA or an ASIC if there is enough space. The cores for RISC machines like ARM are much smaller than the core for a Pentium and therefore a simple RISC CPU core can be replicated more times.

Now, with that information in hand, you will be able to understand just what Cisco and IBM have done in creating the CRS-1 chip with a minimum of 188 CPUs on the chip. http://www.eet.com/showArticle.jhtml?articleID=26806315

Just as the line between routers and switches has become blurred, so too has the line between hardware and software become blurred. --Michael Dillon
On Thu, 27 Oct 2005 Michael.Dillon@btradianz.com wrote:
Another thing, it would be interesting to hear of any work on breaking the "router code" into multiple threads. Being able to truly take advantage of multiple processors when receiving 2M updates would be the cat's pajamas. Has anyone seen this? I suppose MBGP could be rather straightforward, as opposed to one big table, in a multi-processor implementation.
Why bother multithreading when you can just use multiple CPUs? :-)
Nowadays, a CPU is not a chip, it is a core. "Core" is the name for a section of a chip which functions as a CPU. Cores are actually software written in a language such as VHDL (VHSIC Hardware Description Language). VHSIC stands for Very High Speed Integrated Circuit. The core is "compiled" into hardware on either an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). An FPGA can be reconfigured by software at any time, for instance you could reprogram an FPGA to do route lookups for a specific set of prefixes and change the hardware whenever the prefix list changes.
Most ASICs nowadays are actually hybrid chips because they contain an FPGA section. Now, back to cores. Since the CPU core is simply software, it is possible to install multiple copies of the core on an FPGA or an ASIC if there is enough space. The cores for RISC machines like ARM are much smaller than the core for a Pentium and therefore a simple RISC CPU core can be replicated more times.
Now, with that information in hand, you will be able to understand just what Cisco and IBM have done in creating the CRS-1 chip with a minimum of 188 CPUs on the chip. http://www.eet.com/showArticle.jhtml?articleID=26806315
Just as the line between routers and switches has become blurred, so too has the line between hardware and software become blurred.
Thank you very much for the above description - very interesting. I do however note that multiple CPUs are not a replacement for multi-threading (in fact multi-threading takes advantage of multiple CPUs very well). When you add an operating system (software) to your design of a multi-CPU system-on-one-chip substructure, it looks like:

          ASIC/FPGA
  ----------------------
    |      |      |      |
   CPU1   CPU2   CPU3   CPU4
    |      |      |      |
    OS     OS     OS     OS
    |      |      |      |
   Pr1    Pr2    Pr3    Pr4      (Pr is short for Process)

Which means the OS is replicated. In fact it's even worse, as the OS would have to be independent and use separate memory of fixed size. In actuality, when you do have multiple CPUs you still want the design to look like:

          ASIC/FPGA
  ----------------------
    |      |      |      |
   CPU1   CPU2   CPU3   CPU4
    |      |      |      |
  ----------------------
            OS
  ----------------------
      |            |
   process1--   process2--
    |      |     |      |
  thread1 thread2 thread3 thread4

In the above typical system design the OS is shared code for the entire system that controls what actual code is run on what CPU (the OS functions are not replicated for each CPU/system), and the system can share physical memory easily. Each process is an independent piece of code, whereas a thread is one of multiple instances of the same process run simultaneously to take care of a particular operation (the separation into processes and threads is so that the actual programming code is not replicated in memory for each instance of a process). And this system would still take advantage of multiple CPUs properly, because the OS makes it so that each thread actually runs on a different CPU in parallel - but memory management and programming code are shared and not replicated, as they would be with fully independent parallel systems. -- William Leibzon Elan Networks william@elan.net
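A small POSIX-threads illustration of the shared-memory point above, purely for illustration: both threads update one counter in the single shared address space, scheduled by one OS across the available CPUs; two fork()ed processes would each see only their own private copy.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_updates;        /* one copy, visible to every thread */

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);
        shared_updates++;          /* same memory, no copying between CPUs */
        pthread_mutex_unlock(&lock);
    }
    return arg;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Prints 200000: both threads updated the single shared counter; two
     * fork()ed processes would each have reported 100000 from private copies. */
    printf("%ld\n", shared_updates);
    return 0;
}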
Alexei Roudnev wrote:
Forwarding is in line cards not because of CPU issues, but because of BUS issues.
i respectfully disagree. "centralized forwarding" only gets you so far on the performance scale. "distributed forwarding" is a (relatively) simple way to scale that performance. just take a look at any 'modern' router (as in, something this century) with a requirement of (say) >10M PPS. sure - there are reasons why one DOES have to go to a distributed model - 'bus limitations' as you say .. but i'd more classify those as physical chip-packaging limitations - how many pins you can put on a chip, how 'wide' the memory-bus needs to be as the PPS goes up.
It means that the card can easily be software-based.
once again - disagree. it _may_ be that it means that forwarding can be in software - but for the most part the determining factor here is what is the PPS required for the function. i've previously posted a categorization of requirements in a router based on their function -- see <http://www.merit.edu/mail.archives/nanog/2005-09/msg00635.html> i think _software-based_ works for /some/ types of router functions - but nowhere near all - and certainly not a 'core' router this century.
Anyway, as I said - it is only a small, minor engineering question - how to forward when holding 2,000,000 routes. If the Internet requires such a router, it will be created easily. Today we need 160,000 routes - and it works (line cards, software, etc. - it DOES WORK).
if you're looking at routers based on their classification, clearly there isn't a requirement for all types of routers to have a full routing table. but for a 'core router' and 'transit/peering routers', the ability to work with a full routing-table view is probably a requirement - both now, and into the future. there have been public demonstrations of released routers supporting upwards of 1.5M IPv4+IPv6 prefixes and demonstrations on routing churn convergence time. <http://www.lightreading.com/document.asp?doc_id=63606> contains one such public test. cheers, lincoln.
----- Original Message ----- From: "Lincoln Dale" <ltd@interlink.com.au> To: "Alexei Roudnev" <alex@relcom.net> Cc: <nanog@nanog.org>; "Daniel Senie" <dts@senie.com> Sent: Wednesday, October 26, 2005 2:42 AM Subject: Re: Scalability issues in the Internet routing system
Alexei Roudnev wrote:
You do not need to forward 100% of packets at line-card rate; forwarding 95% of packets at card rate and having other processing (with possible delays) on the central CPU can work well enough..
heh. in the words of Randy, "i encourage my competitors to build a router thru this way".
reality is that any "big, fast" router is forwarding in hardware - typically an ASIC or some form of programmable processor. the lines here are getting blurry again .. Moore's Law means that packet-forwarding can pretty much be back "in software" in something which almost resembles a general-purpose processor - or maybe more than a few of them working in parallel (ref: <http://www-03.ibm.com/chips/news/2004/0609_cisco.html>).
if you've built something to be 'big' and 'fast' its likely that you're also forwarding in some kind of 'distributed' manner (as opposed to 'centralized').
as such - if you're building forwarding hardware capable of (say) 25M PPS and line-rate is 30M PPS, it generally isn't that much of a jump to build it for 30M PPS instead.
i don't disagree that interfaces / backbones / networks are getting faster - but i don't think its yet a case of "Moore's law" becoming a problem - all that happens is one architects a system far more modular than before - e.g. ingress forwarding separate from egress forwarding.
likewise, "FIB table growth" isn't yet a problem either - generally that just means "put in more SRAM" or "put in more TCAM space".
IPv6 may change the equations around .. but we'll see ..
cheers,
lincoln.
there have been public demonstrations of released routers supporting upwards of 1.5M IPv4+IPv6 prefixes and demonstrations on routing churn convergence time. <http://www.lightreading.com/document.asp?doc_id=63606> contains one such public test.
The http://www.lightreading.com/document.asp?site=testing&doc_id=63606&page_number=6 part may be a bit misleading. For me it would be more interesting to see what happens when 500k routes completely disappear from the router then come back. I want to see a 500k route push from a neighboring CRS in that amount of time... Of course the routes can switch quick when you use a layer of indirection (folks have been doing that for a few years now). My question is how fast can you install routes from a standing start (or a 1/4 of a standing start if this is 2M prefixes). I will leave the question on whether it is actually worth an investment in time and resources as an exercise for the reader <grin>. Lightreading people, test it like that! It will be much more entertaining and perhaps even a bit enlightening to see how major vendors compare on "brand new" route installation into RIB and FIB. They only have to twiddle a couple bits to make indirection work quickly. Having to deal with a brand new prefix is a completely different problem.
If these 500K routes come from an upstream, it is just _default_, so they can be installed instantly if the configuration is correct. If these 500K routes are from a peer, you switch (in reality) 10-20%, so it is simpler anyway. Even if it is a multihomed customer, there is no need for _fast_ installation of these 500K routes. You just switch _some_ of the routes from one provider to another - if it takes 1 minute, nothing bad happens. Then, calculate: 500K routes, say 32 bytes/route (if not compressed in some way), is 16 MB. T1 link, 100 KB/second, 160 seconds, about 3 minutes. 100 Mbit link, 10 MB/second, 2 seconds. A T1 will not be suitable for full routing of course, so what? Again - there are many tricks to do things right, outside the theorizing of the IPv6 committees. (A sketch checking this arithmetic follows the quoted message below.)
----- Original Message ----- From: "Blaine Christian" <blaine@blaines.net> To: "Lincoln Dale" <ltd@interlink.com.au> Cc: "Alexei Roudnev" <alex@relcom.net>; <nanog@nanog.org>; "Daniel Senie" <dts@senie.com> Sent: Wednesday, October 26, 2005 6:06 PM Subject: Re: Scalability issues in the Internet routing system
there have been public demonstrations of released routers supporting upwards of 1.5M IPv4+IPv6 prefixes and demonstrations on routing churn convergence time. <http://www.lightreading.com/document.asp?doc_id=63606> contains one such public test.
The http://www.lightreading.com/document.asp?site=testing&doc_id=63606&page_number=6 part may be a bit misleading. For me it would be more interesting to see what happens when 500k routes completely disappear from the router then come back. I want to see a 500k route push from a neighboring CRS in that amount of time...
Of course the routes can switch quick when you use a layer of indirection (folks have been doing that for a few years now). My question is how fast can you install routes from a standing start (or a 1/4 of a standing start if this is 2M prefixes).
I will leave the question on whether it is actually worth an investment in time and resources as an exercise for the reader <grin>.
Lightreading people, test it like that! It will be much more entertaining and perhaps even a bit enlightening to see how major vendors compare on "brand new" route installation into RIB and FIB. They only have to twiddle a couple bits to make indirection work quickly. Having to deal with a brand new prefix is a completely different problem.
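Alexei's transfer-time arithmetic above, written out as a small program so the numbers can be checked; the 32 bytes/route figure and the two link rates are his stated assumptions, not measured values.

#include <stdio.h>

int main(void)
{
    /* Assumptions taken from Alexei's note: 500K routes at ~32 bytes each. */
    const double routes     = 500e3;
    const double bytes_each = 32.0;
    const double total      = routes * bytes_each;   /* ~16 MB */

    const double t1_Bps     = 100e3;   /* ~100 KB/s on a T1, per the note   */
    const double fe_Bps     = 10e6;    /* ~10 MB/s on a 100 Mbit/s link     */

    printf("table size : %.0f MB\n", total / 1e6);
    printf("over T1    : %.0f s (~%.0f min)\n", total / t1_Bps, total / t1_Bps / 60);
    printf("over 100M  : %.1f s\n", total / fe_Bps);
    return 0;
}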
Alexei Roudnev wrote:
If these 500K routes come from an upstream, it is just _default_, so they can be installed instantly if the configuration is correct.
mostly correct -- you're talking about a RIB->FIB optimization -- potentially no need to populate 500K FIB entries as they essentially result in the 'same' path. however, note that this works both ways -- these are 'more specific' prefixes so should always take priority over a '0/0' route. also note that if the upstream stops announcing a '0/0' route, then you're going to have to instantiate those 500K prefixes awfully quickly... it would be "broken" if an optimization such as this meant that you had even one second of blackholing traffic destined to one of those 500K prefixes while an 'optimization' instantiated forwarding entries that should have been there in the first place... in my humble view, i'd argue that this is but one part of building a router and there are potentially many many more things that one needs to optimize for.
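A toy illustration of the RIB->FIB optimization described above, and of its failure mode: specifics whose nexthop matches the covering default can be suppressed from the FIB, but the moment the 0/0 disappears all of them must be installed at once. The tiny table and nexthop values are made up for illustration.

#include <stdio.h>

#define NPFX 5

/* RIB view: nexthop learned for the default and for each more specific.
 * Values are invented for illustration. */
static int default_nh = 1;
static int specific_nh[NPFX] = { 1, 1, 2, 1, 1 };

/* RIB->FIB optimization: only install specifics whose nexthop differs from
 * the covering default; the rest are covered by the 0/0 entry anyway. */
static int fib_entries_needed(void)
{
    int n = (default_nh >= 0) ? 1 : 0;   /* the 0/0 entry itself, if present */
    for (int i = 0; i < NPFX; i++)
        if (specific_nh[i] != default_nh)
            n++;
    return n;
}

int main(void)
{
    printf("with 0/0 present : %d FIB entries\n", fib_entries_needed());

    /* The catch pointed out above: if the upstream withdraws 0/0, every
     * suppressed specific must be instantiated immediately or traffic to
     * those prefixes is blackholed while the "optimization" unwinds. */
    default_nh = -1;                     /* -1: no default route any more */
    printf("after 0/0 is gone: %d FIB entries\n", fib_entries_needed());
    return 0;
}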
If these 500K routes are from a peer, you switch (in reality) 10-20%, so it is simpler anyway.
Even if it is a multihomed customer, there is no need for _fast_ installation of these 500K routes. You just switch _some_ of the routes from one provider to another - if it takes 1 minute, nothing bad happens.
this is the whole "populate the forwarding table on demand" approach (a.k.a. "route cache") versus "prepopulate the forwarding table" (a.k.a. CEF). i think history has shown that the latter is far more necessary than the former. think DDoS attack. the former works provided you're not pushing traffic to bogus addresses. it may be that under 'normal' conditions you have traffic going to less than 20% of prefixes. but think of a worm/virus looking for new hosts to infect - typically guessing random ip-addresses to probe. cheers, lincoln.
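A small simulation of the trade-off described above: an on-demand route cache looks fine while traffic stays within a hot working set, but random-destination traffic (worm scanning, DDoS towards bogus addresses) drives the miss rate towards 100% and punts nearly every packet to the slow path. The cache size and traffic mix are arbitrary illustration values.

#include <stdio.h>
#include <stdlib.h>

#define CACHE_SLOTS 1024
#define PREFIXES    (1 << 15)    /* pretend table of ~32K destinations */

static int cache[CACHE_SLOTS];   /* on-demand cache: last prefix seen per slot */

/* Returns 1 on a cache hit, 0 on a miss that must punt to the slow path. */
static int cache_lookup(int prefix)
{
    int slot = prefix % CACHE_SLOTS;
    if (cache[slot] == prefix)
        return 1;
    cache[slot] = prefix;        /* install on demand */
    return 0;
}

static void run(const char *label, int spread)
{
    int misses = 0, lookups = 200000;
    for (int i = 0; i < lookups; i++)
        if (!cache_lookup(rand() % spread))
            misses++;
    printf("%-16s miss rate %.1f%%\n", label, 100.0 * misses / lookups);
}

int main(void)
{
    /* "Normal" traffic concentrated on a small, hot set of destinations. */
    run("hot working set", 512);
    /* Worm/DDoS traffic probing random addresses across the whole table:
     * the cache thrashes and nearly every packet punts to the slow path. */
    run("random scan", PREFIXES);
    return 0;
}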
Andre Oppermann wrote:
I guess it's time to have a look at the actual scalability issues we face in the Internet routing system. Maybe the area of action becomes a bit more clear with such an assessment.
In the current Internet routing system we face two distinctive scalability issues:
1. The number of prefixes*paths in the routing table and interdomain routing system (BGP)
This problem scales with the number of prefixes and available paths to a particular router/network in addition to constant churn in the reachability state. The required capacity for a router's control plane is:
capacity = prefix * path * churnfactor / second
I think it is safe, even with projected AS and IP uptake, to assume Moore's law can cope with this.
Moore's law for CPUs is kaput. Really, Moore's Law is more of an observation than a law. We need to stop fixating on Moore's law, for the love of god. It doesn't exist in a vacuum; components don't get on the curve for free. Each generation requires enormously more capital to engineer the improved Si process and innovation, which only gets paid for by increasing demand. If demand slows down then the investment won't be recovered and the cycle will stop, possibly before the physics limits, depending on the amount of demand, the amount of investment required for the next turn, etc. Also, no network I know of is on an upgrade path at a velocity where they are swapping out components in an 18-month window. Ideally, for an economically viable network, you want to be on an upgrade cycle that lags Moore's observation. Getting routers off your books is not an 18-month cycle; it is closer to 48 months or even in some cases 60 months. Then we have the issue of memory bandwidth to keep the ever-changing prefixes updated and synced. /vijay
vijay gill wrote:
Andre Oppermann wrote:
I guess it's time to have a look at the actual scalability issues we face in the Internet routing system. Maybe the area of action becomes a bit more clear with such an assessment.
In the current Internet routing system we face two distinctive scalability issues:
1. The number of prefixes*paths in the routing table and interdomain routing system (BGP)
This problem scales with the number of prefixes and available paths to a particular router/network in addition to constant churn in the reachability state. The required capacity for a router's control plane is:
capacity = prefix * path * churnfactor / second
I think it is safe, even with projected AS and IP uptake, to assume Moore's law can cope with this.
Moore's law for CPUs is kaput. Really, Moore's Law is more of an observation than a law. We need to stop fixating on Moore's law, for the love of god. It doesn't exist in a vacuum; components don't get on the curve for free. Each generation requires enormously more capital to engineer the improved Si process and innovation, which only gets paid for by increasing demand. If demand slows down then the investment won't be recovered and the cycle will stop, possibly before the physics limits, depending on the amount of demand, the amount of investment required for the next turn, etc.
Predicting the future was a tricky business ten years ago and still is today. What makes you think the wheel stops turning today? Customer access speed will not increase? No more improvements in DSL, Cable and Wireless technologies? Come on, you're kidding. Right?
Also, no network I know of is on an upgrade path at a velocity where they are swapping out components in an 18-month window. Ideally, for an economically viable network, you want to be on an upgrade cycle that lags Moore's observation. Getting routers off your books is not an 18-month cycle; it is closer to 48 months or even in some cases 60 months.
When you are buying a router today do you specify it to cope with 200k routes or more? Planning ahead is essential.
Then we have the issue of memory bandwidth to keep the ever-changing prefixes updated and synced.
Compared to link speed this is nothing, and nowhere near memory bandwidth limits. -- Andre
Andre Oppermann wrote:
vijay gill wrote:
Moore's law for CPUs is kaput. Really, Moore's Law is more of an observation than a law. We need to stop fixating on Moore's law, for the love of god. It doesn't exist in a vacuum; components don't get on the curve for free. Each generation requires enormously more capital to engineer the improved Si process and innovation, which only gets paid for by increasing demand. If demand slows down then the investment won't be recovered and the cycle will stop, possibly before the physics limits, depending on the amount of demand, the amount of investment required for the next turn, etc.
Predicting the future was a tricky business ten years ago and still is today. What makes you think the wheel stops turning today? Customer access speed will not increase? No more improvements in DSL, Cable and Wireless technologies? Come on, you're kidding. Right?
Missing the point. We can deal with increased speeds by going wider; the network topology data/control plane isn't going wider. THAT is what Moore's observation was targeted at.
Also, no network I know of is on an upgrade path at a velocity where they are swapping out components in an 18-month window. Ideally, for an economically viable network, you want to be on an upgrade cycle that lags Moore's observation. Getting routers off your books is not an 18-month cycle; it is closer to 48 months or even in some cases 60 months.
When you are buying a router today do you specify it to cope with 200k routes or more? Planning ahead is essential.
And we're paying for it. But again, that assumes the prefix/memory bandwidth churn can be accommodated by the next generation of CPUs. I am not going to throw out my router in 18 months. It's still on the books.
Then we have the issue of memory bandwidth to keep the ever-changing prefixes updated and synced.
Compared to link speed this is nothing, and nowhere near memory bandwidth limits.
Each update to and from memory takes cycles, and as the routing tables become bigger, the frequency of access to memory to keep the system in sync imposes a larger burden. This is orthogonal to link speed. /vijay
Hello All. I have seen people writing in now and again with things like this and I never thought I would be one of them. But here goes. After doing a netgear firewall upgrade which suplpied some extra information, I started noticing these odd pings to all over the place from a computer I have. I don't notce any attempted return traffic from the same Ip's. So I am at a loss as to what this could be or why. Any thoughts would be appreciated. - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:28 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:29 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:30 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - 
Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:04 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:05 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:06 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match] Its a dell computer supplied to me by my employer that I use for 2 purposes. I installed firefox for keeping an eye on our MRTG graphs and to use SKYPE as required by them. Except for a Java developers toolkit that I had to install to help with development testing once, (which I uninstalled) and the myriad of crap software that came free with the computer, that is all that has ever been installed that I am aware of. I did recently, becouse of this, install startup cop to see what might be starting up on bootups. 
(at least non-services-wise) I uninstalled a few other items that I knew I would never use and disabled those that looked odd. But still.. the pings go out about once or twice a day. I have not had a chance to turn off Skype to see if that is what is doing it. But it is in no way connected to talking via Skype. But why on earth would Skype send out these random pings all over the globe? Thanks! Nicole
Doesn't look very odd to me, it's a very specific, rotating sequence.. 204.152.43.107 130.244.175.141 202.139.21.27 202.232.175.9 Then back to .107,.141,.27,.9 ad nauseum.. Something on your laptop is trying to a. call home, b. find the best way home, c. who knows.. Kill processes until it goes away and then go.. aha! Peter Kranz pkranz@unwiredltd.com -----Original Message----- From: owner-nanog@merit.edu [mailto:owner-nanog@merit.edu] On Behalf Of Nicole Sent: Tuesday, October 18, 2005 5:02 PM To: nanog@nanog.org Subject: Really odd pings going out Hello All. I have seen people writing in now and again with things like this and I never thought I would be one of them. But here goes. After doing a netgear firewall upgrade which suplpied some extra information, I started noticing these odd pings to all over the place from a computer I have. I don't notce any attempted return traffic from the same Ip's. So I am at a loss as to what this could be or why. Any thoughts would be appreciated. - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:28 - 
ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:29 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:30 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:04 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:05 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:06 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule 
match] Its a dell computer supplied to me by my employer that I use for 2 purposes. I installed firefox for keeping an eye on our MRTG graphs and to use SKYPE as required by them. Except for a Java developers toolkit that I had to install to help with development testing once, (which I uninstalled) and the myriad of crap software that came free with the computer, that is all that has ever been installed that I am aware of. I did recently, becouse of this, install startup cop to see what might be starting up on bootups. (at least non services wize) I uninstalled a few other items that I knew I would never used and disabled those that looked odd. But still.. the pings go out about once or twice a day. I have not had a chance to turn off skype to see if that is what is doing it. But it is in no way connected to talking via skype. But why on earth would skpe send out these random pings all over the globe? Thanks! Nicole
the four IPs...
in the mcsp.com domain
swip.net domain
the machine dsmmr.shoalhaven.net.au.
in the iij.ad.jp domain
--- some process is trying to call home... part of a botnet perhaps.
--bill
On Tue, Oct 18, 2005 at 05:02:23PM -0700, Nicole wrote:
Hello All. I have seen people writing in now and again with things like this and I never thought I would be one of them. But here goes.
After doing a netgear firewall upgrade which supplied some extra information, I started noticing these odd pings to all over the place from a computer I have. I don't notice any attempted return traffic from the same IPs. So I am at a loss as to what this could be or why. Any thoughts would be appreciated.
- Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:28 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:29 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 01:10:30 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - 
Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:04 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:05 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match] Mon, 2005-10-17 16:11:06 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match]
It's a Dell computer supplied to me by my employer that I use for 2 purposes. I installed Firefox for keeping an eye on our MRTG graphs and to use Skype as required by them. Except for a Java developer's toolkit that I had to install to help with development testing once (which I uninstalled) and the myriad of crap software that came free with the computer, that is all that has ever been installed that I am aware of.
I did recently, because of this, install startup cop to see what might be starting up on bootups (at least non-services-wise). I uninstalled a few other items that I knew I would never use and disabled those that looked odd. But still.. the pings go out about once or twice a day.
I have not had a chance to turn off Skype to see if that is what is doing it. But it is in no way connected to talking via Skype. But why on earth would Skype send out these random pings all over the globe?
Thanks!
Nicole
On 10/18/05, bmanning@vacation.karoshi.com <bmanning@vacation.karoshi.com> wrote:
--- some process is trying to call home... part of a botnet perhaps.
I've found this tool to be very handy in finding out just what process is doing what. http://www.sysinternals.com/Utilities/TcpView.html btw, I don't think nanog is the most appropriate list for these types of questions, fyi.
On Tuesday, 2005-10-18 at 21:18 MST, Aaron Glenn <aaron.glenn@gmail.com> wrote:
I've found this tool to be very handy in finding out just what process is doing what.
But Tcpview doesn't show anything for icmp - which is what was happening in this case. However, if the "guilty" process is also using tcp, Tcpview will likely identify it. On the other hand, a firewall that limits outbound traffic to only "permitted" programs would probably nail the program involved (Zonealarm is one example of such a firewall).
btw, I don't think nanog is the most appropriate list for these types of questions, fyi.
Probably so. The newsgroup news:comp.security.misc might be a better place. Tony Rall
Hi, Thanks to everyone who sent helpful advice for tracking this down. I wanted to follow up and say that I tried every test imaginable and nothing was found. Finally I got a period where I could do without Skype and found that when Skype was off, no more pings were going out at random times to networks all over. So.. I have no idea why Skype is doing that. I even upgraded it (as it requested, due to a newer version being out when I logged back in) and once again the pings go out. Seems damn odd for Skype to be doing that. Nicole On 19-Oct-05 the GW commando coersion squad reported Nicole said :
Hello All. I have seen people writing in now and again with things like this and I never thought I would be one of them. But here goes.
After doing a netgear firewall upgrade which supplied some extra information, I started noticing these odd pings to all over the place from a computer I have. I don't notice any attempted return traffic from the same IPs. So I am at a loss as to what this could be or why. Any thoughts would be appreciated.
- Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.43.107,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.175.141,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.21.27,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:27 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.175.9,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:28 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:29 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 01:10:30 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.243.2,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:204.152.167.33,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:130.244.203.3,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.139.215.213,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:02 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:202.232.11.117,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:04 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:05 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match]
Mon, 2005-10-17 16:11:06 - ICMP packet - Source:192.168.1.3,[Echo Request],LAN - Destination:151.164.240.134,[Type:8],WAN [Forward] - [Outbound Default rule match]
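(As a quick way to see what the box is actually pinging, a summary of the log can help. This is a minimal sketch, not something from the original post; it assumes lines in the Netgear format shown above, and the file name is hypothetical.)

import re
from collections import Counter

# Count ICMP echo-request destinations in a Netgear-style firewall log.
dest_re = re.compile(r"\[Echo Request\].*?Destination:([0-9.]+),\[Type:8\]")

counts = Counter()
with open("netgear.log") as fh:          # hypothetical log file name
    for line in fh:
        counts.update(dest_re.findall(line))

for dst, n in counts.most_common():
    print(f"{dst}\t{n} echo requests")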
It's a Dell computer supplied to me by my employer that I use for two purposes: I installed Firefox for keeping an eye on our MRTG graphs, and I use Skype as required by them. Except for a Java developer's toolkit that I had to install once to help with development testing (which I uninstalled) and the myriad of crap software that came free with the computer, that is all that has ever been installed that I am aware of.
Because of this, I did recently install Startup Cop to see what might be starting up on boot (at least non-services-wise). I uninstalled a few other items that I knew I would never use and disabled those that looked odd. But still, the pings go out about once or twice a day.
I have not had a chance to turn off Skype to see if that is what is doing it, but the pings are in no way correlated with actually talking via Skype. Why on earth would Skype send out these random pings all over the globe?
Thanks!
Nicole
-- nmh@daemontech.com - Powered by FreeBSD
On Tue, Oct 18, 2005 at 01:41:53PM -0400, vijay gill wrote:
Moore's law for CPUs is kaput. Really, Moore's Law is more of an observation than a law. We need to stop fixating on Moore's law, for the love of god. It doesn't exist in a vacuum; components don't get on the curve for free. Each generation requires enormously more capital to engineer the improved Si process and the innovation behind it, which only gets paid for by increasing demand. If demand slows down, the investment won't be recovered and the cycle will stop, possibly before the physics limits are reached, depending on the amount of demand, the amount of investment required for the next turn, etc.
Moore's "observation" would also seem to apply only to the highest end of components that are actually "available", not to what a particular vendor ships in a particular product. Of course we have the technology available to handle 1 million BGP routes or more if we really needed to. Processing BGP is fairly easy and linear, modern CPUs are cheap and almost absurdly powerful in comparison to the task at hand (they're made to run Windows code remember :P). But if there is no reason to make a product, it doesn't get made. Comparing consumer-grade PCs which are produced in the millions or billions with one small part of a high-end router which is produced in the thousands, and which is essentially an embedded product, is a non-starter. The product that is sold does what it needs to do and no more. There is no reason to design a scalable route processor which can be easily upgraded. There is no need to ship the latest state of the art Dual Xeon 3.6GHz with support for 64GB of DRAM that will last you for your BGP needs for the next 10 years, or even to throw in 128MB of SSRAM for a few thousand bucks more. Everyone needs to sell their latest and greatest new product on a regular basis to stay in business, it is no different for router vendors. They sell you a $20,000 route processor that you could pick up from their vendor directly at Qty 1 for $1000, and the markup goes into the what you are really paying for (R&D, software dev, testing, support, sales, and the bottom line). A few years later when you need something a little faster, you can buy a new router. Besides, you probably needed to come back for the next-gen platform and interfaces anyways. Most customers are barely qualified to add RAM to their million dollar router, and I doubt the vendors see any need to change this.
Also, no network I know of is upgrading at a velocity where they are swapping out components in an 18-month window. Ideally, for an economically viable network, you want to be on an upgrade cycle that lags Moore's observation. Getting routers off your books is not an 18-month cycle; it is closer to 48 months, or in some cases even 60.
Want to venture a guess as to how many networks are still operating routers with parts at the end of that 60-month cycle (purchased in 2000), which were in turn based on 1997 technology back when they were manufactured in 1998-1999?

-- Richard A Steenbergen <ras@e-gerbil.net> http://www.e-gerbil.net/ras
GPG Key ID: 0xF8B12CBC (7535 7F59 8204 ED1F CC1C 53AF 4C41 5ECA F8B1 2CBC)
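(A rough illustration of how large that lag is -- my arithmetic, using the years in the question above and assuming an idealized 18-month doubling period.)

tech_year   = 1997   # silicon generation the platform was designed around
retire_year = 2005   # roughly where the 60-month book cycle ends
doublings = (retire_year - tech_year) * 12 / 18
print(f"{doublings:.1f} doublings, i.e. roughly {2 ** doublings:.0f}x behind the curve")
# ~5.3 doublings, on the order of a 40x gap versus currently 'available' parts.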
Andre,
capacity = prefix * path * churnfactor / second
capacity = prefixes * packets / second
I think it is safe, even with projected AS and IP uptake, to assume Moore's law can cope with this.
This one is much harder to cope with as the number of prefixes and the link speeds are rising. Thus the problem is multiplicative to quadratic.
You'll note that the number of prefixes is a factor in both of your equations. If the number of prefixes grows faster than Moore's law, then it will be very difficult to keep the left-hand side of either equation under Moore's law. That's the whole point of the discussion.

Tony
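(To make that concrete, a toy calculation; the growth rates are my assumption, not numbers from the thread. Because prefixes appear as a factor in both equations, once prefix growth outpaces Moore's law the required capacity outruns the hardware curve regardless of the other factors.)

moore_per_year  = 2 ** (12 / 18)   # ~1.59x per year from an 18-month doubling
prefix_per_year = 1.8              # hypothetical prefix growth rate

ratio = 1.0
for year in range(1, 6):
    ratio *= prefix_per_year / moore_per_year
    print(f"year {year}: required/available capacity grows to {ratio:.2f}x")
# The ratio only shrinks if prefix growth stays below ~1.59x per year.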
Tony Li wrote:
Andre,
capacity = prefix * path * churnfactor / second
capacity = prefixes * packets / second
I think it is safe, even with projected AS and IP uptake, to assume Moore's law can cope with this.
This one is much harder to cope with as the number of prefixes and the link speeds are rising. Thus the problem is multiplicative to quadratic.
You'll note that the number of prefixes is a factor in both of your equations. If the number of prefixes grows faster than Moore's law, then it will be very difficult to keep the left-hand side of either equation under Moore's law.
That's the whole point of the discussion.
Let me rephrase my statement so we aren't talking past each other.

The control plane (BGP) scales pretty much linearly with the number of prefixes (as Richard has observed too). It is unlikely that the growth in prefixes and prefix churn will exceed the increase in readily available control plane CPU power. For example, a little VIA C3-800MHz can easily handle 10 current full feeds running OpenBGPD (for which I did the internal data structure design). Guess what a $500 AMD Opteron or Intel P4 can handle. In addition, BGP lends itself relatively well to scaling on SMP, so upcoming dual- or multi-core CPUs help to keep at least pace with prefix growth. Conclusion: there is not much risk on the control plane running BGP, even with high prefix growth.

The forwarding plane, on the other hand, doesn't have the same scaling properties. It faces not one but two rising factors: the number of prefixes (after cooking) and the number of lookups per second (i.e. pps) as defined by link speed. Here a 10-fold increase in prefixes and a 10-fold increase in lookups/second may well exceed the advances in chip design and manufacturing capabilities. A 10-fold increase in prefixes means you have to search 10 times as many prefixes (however optimized that search is), and a 10-fold increase in link speed means you have only 1/10 of the time for each search you had before. There are many optimizations thinkable to solve each of these; some scale better in terms of price/performance than others. My last remark in the original mail meant that the scaling properties of longest-prefix match in hardware are worse than those of perfect matching.

My personal conclusion is that we may have to move DFZ routing to some sort of fixed-size (32-bit, for example) identifier on which the forwarding plane can do perfect matching. This is not unlike the rationale behind MPLS. However, here we need something that administratively and politically works inter-AS like prefix+BGP does today. Maybe the new 32-bit AS number may serve as such a perfect-match routing identifier. That would allow up to 4 billion possible entries in the DFZ routing system, or about 16k at today's size of the DFZ. One AS == one routing policy.

-- Andre
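(A toy illustration of the longest-match vs. perfect-match point -- my sketch, not code from the thread. An IPv4 longest-prefix match has to consider up to 33 candidate prefix lengths per lookup, 129 for IPv6, while a fixed-size routing identifier needs exactly one exact-match lookup.)

import ipaddress

fib = {  # toy FIB: prefix -> next hop
    ipaddress.ip_network("10.0.0.0/8"):  "nexthop-A",
    ipaddress.ip_network("10.1.0.0/16"): "nexthop-B",
}

def longest_prefix_match(addr: str):
    ip = ipaddress.ip_address(addr)
    for plen in range(32, -1, -1):        # probe the longest prefix first
        net = ipaddress.ip_network(f"{ip}/{plen}", strict=False)
        if net in fib:
            return fib[net]
    return None

# Hypothetical perfect-match table keyed on a fixed 32-bit routing identifier
# (e.g. a 4-byte AS number, as suggested above).
id_fib = {64512: "nexthop-A", 64513: "nexthop-B"}

def perfect_match(routing_id: int):
    return id_fib.get(routing_id)         # a single exact-match lookup

print(longest_prefix_match("10.1.2.3"))   # nexthop-B, after probing lengths 32..16
print(perfect_match(64513))               # nexthop-B, in one lookup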
On Wed, 19 Oct 2005, Andre Oppermann wrote:
the rationale behind MPLS. However here we need something that administratively and politically works inter-AS like prefix+BGP today. Maybe the new 32bit AS number may serve as such a perfect match routing identifier.
Interesting idea.
That'd make up to 4 billion possible entries in the DFZ routing system. Or about 16k at todays size of the DFZ. One AS == one routing policy.
That means though that we still need a way for people without an ASN to multi-home, because clearly the number of ASNs is quite restricted compared to the number of IPv6 prefixes. So:

- we need to change that 4-byte AS draft to (4+X)-byte ASNs sharpish, X should probably be 4 (good luck with that), and change all IPv6 stacks in routers (and hosts, but that's easier); OR

- we also need $AREA allocated IPs (which obviously operators would love to work on implementing); OR

- we still will have some end-host "probe with every source address" and "change every stack" solution, one which adds a sort of supra-net to the internet which is only visible to end-hosts with this stack.

Seems to me at least, pre-coffee.

regards,
-- Paul Jakma paul@clubi.ie paul@jakma.org Key ID: 64A2FF6A
Fortune: ether leak
participants (24)
- Aaron Glenn
- Alexei Roudnev
- Andre Oppermann
- Blaine Christian
- bmanning@vacation.karoshi.com
- Christopher L. Morrow
- Daniel Senie
- Elmar K. Bins
- James
- JORDI PALET MARTINEZ
- Lincoln Dale
- Michael.Dillon@btradianz.com
- Nicole
- Patrick W. Gilmore
- Paul Jakma
- Peter Kranz
- Richard A Steenbergen
- Rubens Kuhl Jr.
- sthaug@nethelp.no
- Tony Li
- Tony Rall
- Valdis.Kletnieks@vt.edu
- vijay gill
- william(at)elan.net