Noisy prefixes in BGP

Hi,

We are seeing in RIS data a constant flow of update messages from a few ASes; here is the list of the top prefixes:

┌─────────────────────┬────────────┬──────────────┐
│ prefix              │ origin_asn │ num_announce │
│ varchar             │ varchar    │ int64        │
├─────────────────────┼────────────┼──────────────┤
│ 169.145.140.0/23    │ 6979       │ 843376       │
│ 2a03:eec0:3212::/48 │ 22616      │ 435608       │
│ 172.224.198.0/24    │ 36183      │ 380117       │
│ 172.226.208.0/24    │ 36183      │ 374040       │
│ 172.226.148.0/24    │ 36183      │ 367083       │
│ 104.206.88.0/22     │ 62904      │ 316325       │
│ 2806:202::/32       │ 28458      │ 313775       │
│ 2a03:74c0::/32      │ 203304     │ 275372       │
│ 2a02:26f7:eccc::/48 │ 36183      │ 238053       │
│ 2a02:26f7:f90c::/48 │ 36183      │ 237923       │
│ 2a02:26f7:ec8c::/48 │ 36183      │ 237448       │
│ 2a02:26f7:ee0c::/48 │ 36183      │ 237287       │
│ 2a02:26f7:f00c::/48 │ 36183      │ 236646       │
│ 2a02:26f7:ec10::/48 │ 36183      │ 234305       │
│ 2a02:26f7:c44c::/48 │ 36183      │ 234204       │
│ 2a02:26f7:d650::/48 │ 36183      │ 233471       │
│ 2a02:26f7:d258::/48 │ 36183      │ 233192       │
│ 2a02:26f7:c54c::/48 │ 36183      │ 232795       │
│ 2a02:26f7:e08c::/48 │ 36183      │ 232303       │
└─────────────────────┴────────────┴──────────────┘

You can also see live data here: https://www.ihr.live/en/bgp-monitor?prefix=169.145.140.0/23&maxHops=6&rrc=rrc25.ripe.net

For the first prefix this has been going on since at least last June. Does anyone know what's happening?

Thanks,
Romain Fontugne
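For anyone who wants to reproduce this kind of count themselves, here is a minimal sketch using pybgpstream; the collector, time window and top-N cutoff below are illustrative assumptions, not the exact query behind the table above.

```python
# Minimal sketch: count BGP announcements per (prefix, origin ASN) from RIPE RIS.
# Assumptions: pybgpstream is installed; rrc25 and the short time window are
# illustrative choices, not the exact query used for the table above.
from collections import Counter
import pybgpstream

stream = pybgpstream.BGPStream(
    from_time="2025-02-01 00:00:00",
    until_time="2025-02-01 06:00:00",
    project="ris",
    collectors=["rrc25"],
    record_type="updates",
)

counts = Counter()
for elem in stream:
    if elem.type != "A":                      # announcements only, skip withdrawals
        continue
    prefix = elem.fields["prefix"]
    as_path = elem.fields.get("as-path", "")
    origin = as_path.split()[-1] if as_path else "unknown"
    counts[(prefix, origin)] += 1

# Print the 20 noisiest (prefix, origin) pairs, similar to the table above.
for (prefix, origin), n in counts.most_common(20):
    print(f"{prefix:<22} {origin:<10} {n}")
```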

Hi Romain
We are seeing in RIS data a constant flow of update messages from a few ASes, here is the list of the top prefixes:
┌─────────────────────┬────────────┬──────────────┐
│ prefix              │ origin_asn │ num_announce │
│ varchar             │ varchar    │ int64        │
├─────────────────────┼────────────┼──────────────┤
│ 169.145.140.0/23    │ 6979       │ 843376       │
│ 2a03:eec0:3212::/48 │ 22616      │ 435608       │
│ 172.224.198.0/24    │ 36183      │ 380117       │
│ 172.226.208.0/24    │ 36183      │ 374040       │
│ 172.226.148.0/24    │ 36183      │ 367083       │
You might also want to check out these two update reports:

https://www.potaroo.net/bgpupds/reports/bgpupd.html
and
https://www.potaroo.net/bgpupds/reports/v6-bgpupd.html

These reports have been running for a couple of decades now. They operate over a rolling 14-day window.

Over the last 14 days in IPv4 the noisiest 50 prefixes generated 5% of the total update load, and the 50 noisiest Origin ASes generated 24% of the total 14-day BGP update load.

The same has been going on in IPv6. The 50 noisiest prefixes (and a whole bunch of them originate in Akamai) generate a whopping 34% of the total IPv6 update load, and the noisiest 50 Origin ASes generate an even more impressive 74% of the total IPv6 update load. Akamai's AS 36183 generated 27% of the total IPv6 update load over the past 14 days.

(There are about 40,320 30-second MRAI intervals in a 14-day period, so when a prefix is being updated 33,000 times in 14 days it is basically being updated as fast as many BGP implementations will let you!)

Geoff
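Spelling out the arithmetic behind that last point (assuming a 30-second MRAI and a 14-day window):

```python
# Number of 30-second MRAI intervals in a 14-day window, and how close
# 33,000 updates for one prefix comes to "one update per interval".
window_seconds = 14 * 24 * 3600                # 1,209,600 s
mrai_seconds = 30
intervals = window_seconds // mrai_seconds     # 40,320
updates = 33_000
print(intervals, round(updates / intervals, 2))  # 40320 0.82
```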

Hi Geoff,
The same has been going on in IPv6. The 50 noisiest prefixes (and a whole bunch of them originate in Akamai) generate a whopping 34% of the total IPv6 update load, and the noisiest 50 Origin ASes generate an even more impressive 74% of the total IPv6 update load. Akamai's AS 36183 generated 27% of the total IPv6 update load over the past 14 days.
Thanks that confirms what we see. If there is someone here from AS36183 I guess it is something worth looking at.

Romain

Hi Romain,

I have been looking at prefixes with large numbers of updates for a few years now. As Geoff pointed out, this is a long running problem and it's not one that is going to (ever?) go away.

I have auto-generated daily reports of general pollution seen in the DFZ from the previous day, which can be found here: https://github.com/DFZ-Name-and-Shame/dnas_stats [1].

Geoff pointed out "when a prefix is being updated 33,000 times in 14 days it is basically being updated as fast as many BGP implementations will let you"; however, there are peers generating millions of updates *per day* for the same prefix! There are multiple issues here which need to be unpacked to fully understand what's going on...

* Sometimes BGP updates a prefix as fast as BGP allows (this is what Geoff has pointed out). This could be for a range of reasons, like a flapping link or a redistribution issue.

* Sometimes there is a software bug in BGP which re-transmits the update as fast as TCP allows. Here is an example prefix in a daily report, which was present in 4M updates from a single peer of a single route collector: https://github.com/DFZ-Name-and-Shame/dnas_stats/blob/main/2023/03/15/202303... The ASN a few lines below has almost exactly the same number of BGP advertisements for that day: AS394119 (also that name, EXPERIMENTAL-COMPUTING-FACILITY, feels like a bit of a smoking gun!). Using a looking glass we can confirm that 394119 is the origin of that prefix: https://stat.ripe.net/widget/looking-glass#w.resource=2602:fe10:ff4::/48 Here is a screenshot from RIPE Stat at the same time, where they are recording nearly 30M updates for the same prefix per day: https://null.53bits.co.uk/uploads/images/networking/internetworking/ripe-pre... The difference is that the 30M number is across all RIPE RIS collectors; I have tried to de-dupe in my daily report and just choose the highest value from a single collector. ~4M updates per day is ~46 updates per second. So this is a BGP speaker stuck in an infinite loop, sending an update as fast as it can and, for some reason, not registering in its internal state engine that the update has been sent. I've reached out to a few ASNs who've shown up in my daily reports for excessive announcements, and what I have seen is that sometimes it's the ASN peering with the RIS collector, and sometimes it's an ASN the route collector peer is peering with. In multiple cases, they have simply bounced their BGP session with their peer or with the collector, and the issue has gone away.

* Sometimes a software bug causes the state of a route to falsely change. As an example of this, there was a bug in Cisco's IOS-XR: if you were running soft reconfiguration inbound and RPKI, I think any RPKI state change (to any prefix) was causing a route refresh. Or something like that. It's been a couple of years, but you needed this specific combination of features, and IOS-XR was just churning out updates like its life depended on it. I reached out to an ASN who showed up in my daily report, they confirmed they had this bug, and eventually they fixed it.

* These problems aren't DFZ wide. Peer A might be sending a bajillion updates to peer B, but peer B sees there is no change in the route and correctly doesn't forward the update onwards to its peers / public collectors. So this is probably happening a lot more than we see via RIS or RouteViews; only some parts of the DFZ will be receiving the gratuitous updates/withdraws. I recall there was a conversation either here on NANOG, or maybe it was at the IETF, within the last few years, about different NOSes that were / were not correctly identifying route updates received with no changes to the existing RIB entry, and [not] forwarding the update onwards. I'm not sure what came of this.

* Some people aren't monitoring and alerting based on CPU usage. There are plenty of old rust buckets connected to the DFZ whose CPUs will be on fire as a result of a buggy peer, and the operator is simply unaware. Some people are running cutting edge devices with 12 cores at 3 GHz and 64 GB of RAM, for which all this churn is no problem, so even if their CPU usage is monitored, it will be one core at 100% but the rest nearly idle, and crappy monitoring software will aggregate this and report the device as having <10% usage, and operators think everything is fine.

* There are no knobs in existing BGP implementations to detect and limit this behaviour in any way.

If you end up contacting the operators of the ASNs in your original email, and getting this problem fixed, I'd be interested to know what the cause was in those cases. I've all but given up contacting operators that show up in my daily reports. It's an endless endeavour, and some operators simply don't respond to my multiple emails (also, I have a day job and a personal life, and I also like sleeping and eating from time to time).

With kind regards,
James.

[1] It stopped working recently, so it's now "catching-up", which is why the data is a few days behind.
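One way to spot the "single peer, single prefix, millions of updates a day" case James describes is to count updates per (collector peer, prefix) pair and flag anything whose sustained rate approaches the ~46 updates/second he mentions. A rough sketch with pybgpstream; the collector, window and threshold are illustrative assumptions, not the method behind James's reports.

```python
# Rough sketch: flag (peer, prefix) pairs with excessive update rates at a
# single route collector. Collector name, one-day window and the 1 update/s
# threshold are assumptions chosen only for illustration.
from collections import Counter
import pybgpstream

FROM, UNTIL = "2025-02-01 00:00:00", "2025-02-02 00:00:00"   # one day
WINDOW_SECONDS = 24 * 3600
THRESHOLD_PER_SECOND = 1.0     # a prefix churning >1 update/s all day is suspect

stream = pybgpstream.BGPStream(
    from_time=FROM,
    until_time=UNTIL,
    project="routeviews",
    collectors=["route-views2"],
    record_type="updates",
)

per_peer_prefix = Counter()
for elem in stream:
    if elem.type in ("A", "W"):    # both announcements and withdrawals count as churn
        per_peer_prefix[(elem.peer_address, elem.fields["prefix"])] += 1

for (peer, prefix), n in per_peer_prefix.most_common(50):
    rate = n / WINDOW_SECONDS
    if rate >= THRESHOLD_PER_SECOND:
        print(f"{peer} {prefix}: {n} updates (~{rate:.1f}/s)")
```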

Thanks James, great tool, I have bookmarked that. It has amazing examples of how absurd announcements can be sometimes.

Romain

IMHO: it is better to prepare infrastructure for a much higher rate of prefix announcements. Accept this incident as training. Test yourself: are you ready for the future?

There were a few publications recently (I intentionally do not mention companies; search SIGCOMM) about "excellent BGP traffic engineering tools" (sometimes AI was even mentioned) that split prefixes and announce/withdraw them very often to achieve "traffic load balancing on ingress to the AS". I vaguely remember that one company mentioned 2k peering connections under robot management. They claim something like 95% link load balancing (proportional to each link's weight, of course). I do not remember anybody claiming $$$ savings (perhaps I was not reading carefully), but it is evident that pushing traffic through the cheapest peering is a very predictable goal.

Hence, it is easy to predict that BGP prefix announcements will grow exponentially. Every big company would like better use of its peering on ingress. Such behavior is very useful for the particular company. AI modeling helps a lot in this case; rigid policies are much more difficult to develop.

Eduard

hi james
I recall there was a conversation either here on NANOG or maybe it was at the IETF, within the last few years, about different NOSes that were / were not correctly identifying route updates received with no changes to the existing RIB entry, and [not] forwarding the update onwards. I’m not sure what came of this.
RFC 9324 and https://archive.psg.com/220214.nanog-rov-no-rr.pdf

randy

Hi James,

We see the same incredible noise from a few peers we have at RouteViews. So much so that they put quite a stress on our backend infrastructure (not only the collector itself, but the syncing of these updates back to our archive, and storage). And, longer term, do researchers who use RouteViews data want us to keep this noise for ever and ever in our archive, as it consumes terabytes of compressed disk space?

And yes, like you, we reach out to the providers of the incredible noise, and they are usually unresponsive (PeeringDB contact entries are for what exactly, I wonder). I feel your pain!! So we have to take the (for us) drastic step of shutting down the peer; we have to keep RouteViews useful and usable for the community, as we have been trying to do for (almost) the last 30 years. Of course, some folks want to study the incredible noise too, but ultimately none of it really helps keep the Internet infrastructure stable.

I guess all we can do is keep highlighting the problem (I highlight Geoff's BGP Update Report in almost every BGP Best Practice training I run here in AsiaPac, for example) - but how to make noisy peers go away, longer term...? :-(

philip

Philip, I've kept meaning to ask you for decades, but did you ever find the root cause of all this noise? Is it due to a bug in some version of a router OS, or is it being caused by a malfunctioning script at the provider that triggers BGP updates?

Hi Suresh,

Could be all of what you suggest... I've really not had the time to go delving into these (plus it would need the willing cooperation of the AS in question, which might have happened 25 years ago when some of us were chasing down meaningless deaggregation). These massive update streams only affect certain paths and, in RouteViews' case, certain peers in certain locations (in some locations they send us vast numbers of updates, in others they are perfectly normal).

I could speculate: infrastructure issues, or internal routers failing to converge, or BGP attribute changes, or a script gone mad, or broken BGP implementations... Not sure if it is those automatic load balancers tbh; they don't show this vast level of updates, but just generate a steady background noise (you can see the obvious candidates in Geoff's BGP Update Report).

philip

On 2025-02-09 07:43, James Bensley wrote:
* There are no knobs in existing BGP implementations to detect and limit this behaviour in anyway.
100% agreed. Looked into this a couple of weeks ago on our $VENDOR_C gear, and we saw the prefixes Romain mentioned as well as many others in Geoff's report.

I coded a CLI poller / screen-scraper to compare the different versions of the BGP table, in accordance with this tech article [1]. Seems to be as good as one can get with vanilla XR.

When looking at a specific prefix in the table, there is also a timestamp for the last update. It's helpful, but doesn't give a sense of how frequently the prefix is updating.

Would be great to have logging / alerting if a locally originated prefix is updated more than once a second. It should also be easier to see high update load for those getting the updates downstream. A top-N prefix frequency distribution based on short time windows (1-15 min) would have saved me work.

-Brian

[1] https://community.cisco.com/t5/service-providers-knowledge-base/bgp-churn-id...
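Something like that top-N, short-window view can be approximated off-box from update timestamps, without vendor support. A sketch follows; the window size and the data source (a collector feed, BMP, or parsed logs) are assumptions.

```python
# Sketch: top-N prefix update counts per short time window (e.g. 5 minutes),
# fed with (unix_timestamp, prefix) tuples from whatever source is handy.
# The 300-second window and the synthetic sample data are assumptions.
from collections import Counter, defaultdict
from typing import Iterable, Tuple

def top_n_per_window(updates: Iterable[Tuple[float, str]],
                     window_seconds: int = 300,
                     n: int = 10):
    """Bucket updates into fixed windows and yield (window_start, top-N counts)."""
    windows = defaultdict(Counter)
    for ts, prefix in updates:
        window_start = int(ts) - int(ts) % window_seconds
        windows[window_start][prefix] += 1
    for window_start in sorted(windows):
        yield window_start, windows[window_start].most_common(n)

# Usage example with synthetic updates: one prefix updating every second,
# another every ten seconds, over a ten-minute span.
sample = [(1739000000 + i, "192.0.2.0/24") for i in range(600)] + \
         [(1739000000 + i * 10, "198.51.100.0/24") for i in range(60)]
for start, top in top_n_per_window(sample, window_seconds=300, n=5):
    print(start, top)
```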

Hello,

We are looking into this issue.

Thank you,
Aaron Block

---
Aaron Block
Akamai Technologies
ablock@akamai.com
GPG KeyID: 0xD098B69F
Senior Principal Network Engineer
Voice: +1-617-444-2892
as20940

Thanks Aaron!

Romain

I'm escalating this internally. I hope to have this resolved asap.

- Jared

We did a bunch of internal research and found a mismatch in behavior which triggered this, which was unexpected. We did some mitigation work immediately, and the bigger fix should be complete very soon. It takes some time to update thousands of devices.

I also raised the issue again with my management about the prioritization of some monitoring project, and helped the teams that needed to do the fix with an internal dashboard so they could more immediately see the issue internally.

As is usual with this, the bigger fix is always larger than one thinks, and there's the "doesn't run on a server under someone's desk" problem - which nowadays really is a VM in some location, but is still tied to that person, or could be an orphaned project/code... but that's the story of many systems :-)

Please reach out if you see any bad routing behavior.

- Jared

--
Jared Mauch | pgp key available via finger from jared@puck.nether.net
clue++;     | http://puck.nether.net/~jared/
My statements are only mine.

Hi Jared,

Thanks, indeed I now see that the first AS36183 prefix in the table I sent (172.224.198.0/24) is very quiet :)
I also raised the issue again with my management about the prioritization of some monitoring project and helped the teams that needed to do the fix with an internal dashboard so they could more immediately see the issue internally.
Every summer we get a handful of students to develop open source monitoring/measurement tools; maybe that could be a good topic for them. I'm not sure exactly what metric would be the most useful for operators, but a public dashboard with the number of updates obtained from RouteViews/RIS data (and results similar to the ones James and Geoff shared) should be doable. Happy to discuss that if there is any interest here in such a tool.

Romain

Hello,

I've been following this thread, and I find it quite interesting. I'm curious whether there is an official definition of "noisy prefix" (or perhaps "noisy AS").

Thank you,

Hurricane Electric recently (a few months ago) started measuring repeated announcements and repeated withdrawals, in addition to flapping prefixes.

https://bgp.he.net/report/netstats#_flap

Stats are organized by prefix, ASN, and peer IP for flapping prefixes, repeated announcements, and repeated withdrawals.

Mike.
participants (12):
- Alejandro Acosta
- Block, Aaron
- Brian Knight
- Geoff Huston
- James Bensley
- Jared Mauch
- Mike Leber
- Philip Smith
- Randy Bush
- Romain Fontugne
- Suresh Ramasubramanian
- Vasilenko Eduard