In one case, when we were having an issue with a SIP trunk, we re-numbered our end to another IP in the same subnet. Same path from A to Z, but the packet loss mysteriously disappeared using the new IP. It sure seems like they are throttling somewhere. On Thu, Jul 30, 2015 at 9:15 PM, Matt Hoppes <mhoppes@indigowireless.com> wrote:
No. But I've seen Level3 just have really bad packet loss.
in this case RTP - are getting lost in the Level3 network somewhere. We've got a ticket open with Level3, but haven't gotten far yet. Has anyone else seen Level3 or other carriers rate-limiting UDP and breaking these legitimate services?
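One way to quantify that kind of loss from a capture is to walk the RTP sequence numbers and compare packets received against packets expected. A minimal sketch, assuming the 16-bit sequence numbers have been exported from Wireshark into a list (the function name and approach are mine, not something from this thread):

```python
# Hypothetical helper: estimate RTP packet loss from the 16-bit
# sequence numbers seen in a capture. Handles wraparound at 65536;
# assumes in-order delivery with no duplicates.
def rtp_loss(seqs):
    if len(seqs) < 2:
        return 0.0
    expected = 1  # count the first packet
    prev = seqs[0]
    for s in seqs[1:]:
        expected += (s - prev) % 65536  # gap size, wrap-safe
        prev = s
    return max(0.0, 1.0 - len(seqs) / expected)

loss = rtp_loss([100, 101, 103, 104])  # seq 102 missing: 1 of 5 expected
```

Comparing the loss rate computed this way on captures taken at both ends of the Level3 path would show whether the drops are concentrated in one direction.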
On Thu, Jul 30, 2015 at 3:45 PM, John Kristoff <jtk@cymru.com> wrote:
On Mon, 27 Jul 2015 19:42:46 +0530 Glen Kent <glen.kent@gmail.com> wrote:
Is it true that UDP is often subjected to stiffer rate limits than TCP?
Yes, although I'm not sure how widespread this is in most, or even many, networks. Probably not very widely deployed today, but restrictions and limitations only seem to expand rather than recede.
I've done this, and not just for UDP, in a university environment. I implemented this at the time the Slammer worm came out, on all the ingress interfaces of user-facing subnets. This was meant as a more general solution to "capacity collapse" rather than strictly as a security measure, because we were also struggling with capacity-filling apps like Napster at the time, but Slammer was the tipping point. To summarize what we did for aggregate rates from host subnets (these were generally 100 Mb/s IPv4 /24-/25 LANs):
ICMP:  2 Mb/s
UDP:   10 Mb/s
MCAST: 10 Mb/s (separate UDP group)
IGMP:  2 Mb/s
IPSEC: 10 Mb/s (ESP - can't ensure flow control of crypto traffic)
GRE:   10 Mb/s
Other: 10 Mb/s for everything else except TCP
If traffic was staying local within the campus network, limits did not apply. There were no limits for TCP traffic. We generally did not apply limits to well defined and generally well managed server subnets. We were aware that certain measurement tools might produce misleading results, a trade-off we were willing to accept.
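In software, that scheme amounts to a token bucket per protocol class, with TCP exempt. A minimal sketch, purely illustrative (the original was done in router configuration, not code; the class names, burst size, and classifier keys here are my assumptions):

```python
import time

# Rates in bits/sec, taken from the limits described above. TCP had no limit.
RATES = {"icmp": 2e6, "udp": 10e6, "mcast": 10e6, "igmp": 2e6,
         "ipsec": 10e6, "gre": 10e6, "other": 10e6}

class Bucket:
    def __init__(self, rate_bps):
        self.rate = rate_bps
        self.burst = rate_bps / 10       # assume ~100 ms of burst tolerance
        self.tokens = self.burst
        self.last = time.monotonic()

    def allow(self, size_bytes):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        bits = size_bytes * 8
        if self.tokens >= bits:
            self.tokens -= bits
            return True
        return False                     # over the aggregate limit: drop

buckets = {proto: Bucket(r) for proto, r in RATES.items()}

def admit(proto, size_bytes):
    if proto == "tcp":                   # TCP traffic was never limited
        return True
    return buckets.get(proto, buckets["other"]).allow(size_bytes)
```

A sustained UDP flood above 10 Mb/s gets dropped once its bucket drains, while TCP and traffic under the limits pass untouched.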
As far as I could tell, the limits generally worked well and helped minimize Slammer and more general problems. If ISPs could implement a similar mechanism, I think this could still be a reasonable approach today, perhaps more necessary than ever. But a big part of the problem is that the networks where you'd really want to see this sort of thing implemented won't do it.
Is there a reason why this is so often done? Is it because UDP is stateless and any script kiddie could launch a DoS attack with a UDP stream?
State, some form of sender verification, and the fact that UDP and most other commonly used protocols besides TCP do not generally react to implicit congestion signals (usually drops).
Given the state of affairs these days, how difficult is it going to be for somebody to launch a DoS attack with some other protocol?
There have been ICMP-based attacks, and there are others, at least in theory if not common in practice, such as IGMP-based attacks. There have been numerous DoS (single D) attacks on TCP-based services, precisely because of weaknesses or difficulties in managing unexpected TCP session behavior. The potential sending capacity of even a small set of hosts from around the globe, whether UDP, TCP or another protocol, could easily overwhelm many points of aggregation. All it takes is for an attacker to coerce a sufficient subset of hosts into sending the packets.
John
On Jul 30, 2015, at 22:12, Jason Baugher <jason@thebaughers.com> wrote:
To bring this discussion to specifics, we've been fighting an issue where our customers are experiencing poor audio quality on SIP calls. The only carrier between our customers and the hosted VoIP provider is Level3. From multiple Wireshark captures, it appears that a certain percentage of UDP packets