How to have more than 65k concurrent connections open?
Hi, I am somewhat new to networking. I am interested in running a BitTorrent tracker. I ran one for a bit, and my one Linux box running Opentracker gets overloaded. My connection is good, and most of it isn't being used. Just a lot of people connect and use up all the 65k "free connections". I tried messing with the sysctls, but it didn't help much (and just degraded the connection quality for everyone). It is not a malicious attack either, as there are only a few connections per IP and they are sending proper BitTorrent tracker requests... So what can I do? How can I have more than 65k concurrent connections open on standard GNU/Linux? Thanks for any ideas and suggestions. -John
Jorge Amodio (jmamodio) writes:
you have only 16 bits for port numbers.
65k port numbers != number of connections. The number of open connections (if we're talking TCP) is limited by the maximum number of file descriptors in the kernel (fs.file-max). See also: http://www.network-builders.com/maximum-simultaneous-network-connections-t56... You could have hundreds of thousands of connections to the same (destination IP, destination port). In practice, there are other limitations: http://www.kegel.com/c10k.html is good reading, even though it is a few years old.
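For reference, a minimal sketch of how to check that system-wide limit (Python 3 on Linux; reads the standard procfs path for fs.file-max):

    #!/usr/bin/env python3
    # The system-wide ceiling on open file handles -- the real cap on
    # concurrent TCP connections -- is fs.file-max, not the port space.
    with open("/proc/sys/fs/file-max") as f:
        print("fs.file-max:", f.read().strip())
    # Raising it (as root): sysctl -w fs.file-max=1000000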
On Thu, 14 Oct 2010 11:32:21 -0500 Jorge Amodio <jmamodio@gmail.com> wrote:
you have only 16 bits for port numbers.
Hint: That gives you 65K connections *per interface*. You can listen on more than one interface. This is probably off topic for this list though. The OP needs to find a network *programming* mailing list or forum.
--
D'Arcy J.M. Cain <darcy@druid.net>         |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.
This has nothing to do with ports. As others have said, think of a web server: httpd listens on TCP 80 (maybe 443 too) and all the Facebookers on Earth hit that port. Could be hundreds of thousands, and only one port. Available memory and open files will be the limiting factor as to how many established connections you can maintain with one host, provided there are no external limitations such as port speed. On Oct 14, 2010, at 12:42 PM, D'Arcy J.M. Cain wrote:
Hint: That gives you 65K connections *per interface*. You can listen on more than one interface.
And do not forget ulimit and the select() limit on the maximum number of descriptors it can watch (FD_SETSIZE); both can be tuned.
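To illustrate Greg's and Ingo's points together: one listening port can carry any number of established connections, and using epoll instead of select() sidesteps the FD_SETSIZE ceiling entirely. A minimal sketch, not production code (Python 3 on Linux; port 8080 is an arbitrary example):

    #!/usr/bin/env python3
    # One listening socket, one port, many concurrent connections.
    # epoll has no FD_SETSIZE-style ceiling; memory and the nofile
    # limit are the practical caps.
    import select
    import socket

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 8080))  # example port
    listener.listen(1024)             # backlog, not a connection cap
    listener.setblocking(False)

    epoll = select.epoll()
    epoll.register(listener.fileno(), select.EPOLLIN)
    conns = {}

    while True:
        for fd, events in epoll.poll():
            if fd == listener.fileno():
                conn, addr = listener.accept()
                conn.setblocking(False)
                conns[conn.fileno()] = conn
                epoll.register(conn.fileno(), select.EPOLLIN)
            else:
                data = conns[fd].recv(4096)
                if not data:  # peer closed the connection
                    epoll.unregister(fd)
                    conns.pop(fd).close()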
On Thu, 14 Oct 2010 12:54:05 -0400 Greg Whynott <Greg.Whynott@oicr.on.ca> wrote:
This has nothing to do with ports. As others have said, think of a web server: httpd listens on TCP 80 (maybe 443 too) and all the Facebookers on Earth hit that port. Could be hundreds of thousands, and only one port. Available memory and open files will be the limiting factor as to how many established connections you can maintain with one host, provided there are no external limitations such as port speed.
You are correct. Brain fart here. I actually had to pull Stevens off the shelf for a quick refresher. Of course, every TCP connection is distinct but includes only one port on the server. The five-tuple that defines the connection includes the remote host (client) and port, which is always unique at any one time. Other than local resource limits, the total number of combinations is technically 256**6, i.e. every IPv4 address times the number of ports (2^32 * 2^16 = 2^48). That's not even including IPv6. Still off-topic here though. The OP still needs to find the correct group to figure out his real problem.
--
D'Arcy J.M. Cain <darcy@druid.net>         |  Democracy is three wolves
http://www.druid.net/darcy/                |  and a sheep voting on
+1 416 425 1212     (DoD#0082)    (eNTP)   |  what's for dinner.
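The five-tuple point is easy to see on a live box: two connections to the same server ip:port differ only in the client's port. A self-contained sketch (Python 3, loopback only, kernel-assigned ports):

    #!/usr/bin/env python3
    # Two connections to the SAME server ip:port are distinct
    # because the client end of the tuple differs.
    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0: kernel picks a free port
    srv.listen(2)
    port = srv.getsockname()[1]

    c1 = socket.create_connection(("127.0.0.1", port))
    c2 = socket.create_connection(("127.0.0.1", port))
    a1, _ = srv.accept()
    a2, _ = srv.accept()

    # Same local endpoint, different remote (client) ports:
    print("conn 1:", a1.getsockname(), "<-", a1.getpeername())
    print("conn 2:", a2.getsockname(), "<-", a2.getpeername())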
On Thu, 14 Oct 2010, johndole@hush.ai wrote:
So what can I do? How can I have more than 65k concurrent connections open on standard GNU/Linux?
This is not a networking (= moving IP packets) problem; this is a Linux problem. I'm sure it can be done, but nanog is not the place to look for it.
--
Mikael Abrahamsson    email: swmike@swm.pp.se
An incoming connection chews up a file descriptor but does not require an ephemeral port. You can trivially have more than 65k incoming connections on a Linux box, but you've only got 64511 ports per IP on the box to use for outgoing connections (a sketch of the ephemeral range in practice follows the quoted message below). I've seen boxes supporting more than a million connections with tuning in the course of normal operation. On 10/14/10 9:03 AM, johndole@hush.ai wrote:
Hi,
I am somewhat new to networking. I am interested in running a BitTorrent tracker. I ran one for a bit, and my one Linux box running Opentracker gets overloaded. My connection is good, and most of it isn't being used. Just a lot of people connect and use up all the 65k "free connections". I tried messing with the sysctls, but it didn't help much (and just degraded the connection quality for everyone). It is not a malicious attack either, as there are only a few connections per IP and they are sending proper BitTorrent tracker requests...
So what can I do? How can I have more than 65k concurrent connections open on standard GNU/Linux?
Thanks for any ideas and suggestions.
-John
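A note on Joel's 64511 figure: it assumes every non-reserved port (1024-65535) is available, but by default the kernel draws source ports for outgoing connections from a narrower ephemeral range. A sketch to check and widen it (Python 3 on Linux; reads the standard sysctl path):

    #!/usr/bin/env python3
    # Source ports for outgoing connections come from
    # net.ipv4.ip_local_port_range, usually narrower than 1024-65535.
    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())
    print("ephemeral range: %d-%d (%d ports per source IP)"
          % (low, high, high - low + 1))
    # Widening it (as root):
    #   sysctl -w net.ipv4.ip_local_port_range="1024 65535"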
On 2010-10-14 12:53, Joel Jaeggli wrote:
you've only got 64511 ports per IP on the box to use for outgoing connections.
As long as you're not connecting to the same destination IP/port pair, the same source IP/port pair can be reused. So even for outgoing connections there is virtually no limit.
Simon
--
NAT64/DNS64 open-source --> http://ecdysis.viagenie.ca
STUN/TURN server        --> http://numb.viagenie.ca
vCard 4.0               --> http://www.vcarddav.org
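Simon's observation is the basis of the "bind before connect" trick: with SO_REUSEADDR set on both sockets before bind(), Linux will generally let the same source ip:port carry two outgoing connections as long as the destinations differ. A loopback sketch under that assumption (Python 3; Linux semantics):

    #!/usr/bin/env python3
    # Reuse one source ip:port for two outgoing TCP connections.
    # Assumes Linux: both sockets set SO_REUSEADDR before bind(),
    # and the (dest ip, dest port) pairs differ.
    import socket

    def listener():
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("127.0.0.1", 0))
        s.listen(1)
        return s

    dst1, dst2 = listener(), listener()  # two distinct destinations

    c1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c1.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    c1.bind(("127.0.0.1", 0))            # kernel picks a source port
    src = c1.getsockname()
    c1.connect(dst1.getsockname())

    c2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    c2.bind(src)                         # same source ip:port again
    c2.connect(dst2.getsockname())

    print("conn 1:", c1.getsockname(), "->", c1.getpeername())
    print("conn 2:", c2.getsockname(), "->", c2.getpeername())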
On 2010-10-14 12:53, Joel Jaeggli wrote:
you've only got 64511 ports per IP on the box to use for outgoing connections.
As long as you're not connecting to the same destination IP/port pair, the same source IP/port pair can be reused. So even for outgoing connections there is virtually no limit.
I suspect it has more to do with NAT connection tracking on his DSL router. Wayne
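If Wayne's guess is right, the ceiling is in the NAT device's connection-tracking table, not on the server. On a Linux-based router the table can be inspected like this (a sketch; the procfs paths assume the nf_conntrack module is loaded):

    #!/usr/bin/env python3
    # On a Linux NAT box each tracked flow occupies a conntrack slot;
    # once the table is full, new connections are dropped.
    for path in ("/proc/sys/net/netfilter/nf_conntrack_count",
                 "/proc/sys/net/netfilter/nf_conntrack_max"):
        with open(path) as f:
            print(path.rsplit("/", 1)[-1], "=", f.read().strip())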
I believe the original poster was specifically asking how to increase the file descriptor limit (ulimit -n) past 65k. That is most likely where the limitation comes in for the connections he is talking about. As someone else said, this is probably not the best place for this, but you can look at /etc/security/limits.conf and play with the soft and hard nofile limits. Try unlimited, maybe (see the sketch after the quote below).
-----Original Message-----
From: Simon Perreault [mailto:simon.perreault@viagenie.ca]
Sent: Thursday, October 14, 2010 11:07 AM
To: nanog@nanog.org
Subject: Re: How to have more than 65k concurrent connections open?
On 2010-10-14 12:53, Joel Jaeggli wrote:
you've only got 64511 ports per IP on the box to use for outgoing connections.
As long as you're not connecting to the same destination IP/port pair, the same source IP/port pair can be reused. So even for outgoing connections there is virtually no limit.
Simon
--
NAT64/DNS64 open-source --> http://ecdysis.viagenie.ca
STUN/TURN server        --> http://numb.viagenie.ca
vCard 4.0               --> http://www.vcarddav.org
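For completeness, a process can also raise its own soft nofile limit up to the hard limit at runtime; going beyond the hard limit takes root or an entry in /etc/security/limits.conf. A minimal sketch (Python 3; the limits.conf line in the comment is an example value, not a recommendation):

    #!/usr/bin/env python3
    # Raise this process's soft nofile limit to the hard limit.
    # Going past the hard limit needs root, or e.g. a line like
    #   * hard nofile 1000000
    # in /etc/security/limits.conf, followed by a fresh login.
    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("before: soft=%d hard=%d" % (soft, hard))
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
    print("after: ", resource.getrlimit(resource.RLIMIT_NOFILE))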
On Thu, 14 Oct 2010, johndole@hush.ai wrote:
So what can I do? How can I have more than 65k concurrent connections open on standard GNU/Linux?
Google around for "C500K" (a reference to the old C10K) which urbanairship recently posted about. Here are a few of the articles that should put you on the right track:
http://blog.urbanairship.com/blog/2010/08/24/c500k-in-action-at-urban-airshi...
http://blog.urbanairship.com/blog/2010/09/29/linux-kernel-tuning-for-c500k/
http://news.ycombinator.com/item?id=1740823
--
Simon Lyall  |  Very Busy  |  Web: http://www.darkmere.gen.nz/
"To stay awake all night adds a day to your life" - Stilgar | eMT.
participants (12)
- Blake Pfankuch
- D'Arcy J.M. Cain
- Greg Whynott
- Ingo Flaschberger
- Joel Jaeggli
- johndole@hush.ai
- Jorge Amodio
- Mikael Abrahamsson
- Phil Regnauld
- Simon Lyall
- Simon Perreault
- Wayne Lee