Linux TCP socket limit

What limits the maximum number of connections on a Linux server?

What kernel parameter or other settings control the maximum number of TCP sockets that can be open on a Linux server? What are the tradeoffs of allowing more connections?

I noticed while load testing an Apache server with ab that it’s pretty easy to max out the open connections on the server. If you leave off ab’s -k option, which allows connection reuse, and have it send more than about 10,000 requests, then Apache serves the first 11,000 or so requests and then halts for 60 seconds. A look at netstat output shows 11,000 connections in the TIME_WAIT state. Apparently, this is normal. Connections are kept open a default of 60 seconds even after the client is done with them for TCP reliability reasons.

It seems like this would be an easy way to DoS a server and I’m wondering what the usual tunings and precautions for it are. Here’s my test output:

# ab -c 5 -n 50000 http://localhost/
This is ApacheBench, Version 2.0.40-dev apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
apr_poll: The timeout specified has expired (70007)
Total of 11655 requests completed
# netstat --inet -p | grep "localhost:www" | sed -e 's/ \+/ /g' | cut -d' ' -f 1-4,6-7 | sort | uniq -c
  11651 tcp 0 0 localhost:www TIME_WAIT -
      1 tcp 0 1 localhost:44423 SYN_SENT 7831/ab
      1 tcp 0 1 localhost:44424 SYN_SENT 7831/ab
      1 tcp 0 1 localhost:44425 SYN_SENT 7831/ab
      1 tcp 0 1 localhost:44426 SYN_SENT 7831/ab
      1 tcp 0 1 localhost:44428 SYN_SENT 7831/ab

Increasing the maximum number of TCP/IP connections in Linux

I am programming a server and it seems like my number of connections is being limited since my bandwidth isn’t being saturated even when I’ve set the number of connections to "unlimited". How can I increase or eliminate a maximum number of connections that my Ubuntu Linux box can open at a time? Does the OS limit this, or is it the router or the ISP? Or is it something else?

@Software Monkey: I answered this anyway because I hope this might be useful to someone who actually is writing a server in the future.

@derobert: I saw that +1. Actually, I had the same thought after my previous comment, but thought I would let the comment stand.

5 Answers

The maximum number of connections is affected by certain limits on both the client and server sides, albeit a little differently.

On the client side: increase the ephemeral port range and decrease tcp_fin_timeout.

To find out the default values:

sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout

The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. tcp_fin_timeout defines the minimum time these sockets will stay in the TIME_WAIT state (unusable after being used once). Usual system defaults are:

net.ipv4.ip_local_port_range = 32768 61000
net.ipv4.tcp_fin_timeout = 60

This basically means your system cannot consistently guarantee more than (61000 - 32768) / 60 = 470 sockets per second. If you are not happy with that, you could begin by increasing the port range; setting it to 15000 61000 is pretty common these days. You can further increase availability by decreasing the fin_timeout. If you do both, you should be able to sustain well over 1,500 outbound connections per second.
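
A rough way to see what your current settings imply is to read the two values back and do the arithmetic. This is only a back-of-the-envelope sketch (it assumes bash and the sysctl utility, and ignores details of how the kernel actually reuses ports):

read low high < <(sysctl -n net.ipv4.ip_local_port_range)
fin=$(sysctl -n net.ipv4.tcp_fin_timeout)
echo "roughly $(( (high - low) / fin )) sustainable outbound connections per second"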

To change the values:

sysctl net.ipv4.ip_local_port_range="15000 61000"
sysctl net.ipv4.tcp_fin_timeout=30
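
Changes made with sysctl alone are lost on reboot. To make them persistent, something like the following is a common approach (run as root; this assumes a standard /etc/sysctl.conf, and many distributions also read drop-in files under /etc/sysctl.d/):

cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.ip_local_port_range = 15000 61000
net.ipv4.tcp_fin_timeout = 30
EOF
sysctl -p    # reload settings from /etc/sysctl.conf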

The numbers above should not be read as a hard limit on the system’s ability to make outbound connections per second; rather, these factors govern its ability to handle concurrent connections in a sustainable manner over long periods of activity.

Default sysctl values on a typical Linux box for tcp_tw_recycle and tcp_tw_reuse would be:

net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_tw_reuse=0

These defaults do not allow a connection from a "used" socket (one still in a wait state) and force sockets to last the complete TIME_WAIT cycle. I recommend setting:

sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1

This allows fast cycling of sockets in the TIME_WAIT state and reusing them. But before you make this change, make sure it does not conflict with the protocols used by the application that needs these sockets. Make sure to read the post "Coping with the TCP TIME-WAIT" by Vincent Bernat to understand the implications. The net.ipv4.tcp_tw_recycle option is quite problematic for public-facing servers, as it cannot handle connections from two different computers behind the same NAT device, which is a problem that is hard to detect and waiting to bite you. Note that net.ipv4.tcp_tw_recycle has been removed from Linux 4.12.

On the server side: the net.core.somaxconn value has an important role. It limits the maximum number of requests queued to a listen socket. If you are sure of your server application’s capability, bump it up from the default of 128 to something between 128 and 1024. Now you can take advantage of this increase by setting the backlog argument in your application’s listen call to an equal or higher integer.

sysctl net.core.somaxconn=1024 
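
To confirm the backlog actually took effect and to watch for overflows, a quick check like the following can help (it assumes the iproute2 ss tool and a listener on port 80; adjust the port to your service):

ss -lnt 'sport = :80'          # the Send-Q column shows the effective backlog
netstat -s | grep -i listen    # counters for overflowed or dropped listen queues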

The txqueuelen parameter of your Ethernet interfaces also has a role to play. The default value is 1000, so bump it up to 5000 or even more if your system can handle it.

ifconfig eth0 txqueuelen 5000
echo "/sbin/ifconfig eth0 txqueuelen 5000" >> /etc/rc.local
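
On newer systems where ifconfig is no longer installed, the iproute2 equivalent would presumably be (assuming the interface is named eth0):

ip link set dev eth0 txqueuelen 5000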

Similarly, bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024 respectively.

sysctl net.core.netdev_max_backlog=2000
sysctl net.ipv4.tcp_max_syn_backlog=2048

Now remember to start both your client- and server-side applications after increasing the file-descriptor ulimits in the shell.
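
For example, a minimal sketch (the binary name here is hypothetical, and the hard limit may also need raising in /etc/security/limits.conf):

ulimit -n 100000    # raise the open-file limit for this shell and its children
./my_server         # then launch the application from the same shell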

Besides the above, one more popular technique used by programmers is to reduce the number of TCP write calls. My own preference is to use a buffer into which I push the data I wish to send to the client, and then at appropriate points write the buffered data out to the actual socket. This technique lets me use large data packets, reduces fragmentation, and lowers my CPU utilization both in user land and at the kernel level.

Max number of sockets on Linux

It seems that the server is limited to ~32,720 sockets. I have tried every known variable change to raise this limit, but the server stays limited to 32,720 open sockets, even though there are still 4 GB of free memory and the CPU is 80% idle. Here’s the configuration:

~# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63931
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 798621
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 2048
cpu time (seconds, -t) unlimited
max user processes (-u) 63931
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

net.netfilter.nf_conntrack_max = 999999
net.ipv4.netfilter.ip_conntrack_max = 999999
net.nf_conntrack_max = 999999

Just so it’s said: If you need more than 32000 sockets at once, you have bigger problems than just that number being too low. A normal server doesn’t ever have more than a few hundred sockets (maybe even a couple thousand, for a busy server) open at once.

@TheSquad: do you have some security framework loaded, that limits the number of fd’s and/or connections?

Experience. Even extremely busy web sites rarely serve more than a couple thousand simultaneous clients; once they get to that point, they’re clustered or otherwise distributed to reduce load. And the QuakeNet IRC network, the best example I could think of for mass long-lived TCP client/server traffic, has maybe 80k simultaneous users spread over 40+ servers. That’s about 2k per server.

@mvds: The limit is most likely not due to security stuff — security would kick in WAY before 32k sockets.

8 Answers

If you’re dealing with OpenSSL and threads, go check your /proc/sys/vm/max_map_count and try to raise it.
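
For instance, a quick way to inspect and raise it might look like this (the 262144 figure is only an illustrative value, not a recommendation):

cat /proc/sys/vm/max_map_count
sysctl -w vm.max_map_count=262144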

In IPv4, the TCP header has 16 bits for the destination port and 16 bits for the source port.

Seeing that your limit is around 32K, I would expect that you are actually hitting the limit of outbound TCP connections you can make. You should be able to get a maximum of about 65K sockets (this is the protocol limit), which is the limit on the total number of named connections. Fortunately, binding a port for incoming connections only uses one. But if you are trying to test the number of connections from the same machine, you can only have about 65K total outgoing connections (for TCP). To test the number of incoming connections, you will need multiple computers.

Note: you can call socket(AF_INET, ...) up to the number of file descriptors available, but you cannot bind them all without increasing the number of ports available. To increase the range, do this:

echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range
(cat it to see what you currently have; the default is 32768 to 61000)
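
As a rough cross-check, you can count open TCP sockets per peer to see how close you are to the ~64K ceiling toward any single destination (this assumes the iproute2 ss tool is installed):

ss -tan | awk 'NR > 1 {print $5}' | sort | uniq -c | sort -rn | head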

Perhaps it is time for a new TCP-like protocol that allows 32 bits for the source and destination ports? But how many applications really need more than 65 thousand outbound connections?

The following program will allow 100,000 incoming connections on Linux Mint 16 (64-bit). You must run it as root to set the limits.

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <sys/resource.h>
#include <unistd.h>

// Print the current soft and hard limits on open file descriptors.
void ShowLimit()
{
    rlimit lim;
    int err = getrlimit(RLIMIT_NOFILE, &lim);
    printf("%1d limit: %1ld,%1ld\n", err, (long)lim.rlim_cur, (long)lim.rlim_max);
}

int main()
{
    ShowLimit();

    // Raise the open-file limit to 100,000 (the hard limit requires root).
    rlimit lim;
    lim.rlim_cur = 100000;
    lim.rlim_max = 100000;
    int err = setrlimit(RLIMIT_NOFILE, &lim);
    printf("set returned %1d\n", err);
    ShowLimit();

    // Listen on port 80 and accept connections forever, counting them.
    // Error checks on bind/listen are omitted, as in the original.
    int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in maddr;
    maddr.sin_family = AF_INET;
    maddr.sin_port = htons(80);
    maddr.sin_addr.s_addr = INADDR_ANY;

    err = bind(sock, (sockaddr *) &maddr, sizeof(maddr));
    err = listen(sock, 1024);

    int sockets = 0;
    while (true)
    {
        sockaddr_in raddr;
        socklen_t rlen = sizeof(raddr);
        err = accept(sock, (sockaddr *) &raddr, &rlen);
        if (err >= 0)
        {
            ++sockets;
            printf("%1d sockets accepted\n", sockets);
        }
    }
}
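
A minimal way to build and run it, assuming the source is saved as listen_many.cpp (the file name is hypothetical; the code relies on C++ conveniences such as bare struct names and true, so compile it as C++):

g++ -o listen_many listen_many.cpp
sudo ./listen_many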

What limits the maximum number of connections on a Linux server?

What kernel parameter or other settings control the maximum number of TCP sockets that can be open on a Linux server? What are the tradeoffs of allowing more connections?

I noticed while load testing an Apache server with ab that it’s pretty easy to max out the open connections on the server. If you leave off ab’s -k option, which allows connection reuse, and have it send more than about 10,000 requests then Apache serves the first 11,000 or so requests and then halts for 60 seconds. A look at netstat output shows 11,000 connections in the TIME_WAIT state. Apparently, this is normal. Connections are kept open a default of 60 seconds even after the client is done with them for TCP reliability reasons.

It seems like this would be an easy way to DoS a server and I’m wondering what the usual tunings and precautions for it are.

# ab -c 5 -n 50000 http://localhost/
This is ApacheBench, Version 2.0.40-dev apache-2.0
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Copyright 2006 The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 5000 requests
Completed 10000 requests
apr_poll: The timeout specified has expired (70007)
Total of 11655 requests completed

Here’s the netstat command I run during the test:

# netstat --inet -p | grep "localhost:www" | sed -e 's/ \+/ /g' | cut -d' ' -f 1-4,6-7 | sort | uniq -c
  11651 tcp 0 0 localhost:www TIME_WAIT -
      1 tcp 0 1 localhost:44423 SYN_SENT 7831/ab
      1 tcp 0 1 localhost:44424 SYN_SENT 7831/ab
      1 tcp 0 1 localhost:44425 SYN_SENT 7831/ab
      1 tcp 0 1 localhost:44426 SYN_SENT 7831/ab
      1 tcp 0 1 localhost:44428 SYN_SENT 7831/ab

Find below possible solutions and suggestions for the above question.

Suggestion 1:

I finally found the setting that was really limiting the number of connections: net.ipv4.netfilter.ip_conntrack_max. This was set to 11,776, and whatever I set it to is the number of requests I can serve in my test before having to wait tcp_fin_timeout seconds for more connections to become available. The conntrack table is what the kernel uses to track the state of connections, so once it’s full, the kernel starts dropping packets and printing this in the log:

Jun 2 20:39:14 XXXX-XXX kernel: ip_conntrack: table full, dropping packet. 

The next step was getting the kernel to recycle all those connections in the TIME_WAIT state rather than dropping packets. I could get that to happen either by turning on tcp_tw_recycle or by increasing ip_conntrack_max to be larger than the number of local ports made available for connections by ip_local_port_range. I guess once the kernel is out of local ports it starts recycling connections. This uses more memory tracking connections, but it seems like a better solution than turning on tcp_tw_recycle, since the docs imply that that is dangerous.
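
If you suspect the same issue, a quick way to see how full the conntrack table is before raising the limit looks roughly like this (the paths assume a modern kernel with nf_conntrack loaded; on older kernels the ip_conntrack names used above apply instead, and 131072 is only an example value):

cat /proc/sys/net/netfilter/nf_conntrack_count   # entries currently tracked
cat /proc/sys/net/netfilter/nf_conntrack_max     # the current ceiling
sysctl -w net.netfilter.nf_conntrack_max=131072  # raise it if it is nearly full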

With this configuration I can run ab all day and never run out of connections:

net.ipv4.netfilter.ip_conntrack_max = 32768
net.ipv4.tcp_tw_recycle = 0
net.ipv4.tcp_tw_reuse = 0
net.ipv4.tcp_orphan_retries = 1
net.ipv4.tcp_fin_timeout = 25
net.ipv4.tcp_max_orphans = 8192
net.ipv4.ip_local_port_range = 32768 61000

The tcp_max_orphans setting didn’t have any effect on my tests and I don’t know why. I would think it would close the connections in TIME_WAIT state once there were 8192 of them but it doesn’t do that for me.
