Maximum number of connections in Linux

What is the theoretical maximum number of open TCP connections that a modern Linux box can have

Assuming infinite performance from hardware, can a Linux box support more than 65536 open TCP connections? I understand that the number of ephemeral ports (<65536) limits the number of connections from one local IP to one port on one remote IP. The tuple (local ip, local port, remote ip, remote port) is what uniquely defines a TCP connection; does this imply that more than 65K connections can be supported if more than one of these parameters is free, e.g. connections to a single port number on multiple remote hosts from multiple local IPs? Is there another 16-bit limit in the system? The number of file descriptors, perhaps?

3 Answers

A single listening port can accept more than one connection simultaneously.

There is a ’64K’ limit that is often cited, but that is per client per server port, and needs clarifying.

Each TCP/IP packet has basically four fields for addressing. These are:

source_ip source_port destination_ip destination_port

Inside the TCP stack, these four fields are used as a compound key to match up packets to connections (e.g. file descriptors).

If a client has many connections to the same port on the same destination, then three of those fields will be the same — only source_port varies to differentiate the different connections. Ports are 16-bit numbers, therefore the maximum number of connections any given client can have to any given host port is 64K.
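
To make the 4-tuple concrete, here is a minimal sketch in Python (the destination host and port are placeholder assumptions) that opens several connections to the same remote port and prints each connection's address pair; only the local source port differs between them.

import socket

# A minimal sketch: open a few TCP connections to the same destination and
# print each connection's 4-tuple. DEST is a placeholder; point it at any
# reachable TCP service.
DEST = ("example.com", 80)

socks = [socket.create_connection(DEST) for _ in range(3)]
for s in socks:
    local_ip, local_port = s.getsockname()[:2]
    remote_ip, remote_port = s.getpeername()[:2]
    # Three fields match across all connections; only local_port varies.
    print(f"{local_ip}:{local_port} -> {remote_ip}:{remote_port}")
for s in socks:
    s.close()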

However, multiple clients can each have up to 64K connections to some server’s port, and if the server has multiple ports or either is multi-homed then you can multiply that further.

So the real limit is file descriptors. Each individual socket connection is given a file descriptor, so the limit is really the number of file descriptors the system has been configured to allow and has the resources to handle. The maximum is typically well over 300K and is configurable, e.g. with sysctl.
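
For instance, the relevant limits can be inspected, and the per-process one raised, from a short script; this is a sketch using standard interfaces, not a tuning recommendation.

import resource

# A minimal sketch: show the per-process and system-wide file descriptor
# limits that ultimately cap the number of simultaneous sockets.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("per-process fd limit (soft/hard):", soft, hard)

with open("/proc/sys/fs/file-max") as f:
    print("system-wide fd limit:", f.read().strip())

# The soft limit can be raised up to the hard limit without extra privileges.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))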

The realistic limits reported for ordinary boxes are around 80K, for example for single-threaded Jabber messaging servers.

Source

Is there a limit on number of tcp/ip connections between machines on linux?

I have a very simple program, written in five minutes, that opens a server socket and loops over the requests, printing the bytes sent to it to the screen. I then tried to benchmark how many connections I can hammer it with, to find out how many concurrent users this program could support. On another machine (where the network between them is not saturated) I created a simple program that loops, connects to the server machine and sends the bytes "hello world". When the loop runs 1000-3000 times, the client finishes with all requests sent. When the loop goes beyond 5000, it starts to get timeouts after finishing the first X requests. Why is this? I have made sure to close my socket in the loop. Can you only create so many connections within a certain period of time? Is this limit only applicable between the same machines, and need I not worry about this in production, where 5000+ requests all come from different machines?
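
For reference, a minimal sketch of the kind of client loop described (the host, port, and iteration count are placeholders, not taken from the question):

import socket

# A minimal sketch of the benchmark client: each iteration opens a fresh
# connection, sends a short payload and closes the socket, leaving it in
# TIME_WAIT on the client side. HOST, PORT and COUNT are placeholders.
HOST, PORT, COUNT = "192.0.2.10", 9000, 5000

for i in range(COUNT):
    s = socket.create_connection((HOST, PORT), timeout=5)
    s.sendall(b"hello world")
    s.close()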


You can monitor your sockets using the ss -s command, and follow the steps to increase the socket limits if needed.

You can reuse sockets in TIME_WAIT by setting SO_REUSEADDR, for example:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

6 Answers

There is a limit, yes. See ulimit.

In addition, you need to consider the TIME_WAIT state. Once a TCP socket is closed, by default the port remains occupied in TIME_WAIT status for 2 minutes. This value is tunable. This will also "run you out of sockets" even though they are closed.

Run netstat to see the TIME_WAIT stuff in action.
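
Alternatively, a short script can count sockets per state by reading /proc/net/tcp directly (a sketch; the hex state codes come from the kernel's TCP state enum, with 06 being TIME_WAIT):

from collections import Counter

# A minimal sketch: count IPv4 TCP sockets per state by parsing /proc/net/tcp.
# The fourth column ("st") holds the state as a hex code: 01 = ESTABLISHED,
# 06 = TIME_WAIT, 0A = LISTEN.
STATES = {"01": "ESTABLISHED", "06": "TIME_WAIT", "0A": "LISTEN"}

counts = Counter()
with open("/proc/net/tcp") as f:
    next(f)  # skip the header line
    for line in f:
        state = line.split()[3]
        counts[STATES.get(state, state)] += 1

print(counts)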

P.S. The reason for TIME_WAIT is to handle the case of packets arriving after the socket is closed. This can happen because packets are delayed or the other side just doesn’t know that the socket has been closed yet. This allows the OS to silently drop those packets without a chance of "infecting" a different, unrelated socket connection.

I just checked with netstat on both machines, and there are indeed a ton of sockets in TIME_WAIT on the client side but none on the server side. Is this the behavior you are describing? Assuming yes: 1) Does this mean this won’t be an issue in production, because the limit seems to be coming from the client side (running out of sockets) and not the server side (where no sockets are created)? 2) Is there a way to get around this so I can test my server with load similar to production?

The behavior of TIME_WAIT is OS-specific. Yes, you can get around it: it is possible to change the TIME_WAIT timeout, e.g. from 120 seconds to 30 or even less.

When looking for maximum performance you run into a lot of issues and potential bottlenecks. Running a simple hello-world test is not necessarily going to find them all.

Possible limitations include:

  • Kernel socket limitations: look in /proc/sys/net for lots of kernel tuning knobs.
  • Process limits: check out ulimit, as others have stated here.
  • As your application grows in complexity, it may not have enough CPU power to keep up with the number of incoming connections; use top to see if your CPU is maxed out.
  • Number of threads? I’m not experienced with threading, but this may come into play in conjunction with the previous items.

Is your server single-threaded? If so, what polling / multiplexing function are you using?

Using select() does not work beyond the hard-coded maximum file descriptor limit set at compile time (FD_SETSIZE, normally 256 or 1024 depending on the platform), which is hopeless.

poll() is better, but you will run into scalability problems with a large number of FDs, because the set has to be repopulated each time around the loop.

epoll() should work well up to whatever other limit you hit next.

10k connections should be easy enough to achieve. Use a recent(ish) 2.6 kernel.
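
For illustration, here is a minimal sketch of a single-threaded echo server multiplexed via Python's selectors module, which picks epoll on Linux; the port and backlog values are assumptions.

import selectors
import socket

# A minimal sketch of a single-threaded, epoll-based echo server.
# selectors.DefaultSelector uses epoll on Linux; port 9000 and the backlog
# of 1024 are assumptions.
sel = selectors.DefaultSelector()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen(1024)
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

while True:
    for key, _events in sel.select():
        if key.fileobj is srv:
            conn, _addr = srv.accept()        # new client connection
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            conn = key.fileobj
            data = conn.recv(4096)
            if data:
                conn.send(data)               # echo; may short-write under load
            else:
                sel.unregister(conn)          # peer closed the connection
                conn.close()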


How many client machines did you use? Are you sure you didn’t hit a client-side limit?

On my system the hard-coded select() limit was 1024, and it is indeed impossible to go above it (the limit is imposed by the data type that holds the map of file descriptors to watch).

The quick answer is 2^16 TCP ports, 64K.

The issue with system-imposed limits is a configuration matter, already touched upon in previous comments.

The internal implications for TCP are not so clear (to me). Each port requires memory for its instantiation, goes onto a list, and needs network buffers for data in transit.

Given 64K TCP sessions, the overhead for the port instances might be an issue on a 32-bit kernel, but not on a 64-bit kernel (corrections here gladly accepted). The lookup process with 64K sessions can slow things down a bit, and every packet hits the timer queues, which can also be problematic. Storage for in-transit data can theoretically swell to the window size times the number of ports (for example, a 128 KB window across 64K sessions is about 8 GByte).

The issue with connection speed (mentioned above) is probably what you are seeing. TCP generally takes time to do things, but it does not have to. A TCP connect, transact and disconnect can be done very efficiently (check how the TCP sessions are created and closed).

There are systems that pass tens of gigabits per second, so the packet level scaling should be OK.

There are machines with plenty of physical memory, so that looks OK.

The performance of the system, if carefully configured should be OK.

The server side of things should scale in a similar fashion.

I would be concerned about things like memory bandwidth.

Consider an experiment where you login to the local host 10,000 times. Then type a character. The entire stack through user space would be engaged on each character. The active footprint would likely exceed the data cache size. Running through lots of memory can stress the VM system. The cost of context switches could approach a second!

Source

Increasing the maximum number of TCP/IP connections in Linux

I am programming a server, and it seems like my number of connections is being limited, since my bandwidth isn’t being saturated even when I’ve set the number of connections to "unlimited". How can I increase or eliminate the maximum number of connections that my Ubuntu Linux box can open at a time? Does the OS limit this, or is it the router or the ISP? Or is it something else?


5 Answers

The maximum number of connections is affected by certain limits on both the client and server sides, albeit a little differently.

On the client side: increase the ephemeral port range, and decrease the tcp_fin_timeout.

To find out the default values:

sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout

The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. The fin_timeout defines the minimum time these sockets will stay in TIME_WAIT state (unusable after being used once). Usual system defaults are:

net.ipv4.ip_local_port_range = 32768 61000
net.ipv4.tcp_fin_timeout = 60

This basically means your system cannot consistently guarantee more than (61000 - 32768) / 60 = 470 sockets per second. If you are not happy with that, you could begin by increasing the port_range. Setting the range to 15000 61000 is pretty common these days. You could further increase availability by decreasing the fin_timeout. Suppose you do both; you should see over 1500 outbound connections per second, more readily.
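
The same arithmetic can be reproduced from the live settings of a given box (a sketch reading the /proc/sys files that back these sysctls):

# A minimal sketch: recompute the outbound-connection-rate estimate above
# from the kernel settings of the machine it runs on.
with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
    low, high = map(int, f.read().split())
with open("/proc/sys/net/ipv4/tcp_fin_timeout") as f:
    fin_timeout = int(f.read())

print(f"ephemeral ports available: {high - low}")
print(f"~{(high - low) // fin_timeout} sustainable outbound connections/second"
      f" to a single remote ip:port")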

To change the values:

sysctl net.ipv4.ip_local_port_range="15000 61000"
sysctl net.ipv4.tcp_fin_timeout=30

The above should not be interpreted as the factors limiting the system’s capability to make outbound connections per second; rather, these factors affect the system’s ability to handle concurrent connections in a sustainable manner over long periods of "activity".

Default Sysctl values on a typical Linux box for tcp_tw_recycle & tcp_tw_reuse would be

net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_tw_reuse=0

These do not allow a connection from a "used" socket (one in a wait state) and force the sockets to last the complete TIME_WAIT cycle. I recommend setting:

sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1

This allows fast cycling of sockets in the TIME_WAIT state and reusing them. But before you make this change, make sure it does not conflict with the protocols you would use for the application that needs these sockets. Make sure to read the post "Coping with the TCP TIME-WAIT" by Vincent Bernat to understand the implications. The net.ipv4.tcp_tw_recycle option is quite problematic for public-facing servers, as it won’t handle connections from two different computers behind the same NAT device, which is a problem that is hard to detect and waiting to bite you. Note that net.ipv4.tcp_tw_recycle has been removed as of Linux 4.12.

On the server side: the net.core.somaxconn value has an important role. It limits the maximum number of requests queued on a listen socket. If you are sure of your server application’s capability, bump it up from the default of 128 to something like 1024. Now you can take advantage of this increase by setting the listen backlog variable in your application’s listen call to an equal or higher integer.

sysctl net.core.somaxconn=1024 
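
A sketch of the matching application-side change (the port number is an assumption): size the backlog passed to listen() from the raised somaxconn value.

import socket

# A minimal sketch: read net.core.somaxconn and use it as the listen()
# backlog so the application benefits from the raised kernel limit.
# Port 9000 is an assumption; the kernel caps the backlog at somaxconn anyway.
with open("/proc/sys/net/core/somaxconn") as f:
    backlog = int(f.read())

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))
srv.listen(backlog)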

The txqueuelen parameter of your Ethernet cards also has a role to play. The default value is 1000, so bump it up to 5000 or even more if your system can handle it.

ifconfig eth0 txqueuelen 5000
echo "/sbin/ifconfig eth0 txqueuelen 5000" >> /etc/rc.local

Similarly bump up the values for net.core.netdev_max_backlog and net.ipv4.tcp_max_syn_backlog. Their default values are 1000 and 1024, respectively.

sysctl net.core.netdev_max_backlog=2000
sysctl net.ipv4.tcp_max_syn_backlog=2048

Now remember to start both your client- and server-side applications with increased FD ulimits in the shell.

Besides the above, one more popular technique used by programmers is to reduce the number of TCP write calls. My own preference is to use a buffer into which I push the data I wish to send to the client, and then at appropriate points write the buffered data out to the actual socket. This technique lets me use large data packets, reduces fragmentation, and lowers my CPU utilization both in user land and at the kernel level.
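
A minimal sketch of that buffering pattern (the 16 KiB flush threshold is an arbitrary assumption; sock is an already-connected TCP socket):

class BufferedWriter:
    """A minimal sketch of coalescing many small writes into fewer TCP sends."""

    def __init__(self, sock, threshold=16 * 1024):
        self.sock = sock
        self.threshold = threshold  # arbitrary flush threshold (assumption)
        self.chunks = []
        self.size = 0

    def write(self, data: bytes):
        self.chunks.append(data)
        self.size += len(data)
        if self.size >= self.threshold:
            self.flush()

    def flush(self):
        if self.chunks:
            self.sock.sendall(b"".join(self.chunks))  # one large write call
            self.chunks.clear()
            self.size = 0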

Source
