TCP connection limit on Linux

Is there a limit on the number of TCP/IP connections between machines on Linux?

I have a very simple program, written in five minutes, that opens a server socket, loops over incoming requests, and prints the bytes sent to it to the screen. I then tried to benchmark how many connections I can hammer it with, to find out how many concurrent users this program could support. On another machine (where the network between them is not saturated) I created a simple program that goes into a loop, connecting to the server machine and sending it the bytes "hello world". When the loop runs 1000-3000 times, the client finishes with all requests sent. When the loop goes beyond 5000, it starts to get timeouts after the first X requests have finished. Why is this? I have made sure to close my socket in the loop. Can you only create so many connections within a certain period of time? And is this limit only applicable between the same two machines, so that I need not worry about it in production, where 5000+ requests will all be coming from different machines?
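
In outline, the client loop looks something like this (a sketch; the server address is hypothetical, and a timeout is set so the stalls show up as errors rather than hangs):

    import socket

    SERVER = ("192.0.2.10", 9000)   # hypothetical server address and port

    for i in range(5000):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)             # make the stalls visible as timeouts
        s.connect(SERVER)
        s.sendall(b"hello world")
        s.close()                   # closed, but the port lingers (see below)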

You can monitor your sockets using the ss -s command, and increase the socket limits if needed.

You can reuse sockets in the TIME_WAIT state by setting SO_REUSEADDR before binding:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

6 Answers

There is a limit, yes. See ulimit.

In addition, you need to consider the TIME_WAIT state. Once a TCP socket is closed, (by default) the port remains occupied in TIME_WAIT status for 2 minutes. This value is tunable. This will also "run you out of sockets" even though they are closed.

Run netstat to see the TIME_WAIT stuff in action.

P.S. The reason for TIME_WAIT is to handle the case of packets arriving after the socket is closed. This can happen because packets are delayed or the other side just doesn’t know that the socket has been closed yet. This allows the OS to silently drop those packets without a chance of "infecting" a different, unrelated socket connection.

I just checked with netstat on both machines, and there are indeed a ton of TIME_WAIT sockets on the client side but none on the server side. Is this the behaviour you are describing? Assuming yes: 1) Does this mean it won’t be an issue in production, because the limit seems to be coming from the client side (running out of sockets) and not the server side (where no new sockets are created)? 2) Is there a way to get around this so I can test my server with load similar to production?

The behaviour of TIME_WAIT is OS-specific. Yes, you can get around it: it’s possible to change the TIME_WAIT timeout, e.g. from 120 seconds to 30 or even less.
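
For load testing specifically, another option is to close the client sockets abortively so they never enter TIME_WAIT at all: setting SO_LINGER with a zero timeout makes close() send an RST instead of the normal FIN handshake. A sketch (the address is hypothetical; note that an RST close discards any unsent data, so this is for testing only):

    import socket, struct

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # l_onoff=1, l_linger=0: close() sends RST and skips TIME_WAIT
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    s.connect(("192.0.2.10", 9000))
    s.sendall(b"hello world")
    s.close()                       # no TIME_WAIT entry is left behind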

When looking for maximum performance you run into a lot of issues and potential bottlenecks. Running a simple hello-world test is not necessarily going to find them all.

Possible limitations include:

  • Kernel socket limitations: look in /proc/sys/net for lots of kernel tuning options.
  • Process limits: check out ulimit, as others have stated here (see the sketch after this list).
  • As your application grows in complexity, it may not have enough CPU power to keep up with the number of connections coming in. Use top to see if your CPU is maxed out.
  • Number of threads? I’m not experienced with threading, but this may come into play in conjunction with the previous items.
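
As a quick check of the second item, a process can query its own file-descriptor limit from the inside; a sketch using Python’s standard resource module:

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("per-process fd limit: soft=%d, hard=%d" % (soft, hard))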

Is your server single-threaded? If so, what polling / multiplexing function are you using?

Using select() does not work beyond the hard-coded maximum file descriptor limit set at compile time, which is hopeless (normally 1024 on Linux via FD_SETSIZE, historically as low as 256).

poll() is better, but you will run into scalability problems with a large number of FDs, since the whole set has to be repopulated (and rescanned by the kernel) each time around the loop.

epoll() should work well, up to whatever other limit you hit next.

10k connections should be easy enough to achieve. Use a recent(ish) 2.6 kernel.
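
To make the epoll() suggestion concrete, here is a rough sketch of a single-threaded server like the one described in the question, using Python’s select.epoll wrapper (the port number is arbitrary):

    import select, socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 9000))
    srv.listen(1024)
    srv.setblocking(False)

    ep = select.epoll()
    ep.register(srv.fileno(), select.EPOLLIN)
    conns = {}

    while True:
        for fd, events in ep.poll():
            if fd == srv.fileno():              # new connection
                conn, addr = srv.accept()
                conn.setblocking(False)
                ep.register(conn.fileno(), select.EPOLLIN)
                conns[conn.fileno()] = conn
            else:
                conn = conns[fd]
                data = conn.recv(4096)
                if data:
                    print(data)                 # print the bytes sent to us
                else:                           # peer closed the connection
                    ep.unregister(fd)
                    conns.pop(fd).close()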

How many client machines did you use? Are you sure you didn’t hit a client-side limit?

On my system the hard-coded select() limit was 1024, and it is indeed impossible to go above it (the limit is imposed by the fixed-size fd_set type that holds the bitmap of file descriptors to watch).

The quick answer is 2^16 TCP ports, 64K.

The issue of system-imposed limits is a configuration matter, already touched upon in the previous comments.

The internal implications for TCP are not so clear (to me). Each port requires memory for its instantiation, goes onto a list, and needs network buffers for data in transit.

Given 64K TCP sessions, the overhead for instances of the ports might be an issue on a 32-bit kernel, but not on a 64-bit kernel (corrections here gladly accepted). The lookup process with 64K sessions can slow things a bit, and every packet hits the timer queues, which can also be problematic. Storage for data in transit could in theory swell to the window size times the number of ports (maybe 8 GByte).
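
The 8 GByte figure presumably comes from multiplying the full port space by an assumed per-session window, e.g. 128 KiB of in-flight data each (an illustrative assumption):

    sessions = 2 ** 16                   # 64K TCP sessions
    window = 128 * 1024                  # assumed 128 KiB in transit per session
    print(sessions * window / 2 ** 30)   # -> 8.0 (GiB)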

The issue with connection speed (mentioned above) is probably what you are seeing. TCP generally takes its time over things. However, it is not required to: a TCP connect, transact and disconnect can be done very efficiently (check to see how your TCP sessions are created and closed).

There are systems that pass tens of gigabits per second, so the packet level scaling should be OK.

There are machines with plenty of physical memory, so that looks OK.

The performance of the system, if carefully configured, should be OK.

The server side of things should scale in a similar fashion.

I would be concerned about things like memory bandwidth.

Consider an experiment where you log in to the local host 10,000 times. Then you type a character. The entire stack through user space would be engaged on each character. The active footprint would likely exceed the data cache size. Running through lots of memory can stress the VM system. The cost of the context switches could approach a second!


What is the theoretical maximum number of open TCP connections that a modern Linux box can have

Assuming infinite performance from the hardware, can a Linux box support more than 65536 open TCP connections? I understand that the number of ephemeral ports (<65536) limits the number of connections from one local IP to one port on one remote IP. The tuple (local ip, local port, remote ip, remote port) is what uniquely defines a TCP connection; does this imply that more than 64K connections can be supported if more than one of these parameters is free, e.g. connections to a single port number on multiple remote hosts from multiple local IPs? Is there another 16-bit limit in the system? The number of file descriptors, perhaps?


3 Answers

A single listening port can accept more than one connection simultaneously.

There is a ’64K’ limit that is often cited, but that is per client per server port, and needs clarifying.

Each TCP/IP packet has basically four fields for addressing. These are:

    source_ip
    source_port
    destination_ip
    destination_port

Inside the TCP stack, these four fields are used as a compound key to match up packets to connections (e.g. file descriptors).

If a client has many connections to the same port on the same destination, then three of those fields will be the same — only source_port varies to differentiate the different connections. Ports are 16-bit numbers, therefore the maximum number of connections any given client can have to any given host port is 64K.

However, multiple clients can each have up to 64K connections to some server’s port, and if the server has multiple ports or either is multi-homed then you can multiply that further.
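
As a back-of-the-envelope illustration of that multiplication (all counts here are made up):

    source_ports = 2 ** 16 - 1024     # usable source ports per client IP, roughly
    clients = 1000                    # number of distinct client IPs
    server_ports = 2                  # e.g. the server listens on ports 80 and 443
    print(clients * source_ports * server_ports)  # distinct 4-tuples, ~129 million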

So the real limit is file descriptors. Each individual socket connection is given a file descriptor, so the limit is really the number of file descriptors that the system has been configured to allow and resources to handle. The maximum limit is typically up over 300K, but is configurable e.g. with sysctl.
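
A process that wants to hold that many sockets usually also has to raise its own descriptor limit; a sketch of doing so from inside the process (the soft limit can be raised as far as the hard limit without extra privileges):

    import resource

    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))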

The realistic limits boasted about for normal boxes are around 80K, for example for single-threaded Jabber messaging servers.


How many socket connections possible?

Does anyone have an idea how many TCP socket connections are possible on a modern standard Linux server? (There is generally little traffic on each connection, but all the connections have to be up all the time.)

For Windows, see this question: Which is the maximum number of Windows concurrent tcp/ip connections? (stackoverflow.com/questions/413110/…)

8 Answers

I achieved 1600k concurrent idle socket connections, and at the same time 57k req/s, on a Linux desktop (16G RAM, i7-2600 CPU). It’s a single-threaded HTTP server written in C with epoll. The source code is on GitHub, and there is a blog post about it here.

I did 600k concurrent HTTP connections (client and server both on the same computer) with Java/Clojure. Detailed info is in the post; HN discussion: http://news.ycombinator.com/item?id=5127251

The cost of a connection (with epoll):

  • the application needs some RAM per connection
  • TCP buffers: 2 * 4k to ~10k each, or more
  • epoll needs some memory per file descriptor; from epoll(7):

Each registered file descriptor costs roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel.
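
Taking the 64-bit figure from epoll(7) at face value, the bookkeeping for the 1600k idle connections above is modest:

    conns = 1600 * 1000                  # the 1.6M idle connections mentioned above
    per_fd = 160                         # bytes per registered fd, 64-bit kernel
    print(conns * per_fd / 2 ** 20)      # ~244 (MiB) just for epoll state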

@Bangash My comment has absolutely nothing to do with Erlang, or really anything other than the fact that leef posted a comment talking about 1 million socket connections on a single box, but this answer talks about 1.6 million — hence it seemed like a bit of a silly comment. Erlang is great — powers CouchDB. However, I don’t see how your comment has any relevance here.

This depends not only on the operating system in question, but also on configuration, potentially real-time configuration.

cat /proc/sys/fs/file-max

will show the current maximum total number of file descriptors allowed to be opened simultaneously. Check out http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html

Just checked on my Ubuntu (13.04) laptop: 386491. I doubt this will be the first limit I would run into.

A limit on the number of open sockets is configurable in the /proc file system

The max for incoming connections in the OS is defined by integer limits.

Linux itself allows billions of open sockets.

To use the sockets you need an application listening, e.g. a web server, and that will use a certain amount of RAM per socket.


RAM and CPU will introduce the real limits. (As of 2017: think millions, not billions.)

1 million is possible, but not easy. Expect to use X gigabytes of RAM to manage 1 million sockets.

Outgoing TCP connections are limited by port numbers, roughly 65000 per IP. You can have multiple IP addresses, but not an unlimited number of them. This is a limit in TCP, not in Linux.
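
You can see the per-IP outgoing budget directly: Linux exposes the ephemeral port range in /proc, and its width is the number of simultaneous outgoing connections possible from one source IP to one (destination IP, destination port) pair. A sketch:

    # read Linux's ephemeral (outgoing) port range
    with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
        low, high = map(int, f.read().split())
    print("outgoing connections per (src IP, dst IP, dst port):", high - low + 1)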

FreeBSD is probably the server OS you want. Here’s a little blog post about tuning it to handle 100,000 connections; it has had some interesting features, like zero-copy sockets, for some time now, along with kqueue to act as a completion-port mechanism.

Solaris could handle 100,000 connections back in the last century! They say Linux would be better.

The best description I’ve come across is this presentation/paper on writing a scalable webserver. He’s not afraid to say it like it is 🙂

Same for software: the cretins on the application layer forced great innovations on the OS layer. Because Lotus Notes keeps one TCP connection per client open, IBM contributed major optimizations for the "one process, 100,000 open connections" case to Linux.

And the O(1) scheduler was originally created to score well on some irrelevant Java benchmark. The bottom line is that this bloat benefits all of us.

I stopped at 70,000 because it was more than my client required, so the test passed. With the changes in how non-paged pool limits are calculated, I would imagine that a Windows Server 2008 machine would have no problem with 100,000 connections.

@BrianCline You probably don’t need this anymore, but I also wanted it and I think I found it: slideshare.net/Arbow/scalable-networking (slide 33)

On Linux you should be looking at using epoll for async I/O. It might also be worth fine-tuning socket-buffers to not waste too much kernel space per connection.

I would guess that you should be able to reach 100k connections on a reasonable machine.
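
The socket-buffer fine-tuning mentioned above can be done per socket with setsockopt; a sketch with illustrative sizes (the kernel roughly doubles whatever value you request, to leave room for bookkeeping):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # shrink per-connection kernel buffers when connections carry little data
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8192)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 8192)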

It depends on the application. If there are only a few packets from each client, 100K is very easy for Linux. An engineer on my team ran a test years ago; the result showed that when no packets arrived from the clients after the connections were established, Linux epoll could watch 400k FDs for readability at under 50% CPU usage.

For Windows machines, if you’re writing a server to scale well, and therefore using I/O completion ports and async I/O, then the main limitation is the amount of non-paged pool that you’re using for each active connection. This translates directly into a limit based on the amount of memory your machine has installed (non-paged pool is a finite, fixed-size amount that is based on the total memory installed).

For connections that don’t see much traffic you can make them more efficient by posting ‘zero byte reads’, which don’t use non-paged pool and don’t affect the locked-pages limit (another potentially limited resource that may prevent you from having lots of socket connections open).

Apart from that, well, you will need to profile, but I’ve managed to get more than 70,000 concurrent connections on a modestly specified (760MB memory) server; see http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html for more details.

Obviously if you’re using a less efficient architecture such as ‘thread per connection’ or ‘select’ then you should expect less impressive figures; but, IMHO, there’s simply no reason to choose such architectures for Windows socket servers.

