Linux TCP socket limits

Max number of sockets on Linux

It seems that the server is limited to ~32720 sockets. I have tried every known variable change to raise this limit, but the server stays limited to 32720 open sockets, even with 4 GB of free memory and 80% idle CPU. Here's the configuration:

~# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63931
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 798621
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 2048
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63931
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

net.netfilter.nf_conntrack_max = 999999
net.ipv4.netfilter.ip_conntrack_max = 999999
net.nf_conntrack_max = 999999

Just so it’s said: If you need more than 32000 sockets at once, you have bigger problems than just that number being too low. A normal server doesn’t ever have more than a few hundred sockets (maybe even a couple thousand, for a busy server) open at once.

@TheSquad: do you have some security framework loaded, that limits the number of fd’s and/or connections?

Experience. Even extremely busy web sites rarely serve more than a couple thousand simultaneous clients — once they get to that point, they’re clustered or otherwise distributed to reduce load. And the QuakeNet IRC network, the best example I could think of for mass long-lived TCP client/server stuff, has maybe 80k simultaneous users spread over 40+ servers. That’s about 2k per server.

@mvds: The limit is most likely not due to security stuff — security would kick in WAY before 32k sockets.

8 Answers

If you’re dealing with OpenSSL and threads, check your /proc/sys/vm/max_map_count and try raising it.
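For example, to inspect the current value and raise it (the number below is illustrative, not a recommendation; run as root):

cat /proc/sys/vm/max_map_count
sysctl -w vm.max_map_count=262144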

In IPv4, the TCP layer has 16 bits for the destination port, and 16 bits for the source port.

Seeing that your limit is ~32K, I would expect that you are actually hitting the limit of outbound TCP connections you can make. You should be able to get a maximum of ~64K sockets (this is the protocol limit, since a port number is 16 bits): that is the total number of distinctly named connections a single source address can make to a single destination address and port. Fortunately, binding a port for incoming connections only uses one. But if you are trying to test the number of connections from the same machine, you can only have ~64K total outgoing connections (for TCP). To test the number of incoming connections, you will need multiple computers.

Note: you can call socket(AF_INET, ...) up to the number of file descriptors available, but you cannot bind them all without increasing the number of ports available. To increase the range, do this:


echo "1024 65535" > /proc/sys/net/ipv4/ip_local_port_range

(cat it to see what you currently have — the default is 32768 to 61000.)
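The same change can be made with sysctl, and persisted across reboots via /etc/sysctl.conf:

sysctl -w net.ipv4.ip_local_port_range="1024 65535"
echo "net.ipv4.ip_local_port_range = 1024 65535" >> /etc/sysctl.conf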

Perhaps it is time for a new TCP-like protocol that will allow 32 bits for the source and destination ports? But how many applications really need more than 65 thousand outbound connections?

The following will allow 100,000 incoming connections on Linux Mint 16 (64-bit). (You must run it as root to set the limits.)

#include <cstdio>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/resource.h>
#include <netinet/in.h>
#include <unistd.h>

void ShowLimit()
{
    rlimit lim;
    int err = getrlimit(RLIMIT_NOFILE, &lim);
    printf("%1d limit: %1ld,%1ld\n", err, (long)lim.rlim_cur, (long)lim.rlim_max);
}

int main()
{
    ShowLimit();

    // Raise the file-descriptor limit to 100,000 (raising the hard limit needs root).
    rlimit lim;
    lim.rlim_cur = 100000;
    lim.rlim_max = 100000;
    int err = setrlimit(RLIMIT_NOFILE, &lim);
    printf("set returned %1d\n", err);

    ShowLimit();

    // Listen on port 80 and count accepted connections;
    // the accepted descriptors are deliberately kept open.
    int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in maddr;
    maddr.sin_family = AF_INET;
    maddr.sin_port = htons(80);
    maddr.sin_addr.s_addr = INADDR_ANY;
    err = bind(sock, (sockaddr *)&maddr, sizeof(maddr));
    err = listen(sock, 1024);

    int sockets = 0;
    while (true) {
        sockaddr_in raddr;
        socklen_t rlen = sizeof(raddr);
        err = accept(sock, (sockaddr *)&raddr, &rlen);
        if (err >= 0) {
            ++sockets;
            printf("%1d sockets accepted\n", sockets);
        }
    }
}
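To try the listing (the file name is hypothetical; root is needed both to raise the hard limit and to bind port 80):

g++ -o accept_test accept_test.cpp
sudo ./accept_test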


What is the theoretical maximum number of open TCP connections that a modern Linux box can have?

Assuming infinite performance from hardware, can a Linux box support more than 65536 open TCP connections? I understand that the number of ephemeral ports (<65536) limits the number of connections from one local IP to one port on one remote IP. The tuple (local ip, local port, remote ip, remote port) is what uniquely defines a TCP connection; does this imply that more than 65K connections can be supported if more than one of these parameters is free, e.g. connections to a single port number on multiple remote hosts from multiple local IPs? Is there another 16-bit limit in the system? The number of file descriptors, perhaps?

3 Answers

A single listening port can accept more than one connection simultaneously.

There is a ’64K’ limit that is often cited, but that is per client per server port, and needs clarifying.

Each TCP/IP packet has basically four fields for addressing. These are:

source_ip
source_port
destination_ip
destination_port

Inside the TCP stack, these four fields are used as a compound key to match up packets to connections (e.g. file descriptors).

If a client has many connections to the same port on the same destination, then three of those fields will be the same — only source_port varies to differentiate the different connections. Ports are 16-bit numbers, therefore the maximum number of connections any given client can have to any given host port is 64K.

However, multiple clients can each have up to 64K connections to some server’s port, and if the server has multiple ports or either is multi-homed then you can multiply that further.
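A quick worked example: one client IP can hold at most ~64K connections to one server IP and port, because only source_port is free to vary. If the server listens on two ports, the same client gets ~128K; if the server also has two IP addresses, ~256K; and every additional client IP brings its own ~64K. The uniqueness requirement is on the whole four-field tuple, not on any single field.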

So the real limit is file descriptors. Each individual socket connection is given a file descriptor, so the limit is really the number of file descriptors that the system has been configured to allow and resources to handle. The maximum limit is typically up over 300K, but is configurable e.g. with sysctl.
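For example, to inspect and raise the limits (values illustrative; run as root):

cat /proc/sys/fs/file-max     # system-wide maximum
sysctl -w fs.file-max=1000000 # raise the system-wide maximum
ulimit -n                     # per-process limit in the current shell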


The realistic limits boasted about for normal boxes are around 80K, for example for single-threaded Jabber messaging servers.


How many socket connections are possible?

Does anyone have an idea how many TCP socket connections are possible on a modern standard Linux server? (There is in general little traffic on each connection, but all the connections have to be up all the time.)

For Windows, see this question: Which is the maximum number of Windows concurrent tcp/ip connections? (stackoverflow.com/questions/413110/…)

8 Answers

I achieved 1600k concurrent idle socket connections, and at the same time 57k req/s, on a Linux desktop (16 GB RAM, i7-2600 CPU). It’s a single-threaded HTTP server written in C with epoll. Source code is on GitHub, along with a blog post about it.

I did 600k concurrent HTTP connections (client and server both on the same computer) with Java/Clojure. Detailed info is in the post and the HN discussion: http://news.ycombinator.com/item?id=5127251

The cost of a connection (with epoll):

  • the application needs some RAM per connection
  • TCP buffers: 2 × (4k–10k) each, or more
  • epoll needs some memory per file descriptor; from epoll(7):

Each registered file descriptor costs roughly 90 bytes on a 32-bit kernel, and roughly 160 bytes on a 64-bit kernel.
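If the goal is a huge number of mostly idle connections, the per-socket buffer minimums and defaults can be lowered via sysctl; the three values are min/default/max in bytes, and the ones below are illustrative, not recommendations:

sysctl -w net.ipv4.tcp_rmem="4096 4096 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 4096 16777216"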

@Bangash My comment has absolutely nothing to do with Erlang, or really anything other than the fact that leef posted a comment talking about 1 million socket connections on a single box, but this answer talks about 1.6 million — hence it seemed like a bit of a silly comment. Erlang is great — powers CouchDB. However, I don’t see how your comment has any relevance here.

This depends not only on the operating system in question, but also on configuration, potentially real-time configuration.

cat /proc/sys/fs/file-max

will show the current maximum number of file descriptors in total allowed to be opened simultaneously. Check out http://www.cs.uwaterloo.ca/~brecht/servers/openfiles.html

Just checked my Ubuntu (13.04) laptop: 386491. I doubt this will be the first limit I would run into.

A limit on the number of open sockets is configurable in the /proc file system:

cat /proc/sys/fs/file-max

The max for incoming connections in the OS is defined by integer limits.

Linux itself allows billions of open sockets.

To use the sockets you need an application listening, e.g. a web server, and that will use a certain amount of RAM per socket.

RAM and CPU will introduce the real limits. (As of 2017, think millions, not billions.)

1 million is possible, but not easy. Expect to use X gigabytes of RAM to manage 1 million sockets.

Outgoing TCP connections are limited by port numbers: ~65000 per IP. You can have multiple IP addresses, but not unlimited IP addresses. This is a limit in TCP, not Linux.
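A minimal sketch of how multiple local IPs raise the outbound limit: bind the socket to a specific source address before connect(), so each local IP gets its own ~64K source ports. The addresses below are hypothetical documentation addresses, and error handling is minimal:

#include <cstdio>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

// Connect to dst_ip:dst_port from a specific local source IP.
// Each distinct source IP has its own ~64K ephemeral source ports.
int connect_from(const char *src_ip, const char *dst_ip, int dst_port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_in src{};
    src.sin_family = AF_INET;
    src.sin_port = 0;                      // let the kernel pick the source port
    inet_pton(AF_INET, src_ip, &src.sin_addr);
    if (bind(fd, (sockaddr *)&src, sizeof(src)) < 0) { close(fd); return -1; }

    sockaddr_in dst{};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(dst_port);
    inet_pton(AF_INET, dst_ip, &dst.sin_addr);
    if (connect(fd, (sockaddr *)&dst, sizeof(dst)) < 0) { close(fd); return -1; }
    return fd;
}

int main()
{
    // Hypothetical aliases on the same interface; each multiplies the port space.
    int a = connect_from("192.0.2.10", "198.51.100.1", 80);
    int b = connect_from("192.0.2.11", "198.51.100.1", 80);
    printf("fds: %d %d\n", a, b);
    return 0;
}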

FreeBSD is probably the server you want. Here’s a little blog post about tuning it to handle 100,000 connections; it has had some interesting features like zero-copy sockets for some time now, along with kqueue to act as a completion port mechanism.


Solaris could handle 100,000 connections back in the last century! They say Linux would be better.

The best description I’ve come across is this presentation/paper on writing a scalable webserver. He’s not afraid to say it like it is 🙂

Same for software: the cretins on the application layer forced great innovations on the OS layer. Because Lotus Notes keeps one TCP connection per client open, IBM contributed major optimizations for the "one process, 100,000 open connections" case to Linux.

And the O(1) scheduler was originally created to score well on some irrelevant Java benchmark. The bottom line is that this bloat benefits all of us.

I stopped at 70,000 because it was more than my client required, so the test had been passed. With changes in how non-paged pool limits are calculated, I would imagine that a Windows Server 2008 machine would have no problem with 100,000 connections.

@BrianCline You probably don’t need this anymore, but I also wanted it and I think I found it: slideshare.net/Arbow/scalable-networking (slide 33)

On Linux you should be looking at using epoll for async I/O. It might also be worth fine-tuning socket-buffers to not waste too much kernel space per connection.

I would guess that you should be able to reach 100k connections on a reasonable machine.
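A minimal sketch of the epoll approach mentioned above: a level-triggered accept-and-read loop, with error handling trimmed for brevity (port 8080 is just an example):

#include <cstdio>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main()
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(lsock, (sockaddr *)&addr, sizeof(addr));
    listen(lsock, 1024);

    // One epoll instance watches the listening socket and every client.
    int ep = epoll_create1(0);
    epoll_event ev{};
    ev.events = EPOLLIN;
    ev.data.fd = lsock;
    epoll_ctl(ep, EPOLL_CTL_ADD, lsock, &ev);

    epoll_event events[1024];
    char buf[4096];
    while (true) {
        int n = epoll_wait(ep, events, 1024, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == lsock) {
                int c = accept(lsock, nullptr, nullptr);   // new connection
                ev.events = EPOLLIN;
                ev.data.fd = c;
                epoll_ctl(ep, EPOLL_CTL_ADD, c, &ev);
            } else if (read(fd, buf, sizeof(buf)) <= 0) {  // peer closed or error
                close(fd);  // closing also removes the fd from the epoll set
            }
        }
    }
}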

It depends on the application. If there are only a few packets from each client, 100K is very easy for Linux. An engineer on my team ran a test years ago; the result showed that when no packets arrived from clients after the connections were established, Linux epoll could watch 400k fds for readability at under 50% CPU usage.

For Windows machines, if you’re writing a server to scale well, and therefore using I/O Completion Ports and async I/O, then the main limitation is the amount of non-paged pool that you’re using for each active connection. This translates directly into a limit based on the amount of memory that your machine has installed (non-paged pool is a finite, fixed-size amount that is based on the total memory installed).

For connections that don’t see much traffic you can make them more efficient by posting ‘zero byte reads’, which don’t use non-paged pool and don’t affect the locked pages limit (another potentially limited resource that may prevent you from having lots of socket connections open).

Apart from that, well, you will need to profile, but I’ve managed to get more than 70,000 concurrent connections on a modestly specified (760 MB memory) server; see http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html for more details.

Obviously if you’re using a less efficient architecture such as ‘thread per connection’ or ‘select’, then you should expect to achieve less impressive figures; but, IMHO, there’s simply no reason to select such architectures for Windows socket servers.

