Linux socket open files

Socket accept — "Too many open files"

I am working on a school project where I had to write a multi-threaded server, and now I am comparing it to Apache by running some tests against it. I am using autobench to help with that, but after I run a few tests, or if I give it too high a rate (around 600+) at which to make connections, I get a "Too many open files" error. After I am done dealing with a request, I always call close() on the socket. I have tried using the shutdown() function as well, but nothing seems to help. Is there any way around this?

13 Answers

There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.

You can check the following:

cat /proc/sys/fs/file-max

That will give you the system-wide limit on file descriptors.

On the shell level, this will tell you your personal limit:

ulimit -n

This can be changed in /etc/security/limits.conf — it’s the nofile param.

However, if you’re closing your sockets correctly, you shouldn’t receive this unless you’re opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed appropriately. I would verify that they are being handled properly.

I had a similar problem. The quick solution is:

ulimit -n 4096

The explanation is as follows: each server connection is a file descriptor. In CentOS, Red Hat and Fedora, and probably others, the per-user file limit is 1024 — no idea why. It can easily be seen when you type: ulimit -n

Note that this does not have much relation to the system-wide maximum number of files (/proc/sys/fs/file-max).

In my case it was a problem with Redis, so I did:

ulimit -n 4096
redis-server -c xxxx

In your case, instead of Redis, you need to start your server.

It seems you do not understand the problem (or did you place the comment under the wrong answer?). It has to do with the file descriptor limit, and has nothing to do with memory or memory leaks.

@RafaelBaptista A high number of concurrent connections is actually needed in some cases, for instance a high-performance chat server. This does not have to be about leaking FDs.

@RafaelBaptista: if you have a server that can handle more than 512 parallel connections, you need A LOT MORE open files. Modern servers can handle millions of parallel connections, so having a limit as low as 1024 really does not make any sense. It might be okay as a default limit for casual users, but not for server software handling parallel client connections.

Use lsof -u `whoami` | wc -l to find how many open files the user has

TCP has a feature called "TIME_WAIT" that ensures connections are closed cleanly. It requires one end of the connection to stay listening for a while after the socket has been closed.


In a high-performance server, it’s important that it’s the clients who go into TIME_WAIT, not the server. Clients can afford to have a port open, whereas a busy server can rapidly run out of ports or have too many open FDs.

To achieve this, the server should never close the connection first — it should always wait for the client to close it.
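
A minimal sketch of that pattern (my own illustration, not from the original poster; error handling and the read timeout a real server would need are omitted):

#include <stddef.h>
#include <unistd.h>

/*
 * The server sends its reply and then waits for the client to close first.
 * read() returning 0 means the client's FIN has arrived, so the TIME_WAIT
 * state is held on the client side, and the server performs a passive close.
 */
void finish_request(int client_fd, const char *reply, size_t reply_len)
{
    write(client_fd, reply, reply_len);   /* send the response */

    char buf[512];
    while (read(client_fd, buf, sizeof(buf)) > 0)
        ;                                 /* discard anything else the client sends */

    close(client_fd);                     /* client closed first; release the FD */
}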

No. TCP TIME_WAIT will hold sockets open at the operating system level and eventually cause the server to reject incoming connections. When you close the file handle, it's closed. stackoverflow.com/questions/1803566/…

It’s true that the file handle closes immediately and I misspoke. But my main point still stands, because even though the FD is freed, the TCP port remains allocated during TIME_WAIT, and a busy server can run out of TCP ports, or spend too much kernel memory tracking them.

This error means that the maximum number of simultaneously open files has been exceeded.

At the end of the file /etc/security/limits.conf you need to add the following lines:

* soft nofile 16384
* hard nofile 16384

In the current console, as root (sudo does not work), run:

ulimit -n 16384

This step is not necessary if you have the option of rebooting the server instead.

In the /etc/nginx/nginx.conf file, set the new value of worker_connections equal to 16384 divided by the value of worker_processes.

If you did not run ulimit -n 16384, you need to reboot; then the problem will go away.

If, after this fix, the error accept() failed (24: Too many open files) is still visible in the logs,

then in the nginx configuration specify (for example):

worker_processes 2;
worker_rlimit_nofile 16384;

events {
    worker_connections 8192;
}

I had this problem too. You have a file handle leak. You can debug this by printing out a list of all the open file handles (on POSIX systems):

#include <fcntl.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <iostream>

using namespace std;

typedef int32_t s32;

void showFDInfo( s32 fd );

// Walk every possible descriptor and report the ones that are open.
void showFDInfo()
{
   s32 numHandles = getdtablesize();

   for ( s32 i = 0; i < numHandles; i++ )
   {
      s32 fd_flags = fcntl( i, F_GETFD );
      if ( fd_flags == -1 ) continue;      // not an open descriptor

      showFDInfo( i );
   }
}

// Print what a single descriptor refers to, via /proc/self/fd.
void showFDInfo( s32 fd )
{
   char buf[256];

   s32 fd_flags = fcntl( fd, F_GETFD );
   if ( fd_flags == -1 ) return;

   s32 fl_flags = fcntl( fd, F_GETFL );    // retrieved as in the original snippet;
   if ( fl_flags == -1 ) return;           // its decoding is not reproduced here

   char path[256];
   sprintf( path, "/proc/self/fd/%d", fd );

   memset( &buf[0], 0, 256 );
   ssize_t s = readlink( path, &buf[0], 255 );
   if ( s == -1 )
   {
      cerr << fd << " (" << path << "): not available" << endl;
      return;
   }
   cerr << fd << " (" << buf << ")" << endl;
}

By dumping out all the open files you will quickly figure out where your file handle leak is.
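
For example, a small hypothetical harness (my own, assuming the showFDInfo() helpers above are compiled into the same program) that opens a couple of descriptors and then dumps the table:

#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

void showFDInfo();   // provided by the snippet above

int main()
{
    int file_fd = open("/etc/hostname", O_RDONLY);   // a regular file
    int sock_fd = socket(AF_INET, SOCK_STREAM, 0);   // a socket

    showFDInfo();   // lists stdin/stdout/stderr plus the two descriptors above

    close(file_fd);
    close(sock_fd);
    return 0;
}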

If your server spawns subprocesses (e.g. if it is a 'fork'-style server, or if you spawn other processes, for example via CGI), you have to make sure to create your file handles with "cloexec" set, both for real files and also for sockets.

Without cloexec, every time you fork or spawn, all open file handles are cloned in the child process.
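
For illustration, a minimal sketch (my own, assuming Linux-specific SOCK_CLOEXEC and O_CLOEXEC support) of the usual ways to get close-on-exec descriptors:

#include <fcntl.h>
#include <sys/socket.h>

// Create the listening socket with close-on-exec set atomically.
int open_listener_cloexec(void)
{
    return socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
}

// For regular files, O_CLOEXEC does the same job.
int open_file_cloexec(const char *path)
{
    return open(path, O_RDONLY | O_CLOEXEC);
}

// For descriptors you already hold (e.g. the fd returned by accept()),
// the flag can be set after the fact with fcntl().
void set_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags != -1)
        fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}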

It is also really easy to fail to close network sockets — e.g. just abandoning them when the remote party disconnects. This will leak handles like crazy.


On macOS, check the current limits with launchctl limit maxfiles. The result looks something like: maxfiles 256 1000

If the numbers (soft limit & hard limit) are too low, you have to set them higher:

sudo launchctl limit maxfiles 65536 200000 

It can take a bit of time before a closed socket is really freed up.

Run cat /proc/sys/fs/file-max to see if there's a system-wide limit.

For future reference, I ran into a similar problem; I was creating too many file descriptors (FDs) by creating too many files and sockets (on Unix OSs, everything is a FD). My solution was to increase FDs at runtime with setrlimit() .

First I got the FD limits, with the following code:

#include <sys/resource.h>
#include <iostream>

// This goes somewhere in your code
struct rlimit rlim;

if (getrlimit(RLIMIT_NOFILE, &rlim) == 0) {
    std::cout << "Soft limit: " << rlim.rlim_cur << std::endl;
    std::cout << "Hard limit: " << rlim.rlim_max << std::endl;
} else {
    std::cout << "Unable to get file limits." << std::endl;
}

After running getrlimit() , I could confirm that on my system, the soft limit is 256 FDs, and the hard limit is infinite FDs (this is different depending on your distro and specs). Since I was creating > 300 FDs between files and sockets, my code was crashing.

In my case I couldn't decrease the number of FDs, so I decided to increase the FD soft limit instead, with this code:

// This goes somewhere in your code
struct rlimit rlim;

rlim.rlim_cur = NEW_SOFT_LIMIT;
rlim.rlim_max = NEW_HARD_LIMIT;

if (setrlimit(RLIMIT_NOFILE, &rlim) == -1) {
    std::cout << "Unable to set file limits." << std::endl;
}

Note that you can also get the number of FDs that you are using, and the source of these FDs; a sketch of one way to do this follows.
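
A minimal sketch of that idea (my own, assuming Linux's /proc filesystem): walk /proc/self/fd and readlink each entry to see what it points at.

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    DIR *d = opendir("/proc/self/fd");
    if (!d) return 1;

    int count = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;   // skip "." and ".."

        char link[PATH_MAX], target[PATH_MAX];
        snprintf(link, sizeof(link), "/proc/self/fd/%s", e->d_name);
        ssize_t n = readlink(link, target, sizeof(target) - 1);
        if (n >= 0) {
            target[n] = '\0';
            printf("fd %s -> %s\n", e->d_name, target);   // file, socket, pipe, ...
        }
        count++;
    }
    closedir(d);
    printf("%d descriptors open\n", count);   // includes the one held by opendir()
    return 0;
}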

Also, you can find more information on getrlimit() and setrlimit() here and here.


Why are TCP/IP sockets considered "open files"?

I need some assistance grasping what I'm sure is a fundamental concept in Linux: the limit on open files. Specifically, I'm confused about why open sockets can count towards the total number of "open files" on a system. Can someone please elaborate on the reason why? I understand that this probably goes back to the whole "everything is a file" principle in Linux, but any additional detail would be appreciated.

3 Answers

The limit on "open files" is not really just for files. It's a limit on the number of kernel handles a single process can use at one time. Historically, the only thing that programs would typically open a lot of were files, so this became known as a limit on the number of open files. The limit exists to help prevent processes from, say, opening a lot of files and accidentally forgetting to close them, which would eventually cause system-wide problems.

A socket connection is also a kernel handle. So the same limits apply for the same reasons - it's possible for a process to open network connections and forget to close them.
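
As a small illustration (a hypothetical snippet, not taken from the answer), the descriptor returned by socket() goes through the same generic calls as one returned by open(), and both count against the same per-process limit until they are closed:

#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int file_fd = open("/etc/hostname", O_RDONLY);   // a descriptor for a file
    int sock_fd = socket(AF_INET, SOCK_STREAM, 0);   // a descriptor for a socket

    printf("file fd = %d, socket fd = %d\n", file_fd, sock_fd);

    char buf[64];
    read(file_fd, buf, sizeof(buf));   // the same generic read() call works on both kinds
    close(file_fd);                    // both occupy a slot in the process's descriptor
    close(sock_fd);                    // table and count against RLIMIT_NOFILE until closed
    return 0;
}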

As noted in the comments, kernel handles are traditionally called file descriptors in Unix-like systems.


"Kernel handles" is a Windows terminology. You'd rather refer to "file descriptors" which is how these entities are generally called with Unix & Linux.

This answer hedges too much. Sockets are files. They provide access to streams of bytes through the read / write interface, which is the heart of what it means to be a file.

@WumpusQ.Wumbley, but then you have the shutdown(2) syscall on them, but not on files, and you can't read from a socket using cat -- that's the reason netcat was created. I'd say that (luckily) sockets in Unix-like kernels behave like files in terms of I/O, but the similarity ends right there. (Honestly, I'd also like to hear from someone with Plan 9 experience, as I've heard they took the unification of these things further than traditional unices.)

The "everything is a file" idea means that "file" is an abstract data type with many subtypes. Most of the subtypes support extra methods in addition to the basic stuff that all files support. Sockets have lots of extras. Block devices and regular files have seek. Directories are really weird (write doesn't work, and if read works, it's not useful). The presence of extra methods doesn't mean these things aren't part of the general category of things we call "files".

The reason why TCP/IP sockets use file descriptors is that, when the sockets interface was first designed and implemented (in BSD Unix, in 1983), its designers felt that a network connection was analogous to a file - you can read , write , and close both, and that it would fit well with the Unix idea of "everything is a file".

Other TCP/IP network stack implementations didn't necessarily integrate with their OS's file-I/O subsystem, an example being MacTCP. But because the BSD sockets interface was so popular, even these other implementations chose to replicate the socket API with its Unix-like functions, so you got "file descriptors", only used for TCP/IP communication, on systems that didn't otherwise have file descriptors.

The other part of your question is: why is there a limit? It's because the quickest way to implement a file descriptor lookup table is with an array. Historically, the limit was hard-coded into the kernel.

Here's the code in Unix release 7 (1979), with a hard-coded limit of 20 file descriptors per process:
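
The listing itself is missing from this copy of the answer; reconstructed from memory of the V7 headers (so treat names and comments as approximate), the relevant pieces looked roughly like this: the per-process limit is a compile-time constant, and the per-process open-file table is a fixed-size array indexed by the descriptor number.

#define NOFILE  20                    /* max open files per process */

struct file;                          /* the kernel's per-open-file bookkeeping */

struct user {                         /* per-process data, V7-style (abridged) */
    /* ... */
    struct file *u_ofile[NOFILE];     /* open-file table: the fd is the array index */
    /* ... */
};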

By comparison, Linux dynamically allocates space for a process's file descriptor table. The absolute limit defaults to 8192, but you can set this to whatever you like. My system lists 191072 in /proc/sys/fs/file-max .

Despite there being no absolute limit in Linux any more, nonetheless we don't want to let programs go crazy, so the administrator (or the distribution packager) generally sets resource limits. Take a look at /etc/security/limits.conf , or run ulimit -n .

