Linux file descriptor limits


Increasing ulimit and the file descriptor limit on Linux

Getting “too many open files” errors? Here is how to increase the ulimit and file descriptor settings on Linux.

file-max is the maximum number of file handles (often loosely called file descriptors, FDs) that the kernel will allocate. It is a kernel setting enforced at the system level. ulimit is enforced at the user level and should be configured to be less than file-max.

The default settings for ulimit and file-max on a Linux system assume that several users (not applications) will share the system, so they cap the resources each user can consume. These defaults are far too low for high-performance servers and should be increased.

To change the file descriptor setting, edit the kernel parameter file /etc/sysctl.conf and add the line fs.file-max=[new value] to it.

# vi /etc/sysctl.conf
fs.file-max = 500000
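The change takes effect at boot; to apply it immediately, reload the file with sysctl (a quick sketch, assuming you are running as root):

# sysctl -p
fs.file-max = 500000

You can confirm the running value with:

# sysctl fs.file-max
fs.file-max = 500000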

To change the ulimit setting, edit the file /etc/security/limits.conf and set the hard and soft limits.

# vi /etc/security/limits.conf
* soft nofile 60000
* hard nofile 60000
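Entries in limits.conf take the form domain type item value, so the limit can also be scoped to one account instead of the * wildcard. A sketch, assuming a hypothetical user named webapp:

webapp soft nofile 60000
webapp hard nofile 60000

These limits are applied by pam_limits when a new session starts, so they only take effect for fresh logins.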

Log out and log back in (limits.conf settings are applied at login), then test the new settings using the following commands:

# ulimit -a
open files (-n) 60000

Check the current open file descriptor limit:

# more /proc/sys/fs/file-max
500000

Another way to check the file descriptor limit:

# sysctl -a | grep fs.file-max
fs.file-max = 500000

To find out how many file descriptors are currently being used:
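# cat /proc/sys/fs/file-nr
1088 0 331287

The first of the three fields is the number of allocated file handles; the sample output shown here reuses the figures from the question quoted below, where all three fields are explained.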


To find out how many files are currently open:
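# lsof | wc -l

Treat the result as an estimate: lsof prints one line per descriptor per process and also lists entries such as memory-mapped files, so it overcounts somewhat.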


How do Linux file descriptor limits work?

I was told that my server refusing to accept client network connections on a specific port could be due to a lack of file descriptors. I looked up what this is all about and read about it here: http://www.netadmintools.com/art295.html So I tested my system and got this:

cat /proc/sys/fs/file-nr
1088 0 331287

What does this mean? The second column stays at 0 even after I shut down my server; it is 0 even right after a boot!

3 Answers

You want to look at /proc/sys/fs/file-max instead.

From recent linux/Documentation/sysctl/fs.txt:

The kernel allocates file handles dynamically, but as yet it doesn’t free them again.

The value in file-max denotes the maximum number of file-handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit.

Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles — this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles.

Attempts to allocate more file descriptors than file-max are reported with printk; look for "VFS: file-max limit reached".

EDIT: the underlying error is probably not the system running out of global file descriptors, but just your process hitting its own limit. It also seems likely that the problem is the size limit of select() (FD_SETSIZE, typically 1024 descriptors).
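To rule the per-process limit in or out, you can inspect the running server directly. A sketch, where <pid> stands for your server's process ID:

grep 'open files' /proc/<pid>/limits   # the per-process soft and hard nofile limits
ls /proc/<pid>/fd | wc -l              # how many descriptors the process holds right now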



Limits on the number of file descriptors

I understand file descriptors, but I don’t understand soft and hard limits. When I run cat /proc/sys/fs/file-max , I get back 590432 . This should imply that I can open up to 590432 files (i.e. have up to 590432 file descriptors). But when I run ulimit , it gives me different results:

$ ulimit
unlimited
$ ulimit -Hn    # Hard limit
4096
$ ulimit -Sn    # Soft limit
1024

But what are the hard and soft limits from ulimit , and how do they relate to the number stored in /proc/sys/fs/file-max ?

2 Answers

According to the kernel documentation, /proc/sys/fs/file-max is the maximum, total, global number of file handles the kernel will allocate before choking. This is the kernel’s limit, not your current user’s. So you can open 590432, provided you’re alone on an idle system (single-user mode, no daemons running).

File handles (struct file in the kernel) are different from file descriptors: multiple file descriptors can point to the same file handle, and file handles can also exist without an associated descriptor internally. No system-wide file descriptor limit is set; this can only be mandated per process.

Note that the documentation is out of date: the file has been /proc/sys/fs/file-max for a long time. Thanks to Martin Jambon for pointing this out.
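The handle/descriptor distinction above is easy to see from a shell, since duplicating a descriptor yields two descriptors referring to the same open file description (the kernel’s struct file). A small sketch:

$ exec 3< /etc/hostname    # open a file on descriptor 3: one file handle
$ exec 4<&3                # duplicate it: descriptors 3 and 4 now share that handle
$ ls -l /proc/$$/fd/3 /proc/$$/fd/4    # both symlinks point at /etc/hostname
$ exec 3<&- 4<&-           # close both descriptors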

The difference between soft and hard limits is answered here, on SE. You can raise or lower a soft limit as an ordinary user, provided you don’t overstep the hard limit. You can also lower a hard limit (but you can’t raise it again for that process). As the superuser, you can raise and lower both hard and soft limits. The dual limit scheme is used to enforce system policies, but also to allow ordinary users to set temporary limits for themselves and later change them.
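A concrete sketch of those rules, starting from the soft limit of 1024 and hard limit of 4096 shown in the question (run it in a throwaway shell, since lowering the hard limit can’t be undone for that process; the exact error wording may vary by shell):

$ ulimit -Sn 2048    # ordinary user: raising the soft limit up to the hard limit is fine
$ ulimit -Hn 2048    # lowering the hard limit works, but is irreversible here
$ ulimit -Sn 4096    # now above the hard limit, so this fails with EINVAL
bash: ulimit: open files: cannot modify limit: Invalid argument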


Note that if you try to lower a hard limit below the soft limit (and you’re not the superuser), you’ll get EINVAL back (Invalid Argument).

So, in your particular case, ulimit (which is the same as ulimit -Sf ) says you don’t have a soft limit on the size of files written by the shell and its subprocesses, which is probably a good idea in most cases.

Your other invocation, ulimit -Hn , reports the -n limit (maximum number of open file descriptors), not the -f limit, which is why the soft limit seems higher than the hard limit. If you enter ulimit -Hf you’ll also get unlimited .

