Limit on open files in Linux

On Linux — set maximum open files to unlimited. Possible?

Is it possible to set the maximum number of open files to some "infinite" value, or must it be a number? I had a requirement to set the descriptor limit for a daemon user to "unlimited" and I'm trying to determine whether that's possible and how to do it. I've seen some mailing lists refer to a "max" value that can be used (as in "myuser hard nofile max"), but so far the man pages and references I've consulted don't back that up. If I can't use 'max' or similar, I'd like to know how to determine what the maximum number of files is (theoretically) so I have some basis for whatever number I pick. I don't want to use 100000000 or something if there's a more reasonable way to get an upper bound. I'm using RHEL 5, if it's important.

Update: I'm an idiot when it comes to writing questions. Ideally I'd like to do this in the limits.conf file (which is where "max" would come from). Does that change any answers?

Thanks for the comments. This is for a JBoss instance, not a daemon I'm writing, so I don't know if setrlimit() is useful to me. However, Jefromi, I do like the definition of infinity 🙂 I saw a post that suggests a file descriptor is "two shorts and a pointer", so I should be able to calculate the approximate upper bound.

2 Answers

POSIX allows you to set the RLIMIT_NOFILE resource limit to RLIM_INFINITY using setrlimit(). What this means is that the system will not enforce this resource limit. Of course, you will still be limited by the implementation (e.g. MAXINT) and by any other resource limitations (e.g. available memory).

Update: RHEL 5 has a maximum value of 1048576 (2^20) for this limit (NR_OPEN in /usr/include/linux/fs.h), and will not accept any larger value, including infinity, even for root. So on RHEL 5 you can use this value in /etc/security/limits.conf, and that is as close as you are going to get to infinity.
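As a practical way to find that ceiling and pin a user to it, something like the following should work (a sketch: the user name jboss is only an example, and /proc/sys/fs/nr_open exists on kernels 2.6.25 and later, while RHEL 5 only has the compile-time NR_OPEN constant):

# per-process ceiling on newer kernels; on RHEL 5, check NR_OPEN in /usr/include/linux/fs.h instead
cat /proc/sys/fs/nr_open
1048576

# asking for more than the ceiling (or for "unlimited") fails with "Operation not permitted", even as root
ulimit -n unlimited

# /etc/security/limits.conf entries pinned to the ceiling
jboss    soft    nofile    1048576
jboss    hard    nofile    1048576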

Not long ago, a Linux kernel patch was applied to allow this limit to be set to infinity; however, it has since been reverted because of unintended consequences.


Potentially useful: Ubuntu 14.04 also has this 2^20 limit in /etc/security/limits.conf. Anything past that will reset the limit to the default of 2^10.


How to Increase Number of Open Files Limit in Linux

In Linux, you can change the maximum number of open files. You can modify this number with the ulimit command, which lets you control the resources available to the shell and to processes started by it.

In this short tutorial we will show you how to check your current limit of open files and file descriptors, but to do so, you will need root access to your system.

First, let's see how we can find out the maximum number of open file descriptors on your Linux system.

Find Linux Open File Limit

# cat /proc/sys/fs/file-max
818354

The number you see is the system-wide limit on the number of file handles the kernel will allocate. The result might be different depending on your system.

For example, on a CentOS server of mine the limit was set to 818354, while on an Ubuntu server that I run at home the default limit was set to 176772.

If you want to see the hard and soft limits, you can use the following commands:

Check Hard Limit in Linux

# ulimit -Hn
4096

Check Soft Limits in Linux

# ulimit -Sn
1024

To see the hard and soft values for different users, you can simply switch with “su” to the user whose limits you want to check.

# su marin
$ ulimit -Sn
1024
$ ulimit -Hn
4096

How to Check System-wide File Descriptor Limits in Linux

If you are running a server, some of your applications may require higher limits for open file descriptors. Good examples are the MySQL/MariaDB services or the Apache web server.

You can increase the limit of open files in Linux by editing the kernel directive fs.file-max. For that purpose, you can use the sysctl utility.

For example, to increase the open file limit to 500000, you can use the following command as root:
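# sysctl -w fs.file-max=500000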

You can check the current value for open files with the following command:
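# cat /proc/sys/fs/file-max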

With the above command, the changes you have made will only remain active until the next reboot. If you wish to apply them permanently, you will have to edit the following file:
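/etc/sysctl.conf

and add the following line to it:

fs.file-max=500000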

Of course, you can change the number per your needs. To verify the changes, run again:
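# cat /proc/sys/fs/file-max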

Читайте также:  Linux как начать изучать

Users will need to log out and log back in for the changes to take effect. If you want to apply the limit immediately, you can use the following command:
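# sysctl -p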

Set User-Level Open File Limits in Linux

The above examples showed how to set global limits, but you may want to apply limits on a per-user basis. For that purpose, as the root user, you will need to edit the following file:
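/etc/security/limits.conf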

If you are a Linux administrator, I suggest that you become very familiar with that file and what you can do with it. Read all of the comments in it, as it provides great flexibility in terms of managing system resources by limiting users or groups at different levels.

The lines that you should add take the following parameters:
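<domain>    <type>    <item>    <value>

Here domain is the user (or @group) the limit applies to, type is either soft or hard, and the item for the open file limit is nofile.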

Here is an example of setting soft and hard limits for the user marin:

## Example hard limit for max opened files
marin    hard    nofile    4096
## Example soft limit for max opened files
marin    soft    nofile    1024

Final thoughts

This brief article showed you a basic example of how you can check and configure global and user-level limits for the maximum number of open files.

While we have just scratched the surface, I highly encourage you to take a more detailed look at /etc/sysctl.conf and /etc/security/limits.conf and learn how to use them. They will be of great help to you one day.


How do I change the number of open files limit in Linux? [closed]

When running my application I sometimes get an error about too many open files. Running ulimit -a reports that the limit is 1024. How do I increase the limit above 1024?

Edit: ulimit -n 2048 results in a permission error.

I just went through this on CentOS 7 (same on RHEL) and made a blog post covering it because I had so much trouble even with all these posts: coding-stream-of-consciousness.com/2018/12/21/…. Often, along with open files, you need to increase nproc, which actually resides in multiple settings files. And if you use systemd/systemctl, that has its own separate settings. It's kind of nuts.

4 Answers

You could always try doing a ulimit -n 2048. This will only set the limit for your current shell, and the number you specify must not exceed the hard limit.

Each operating system has a different hard limit set up in a configuration file. For instance, the hard open file limit on Solaris can be set at boot time from /etc/system.

set rlim_fd_max = 166384
set rlim_fd_cur = 8192

On OS X, this same data must be set in /etc/sysctl.conf.

kern.maxfilesperproc=166384
kern.maxfiles=8192

Under Linux, these settings are often in /etc/security/limits.conf.
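For example, a pair of nofile lines such as the following (a sketch; the * domain applies to all users and the numbers are only illustrative):

*    soft    nofile    4096
*    hard    nofile    10240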


There are two kinds of limits:

  • soft limits are simply the currently enforced limits
  • hard limits mark the maximum value which cannot be exceeded by setting a soft limit

Soft limits can be set by any user (only up to the hard limit), while hard limits can only be raised by root. Limits are a property of a process: they are inherited when a child process is created, so system-wide limits should be set during system initialization in init scripts, and user limits should be set during user login, for example by using pam_limits.
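For reference, pam_limits is the PAM module that applies /etc/security/limits.conf at login; it is enabled with a line like the following in the relevant /etc/pam.d/ service file (which file varies by distribution):

session    required    pam_limits.so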

There are often defaults set when the machine boots. So, even though you may reset your ulimit in an individual shell, you may find that it resets back to the previous value on reboot. You may want to grep your boot scripts for the existence of ulimit commands if you want to change the default.


Largest allowed maximum number of open files in Linux

Is there a (technical or practical) limit to how large you can configure the maximum number of open files in Linux? Are there some adverse effects if you configure it to a very large number (say 1-100M)? I’m thinking server usage here, not embedded systems. Programs using huge amounts of open files can of course eat memory and be slow, but I’m interested in adverse effects if the limit is configured much larger than necessary (e.g. memory consumed by just the configuration).

In theory, you could calculate how many file-descriptors your system could handle based on available memory and the assertion that each fd consumes 1K of memory: serverfault.com/questions/330795/…
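As a rough back-of-the-envelope version of that calculation (the 8 GB machine size is only an assumption):

# 8 GB of RAM divided by ~1 KB of kernel memory per descriptor
echo $((8 * 1024 * 1024))
8388608

That is roughly 8 million descriptors before counting any other kernel overhead, which is why limits in the low millions are already generous.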

5 Answers

I suspect the main reason for the limit is to avoid excess memory consumption (each open file descriptor uses kernel memory). It also serves as a safeguard against buggy applications leaking file descriptors and consuming system resources.

But given how absurdly much RAM modern systems have compared to systems 10 years ago, I think the defaults today are quite low.

In 2011 the default hard limit for file descriptors on Linux was increased from 1024 to 4096.

Some software (e.g. MongoDB) uses many more file descriptors than the default limit. The MongoDB folks recommend raising this limit to 64,000. I’ve used an rlimit_nofile of 300,000 for certain applications.

As long as you keep the soft limit at the default (1024), it’s probably fairly safe to increase the hard limit. Programs have to call setrlimit() in order to raise their limit above the soft limit, and are still capped by the hard limit.
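In shell terms (ulimit is the shell's front-end to setrlimit(); the 65536 value is only illustrative), the pattern looks roughly like this:

# raise only the hard ceiling; this needs root or a matching hard nofile entry in limits.conf
ulimit -Hn 65536
# every program still starts with the default soft limit
ulimit -Sn
1024
# a process that needs more raises its own soft limit, capped by the hard limit
ulimit -Sn 65536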

See also some related questions:

