Linux: too many open files

How do I change the number of open files limit in Linux? [closed]

When running my application I sometimes get an error about too many open files. Running ulimit -a reports that the limit is 1024. How do I increase the limit above 1024? Edit: ulimit -n 2048 results in a permission error.

I just went through this on CentOS 7 (same on RHEL) and made a blog post covering it because I had so much trouble even with all these posts: coding-stream-of-consciousness.com/2018/12/21/…. Often, along with open files, you need to increase nproc, which actually resides in multiple settings files. And if you use systemd/systemctl, that has its own separate settings. It’s kind of nuts.

4 Answers

You could always try doing a ulimit -n 2048. This will only reset the limit for your current shell, and the number you specify must not exceed the hard limit.
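As a quick sketch (the values shown are illustrative), you can check the hard limit first and then raise the soft limit up to it:

$ ulimit -Hn        # show the hard limit for this shell
1048576
$ ulimit -n 2048    # raise the soft limit; must not exceed the hard limit
$ ulimit -n
2048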

Each operating system has a different hard limit set up in a configuration file. For instance, the hard open file limit on Solaris can be set on boot from /etc/system.

set rlim_fd_max = 166384
set rlim_fd_cur = 8192

On OS X, this same data must be set in /etc/sysctl.conf.

kern.maxfilesperproc=166384
kern.maxfiles=8192

Under Linux, these settings are often in /etc/security/limits.conf.

There are two kinds of limits:

  • soft limits are simply the currently enforced limits
  • hard limits mark the maximum value which cannot be exceeded by setting a soft limit

Soft limits can be set by any user, while hard limits are changeable only by root. Limits are a property of a process. They are inherited when a child process is created, so system-wide limits should be set during system initialization in init scripts, and user limits should be set during user login, for example by using pam_limits.
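As an illustrative sketch (the username and values are hypothetical), a pam_limits entry in /etc/security/limits.conf could look like this:

# /etc/security/limits.conf
# <domain>  <type>  <item>   <value>
alice       soft    nofile   4096
alice       hard    nofile   10240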


There are often defaults set when the machine boots. So, even though you may reset your ulimit in an individual shell, you may find that it resets back to the previous value on reboot. You may want to grep your boot scripts for the existence of ulimit commands if you want to change the default.
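For example (the paths below are just common locations and may differ on your distribution):

$ grep -r ulimit /etc/init.d /etc/profile /etc/profile.d 2>/dev/null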


Fixing the “Too many open files” Error in Linux

If your Linux experience has ever exposed you to high-load server environments, then there is a high chance you’ve crossed paths with the infamous “too many open files” error.

This Linux error simply means that a process has opened too many files (file descriptors) and cannot open any more because the maximum open file limit has been reached. Each system user and process in a Linux environment is assigned an open file limit, and its default value is rather small.

This article will investigate the cause and cure of the “too many open files” error that is prominent in Linux operating system environments.

Understanding Linux File Descriptors

A Linux operating system represents almost everything as a file (partitioned disk drives, network sockets, and regular data files alike). In Linux, once a file is opened, it is identified by a non-negative integer called a file descriptor.

Open file descriptors are tabulated and linked to the process responsible for their existence. Whenever a new file is opened, it is immediately appended as a new entry in the process’s table of open file descriptors.

For instance, consider the basic usage of the Linux cat command to open a simple text file. The filename is passed as an argument to the open() system call, which returns the file descriptor assigned to the file.


The cat command then uses the assigned file descriptor to interact with the file (display its content). When the user is done viewing the file, the close() system call is used to finally close the file.

At any given time, three standard file descriptors (all of them open by default) are linked to every process. These three file descriptors are identified by their own unique notations and are as follows:

  • 0 (stdin) – standard input
  • 1 (stdout) – standard output
  • 2 (stderr) – standard error
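As a quick sketch, you can list a shell’s descriptors through procfs (the output below is illustrative and will vary):

$ ls /proc/$$/fd    # $$ expands to the current shell's PID; 255 is bash-specific
0  1  2  255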

Examining Linux Open File Descriptors

Under Linux file management, it is possible to see which file descriptors the various system processes hold, and how many, in order to transparently account for used system resources.

Global Usage of File Descriptors

We can check the first field of the /proc/sys/fs/file-nr file (the number of allocated file handles) with the following awk one-liner:

$ awk '{print $1}' /proc/sys/fs/file-nr
12832

Check File Descriptors

Per-Process Usage of File Descriptors

To check a process’s file descriptor usage, we first need to find the targeted process’s ID and then pass it to the lsof command. For instance, the per-process usage of the cat command’s file descriptors can be determined in the following manner:

First, run the ps aux command to determine the process id:

Then implement the lsof command to determine the per-process file descriptors.
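A minimal sketch of the two steps (the PID 3231 and the process name are illustrative):

$ ps aux | grep cat    # note the PID in the second column
$ lsof -p 3231         # list the file descriptors held by that PID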

Check Per Process File Descriptors

The NAME column points to the file descriptor’s exact file and the TYPE column points to the file type.

Check Linux File Descriptor Limits

A file descriptor limit is the maximum number of files a process is allowed to open at any given time. It can be either a soft limit or a hard limit. An unprivileged user can raise the soft limit only up to the hard limit and can only lower the hard limit, while a privileged user can fully change both.


Per-Session Limit

To check the soft limit, use the -Sn flag together with the ulimit command.

To check the hard limit, use the -Hn flag together with the ulimit command.

Check Linux Open File Limits
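As a quick sketch (the reported values are illustrative; 1024 soft and 1048576 hard are common defaults):

$ ulimit -Sn    # soft limit
1024
$ ulimit -Hn    # hard limit
1048576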

Per-Process Limit

Here, we first retrieve a process’s PID and pass it as a variable in the procfs filesystem.

$ pid=3231
$ grep "Max open files" /proc/$pid/limits

Check Linux File Limits Per Process
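The matching line typically resembles the following (values illustrative):

Max open files            1024                 1048576              files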

The second and third columns point to the soft and hard limits respectively.

Global Limit

The global limit determines the total number of file descriptors available system-wide, for all processes combined. It is exposed through the /proc/sys/fs/file-max kernel parameter.

Check Linux Global Limit
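A simple way to read it (the value shown is illustrative and varies by system):

$ cat /proc/sys/fs/file-max
9223372036854775807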

Fixing “Too Many Open Files” Error in Linux

Now that we have understood the role of file descriptors, we can fix the "too many open files" error by increasing the file descriptor limits in Linux.

Temporarily (Per-Session)

This is a recommended approach. Here, we will use the ulimit -n command.

Change Linux Soft Limit
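A minimal sketch of the per-session change (4096 is just an example value below the hard limit):

$ ulimit -n 4096
$ ulimit -n
4096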

We have changed the soft limit from 1024 to 4096.

To raise the limit beyond the hard limit value of 1048576, we will need privileged user access.

Cannot Modify Limit Operation Not Permitted
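Attempting to exceed the hard limit as an unprivileged user fails with an error along these lines (the exact wording depends on your shell):

$ ulimit -n 2000000
bash: ulimit: open files: cannot modify limit: Operation not permitted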

Increase Per-User Limit

Here, we need to first open the /etc/security/limits.conf file.

$ sudo nano /etc/security/limits.conf

You can append entries like the following:

    * soft nofile 5000
    * hard nofile 2000000

Here, nofile is the item that associates these entries with the open file descriptor limit; you can also replace the * wildcard with a specific username to restrict the change to that user.

Increase Per User File Descriptors Limit

The changes above apply to all users. You will need to log in again or restart your system for the changes to take effect.

Globally Increase Open File Limit

Open the /etc/sysctl.conf file.

Append the following line with your desired file descriptor value.

Increase Linux File Descriptor Limit
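The relevant sysctl key is fs.file-max; an illustrative entry (choose a value that suits your workload) would be:

fs.file-max = 2000000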

Save the file and reload the configuration:
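Assuming the setting was appended to /etc/sysctl.conf, it can be reloaded with:

$ sudo sysctl -p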

Restart your system or re-login.

We are now comfortable with handling the “Too many open files” error on our Linux systems.

