- How to display open file descriptors without using the lsof command
- 2 Answers
- How to Increase Number of Open Files Limit in Linux
- Find Linux Open File Limit
- Check Hard Limit in Linux
- Check Soft Limits in Linux
- How to Check System wide File Descriptors Limits in Linux
- Set User Level Open File limits in Linux
- Final thoughts
- Monitoring the number of open FDs per process efficiently?
- 3 Answers
How to display open file descriptors without using the lsof command
While a command such as lsof | wc -l displays the number of FDs, how do you display the list of open file descriptors that it just counted?
You probably want to know whether your ulimit is exceeded, right? I blogged about this at linuxintro.org/wiki/Is_my_ulimit_exceeded. Most importantly, the ulimit is a per-process restriction that you can see under /proc/PID/limits, and instead of lsof I would use ls /proc/PID/fd to list a process's file descriptors.
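As a quick illustration of that per-process view, here is a minimal sketch (using the current shell via $$; substitute any PID you are interested in) that compares the limit with the number of descriptors actually in use:

grep 'open files' /proc/$$/limits    # the per-process limit (soft and hard values)
ls /proc/$$/fd | wc -l               # how many descriptors this process has open right now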
2 Answers
There are two reasons lsof | wc -l doesn’t count file descriptors. One is that it lists things that aren’t open files, such as loaded dynamically linked libraries and current working directories; you need to filter them out. Another is that lsof takes some time to run, so it can miss files that are opened or closed while it’s running; the number of listed open files is therefore approximate. Looking at /proc/sys/fs/file-nr gives you an exact value at a particular point in time.
cat /proc/sys/fs/file-nr is only useful when you need the exact figure, mainly to check for resource exhaustion. If you want to list the open files, you need to call lsof, or use some equivalent method such as trawling /proc/*/fd manually.
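For reference, /proc/sys/fs/file-nr contains three numbers: allocated file handles, allocated-but-unused handles, and the system-wide maximum (fs.file-max). A minimal way to read it (the figures shown are only illustrative):

cat /proc/sys/fs/file-nr
# 3872    0       818354
awk '{ print $1 - $2 }' /proc/sys/fs/file-nr    # handles actually in use right now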
Hi, thanks for the good explanation, Gilles. I tried ls /proc/*/fd and got all the open FDs at that time. It’s producing output with some color coding; I’ll just have to look at the manual.
@dimas /proc/*/fd directories contain symbolic links to the open files. For visual inspection, use ls -l. For automated treatment, use readlink to extract the link target.
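For instance, a small loop along these lines (PID 1234 is only a placeholder) prints each descriptor number and its target:

for fd in /proc/1234/fd/*; do
    printf '%s -> %s\n' "${fd##*/}" "$(readlink "$fd")"
done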
I’ll just use ls -l, but I’ll experiment with readlink. I also tried /proc/PID/maps and other options as described at kernel.org/doc/man-pages/online/pages/man5/proc.5.html. Thanks again for the additional info.
/proc/sys/fs/file-nr gives me 3872 (and two other numbers). How can this be the count of files I have open if ulimit -n shows me 1024?
@ThorstenStaerk All settings of setrlimit (the system call underlying the ulimit shell command) are per-process. They affect only the process that makes the call (and indirectly the processes that it later forks).
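A small demonstration of that per-process behaviour (assuming the hard limit allows raising the soft limit to 2048):

bash -c 'ulimit -n 2048; ulimit -n'    # prints 2048; only the child shell is affected
ulimit -n                              # the parent shell still reports its original value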
Process information is kept dynamically by the system in directories under /proc. For example the process with PID 1234 will have a directory called /proc/1234.
There is quite a bit of information in there, but right now you are interested in the /proc/1234/fd subdirectory.
NOTE: You need to have root permissions to view or open files for processes that you do not own, as well as for SetUID processes.
root@johan-HP-ProBook-6560b-LG654EA-ACQ:/proc# ls -l 2443/fd
total 0
lr-x------ 1 johan johan 64 Feb 27 10:26 0 -> pipe:[13637]
l-wx------ 1 johan johan 64 Feb 27 10:26 1 -> /home/johan/.xsession-errors
lrwx------ 1 johan johan 64 Feb 27 10:26 10 -> anon_inode:[eventfd]
lrwx------ 1 johan johan 64 Feb 27 10:26 11 -> anon_inode:[eventfd]
lrwx------ 1 johan johan 64 Feb 27 10:26 12 -> socket:[39495]
lrwx------ 1 johan johan 64 Feb 27 10:26 13 -> anon_inode:[eventfd]
lr-x------ 1 johan johan 64 Feb 27 10:26 14 -> anon_inode:inotify
lrwx------ 1 johan johan 64 Feb 27 10:26 15 -> anon_inode:[eventfd]
l-wx------ 1 johan johan 64 Feb 27 10:26 16 -> pipe:[37885]
lr-x------ 1 johan johan 64 Feb 27 10:26 17 -> pipe:[37886]
l-wx------ 1 johan johan 64 Feb 27 10:26 2 -> /home/johan/.xsession-errors
l-wx------ 1 johan johan 64 Feb 27 10:26 21 -> pipe:[167984]
lr-x------ 1 johan johan 64 Feb 27 10:26 22 -> pipe:[167985]
l-wx------ 1 johan johan 64 Feb 27 10:26 23 -> pipe:[170009]
lr-x------ 1 johan johan 64 Feb 27 10:26 24 -> pipe:[170010]
lrwx------ 1 johan johan 64 Feb 27 10:26 3 -> anon_inode:[eventfd]
lr-x------ 1 johan johan 64 Feb 27 10:26 4 -> pipe:[14726]
lrwx------ 1 johan johan 64 Feb 27 10:26 5 -> socket:[14721]
l-wx------ 1 johan johan 64 Feb 27 10:26 6 -> pipe:[14726]
lrwx------ 1 johan johan 64 Feb 27 10:26 7 -> socket:[14730]
lrwx------ 1 johan johan 64 Feb 27 10:26 8 -> socket:[13984]
lrwx------ 1 johan johan 64 Feb 27 10:26 9 -> socket:[14767]
root@johan-HP:/proc# cat 2443/fdinfo/2
pos:    1244446
flags:  0102001
Also have a look at the rest of the files under /proc; a lot of useful information about the system resides there.
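For example, reusing the PID from the listing above, a few other per-process files worth a look (a non-exhaustive sketch):

cat /proc/2443/status    # process state, memory usage, FDSize, ...
cat /proc/2443/limits    # the resource limits in effect for this process
cat /proc/2443/maps      # memory mappings, including loaded shared libraries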
How to Increase Number of Open Files Limit in Linux
In Linux, you can change the maximum number of open files. You can modify this number with the ulimit command, which lets you control the resources available to the shell or to processes it starts.
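For example:

ulimit -a    # show every limit currently in effect for this shell
ulimit -n    # show only the maximum number of open file descriptors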
In this short tutorial we will show you how to check your current limits for open files and file descriptors, but to change them, you will need root access to your system.
First, let's see how to find out the maximum number of open file descriptors on your Linux system.
Find Linux Open File Limit
# cat /proc/sys/fs/file-max
818354
The number you see is the system-wide maximum number of file handles the kernel will allocate, not a per-user or per-session value. The result might be different depending on your system.
For example on a CentOS server of mine, the limit was set to 818354, while on Ubuntu server that I run at home the default limit was set to 176772.
If you want to see the hard and soft limits, you can use the following commands:
Check Hard Limit in Linux
# ulimit -Hn
4096
Check Soft Limits in Linux
# ulimit -Sn
1024
To see the hard and soft values for a different user, you can simply switch with “su” to the user whose limits you want to check.
# su marin
$ ulimit -Sn
1024
$ ulimit -Hn
4096
How to Check System wide File Descriptors Limits in Linux
If you are running a server, some of your applications may require higher limits for open file descriptors. Good examples are the MySQL/MariaDB services or the Apache web server.
You can increase the limit of open files in Linux by changing the kernel parameter fs.file-max. For that purpose, you can use the sysctl utility.
For example, to increase the open file limit to 500000, you can run the following command as root:
# sysctl -w fs.file-max=500000
You can check the current value for open files with the following command:
# cat /proc/sys/fs/file-max
With the above command the change will only remain active until the next reboot. If you wish to apply it permanently, you will have to add the setting to /etc/sysctl.conf:
fs.file-max = 500000
Of course, you can change the number to suit your needs. To verify the changes, again use:
# cat /proc/sys/fs/file-max
Settings in /etc/sysctl.conf are normally applied at the next reboot. If you want to apply the new limit immediately, you can use the following command:
# sysctl -p
Set User Level Open File limits in Linux
The above examples showed how to set global limits, but you may also want to apply limits on a per-user basis. For that purpose, as the root user, you will need to edit the following file:
/etc/security/limits.conf
If you are a Linux administrator, I suggest you become very familiar with that file and what you can do with it. Read all of the comments in it, as it provides great flexibility for managing system resources by limiting users and groups at different levels.
The lines that you add take the following form:
<domain> <type> <item> <value>
Here is an example of setting soft and hard limits for the user marin:
## Example hard limit for max opened files
marin hard nofile 4096
## Example soft limit for max opened files
marin soft nofile 1024
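Once marin starts a new login session, the new values should be in effect; one quick way to verify them, run as root, might be:

su - marin -c 'ulimit -Sn; ulimit -Hn'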
Final thoughts
This brief article showed you a basic example of how to check and configure global and user-level limits for the maximum number of open files.
While we have only scratched the surface, I highly encourage you to take a more detailed look at /etc/sysctl.conf and /etc/security/limits.conf and learn how to use them. They will be of great help to you one day.
Monitoring the number of open FDs per process efficiently?
I want to be able to monitor the number of open files in Linux. Currently I am counting the number of files in /proc/
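Such a count might look something like the following shell loop (a sketch of the approach only), which is exactly the kind of per-process /proc walk whose cost is the concern:

for p in /proc/[0-9]*; do
    printf '%s %s\n' "${p##*/}" "$(ls "$p/fd" 2>/dev/null | wc -l)"
done | sort -k2,2 -rn | head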
Maybe look at SystemTap or the like and sample instead of getting an exact count? Iterating over a million FDs is probably going to be slow inside the kernel, too.
@thrig Ya, systemtap might be able to pull it from the task struct? Since Linux enforces file limits, it must track this sanely somehow somewhere.
I’d expect the kernel to track the number of open files per user (for RLIMIT_NOFILE) and altogether, but to list the number of open files for each process, there may be no better way than trawling /proc. You can get an approximate value from FDSize in /proc/$pid/status, if you have processes with many files open.
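For example (FDSize is the allocated size of the descriptor table, so it is an upper bound rather than an exact count):

grep FDSize /proc/$$/status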
How are you doing the count? Are you accidentally stat-ing each file? I can count the number of files in a directory with ~200K files in less than a second:
$ time ls | wc -l
210409
real    0m0.278s
3 Answers
For a dedicated load balancer I would track the total number of files open in the system, instead of wasting I/O and CPU resources counting them per process. The open files belonging to other, irrelevant processes are meaningless for the intended result.
To know the number of files open globally on a Linux system, there is no need to count them; the Linux kernel already keeps track of how many files it has open.
This is much more efficient than counting all the open files from the output of lsof, which has to walk all the /proc/$PID/fd directories and will negatively affect your system's I/O and CPU resources.
cat /proc/sys/fs/file-nr will tell you the number of files open in the system.
lsof does what you’re doing, reading
@KyleBrandt you can always prepend the command with time to measure the execution time to check which is faster.
@KyleBrandt From vague memory, reading /proc/pid/fd is the only way to get this info, which is an open() and read() syscall per file — which in turn reads a set of kernel structs and returns the answer. You may be able to recompile lsof to shortcut the pipe to ‘wc’ and print out the total, but I don’t think you’re going to save much by doing that. You’re constrained by the hardware. Would reducing the load on each machine be possible?
How efficient is this SystemTap code for your use case? It’s not a perfect view as it only tracks changes from when it begins (so anything opened prior to the start would be missed), and will need additional work to make the output more legible or suitable.
global procfdcount

probe begin {
    printf("begin trace.\n\n")
}

probe syscall.open {
    procfdcount[pid()]++
}

probe syscall.close {
    p = pid()
    procfdcount[p]--
    if (procfdcount[p] < 0) {
        procfdcount[p] = 0
    }
}

probe kprocess.exit {
    p = pid()
    if (p in procfdcount) {
        delete procfdcount[p]
    }
}

probe timer.s(60) {
    foreach (p in procfdcount- limit 20) {
        printf("%d %lu\n", p, procfdcount[p])
    }
    printf("\n")
}
... (install systemtap here)
# stap-prep
... (fix any reported systemtap issues here)
# stap procfdcount.stp
The downside of this method is the need to identify all "open files" (sockets, etc.) and then to adjust the count via the appropriate system call hooks (if available); the above only tracks files opened via the open system call. Another option would be to call task_open_file_handles for tasks that get onto the CPU, and to display the most recent of those counts periodically.
global taskopenfh

probe begin {
    printf("begin trace.\n\n");
}

probe scheduler.cpu_on {
    p = pid();
    if (p == 0) next;
    taskopenfh[p] = task_open_file_handles(pid2task(p));
}

probe timer.s(60) {
    foreach (p in taskopenfh-) {
        printf("%d %lu\n", p, taskopenfh[p]);
    }
    delete taskopenfh;
    printf("\n");
}
This would, however, miss anything that was not on the CPU; a full walk of the processes and then their tasks would be necessary for a complete list, though that might be too slow or too expensive if you have millions of FDs.
Also, these probes do not appear to be stable, so maybe eBPF or something similar in the future? E.g. the second one on CentOS 7 blows up after some time with:
ERROR: read fault [man error::fault] at 0x0000000000000008 (((&(fs->fdt)))) near identifier 'task_open_file_handles' at /usr/share/systemtap/tapset/linux/task.stp:602:10