How to find the number of open files per process
We have a Kafka service (managed as a systemd service), and in that service we configured the number of open files, for example:
[Service]
LimitMEMLOCK=infinity
LimitNOFILE=1500000
Type=forking
User=root
Group=kafka
Now that the service is up, we want to understand how many files the Kafka service is consuming. From googling, I understand from https://www.cyberciti.biz/faq/howto-linux-get-list-of-open-files/ that we can use the fstat command to capture the number of open files.

Since we are using a secured production RHEL 7.6 server, it is not clear whether fstat can be installed on it, so we would appreciate other ideas. Another suggested approach is ls /proc/$pid/fd, and here is a real example from my machine:
ls /proc/176909/fd | more
0
1
10
100
1000
10000
10001
10002
10003
10004
10005
10006
10007
10008
10009
1001
10010
10011
10012
...
The link you gave mentions fstat for FreeBSD and ls /proc/$pid/fd for Linux. RHEL is a Linux distribution, not FreeBSD.
Run ls -l on that directory. You will see that those are symlinks to various files, devices, sockets, etc. You could do ls -1 /proc/176909/fd | wc -l: ls -1 /proc/176909/fd puts every listed item on its own line in one column, and wc -l counts the number of lines, so this gives you the total number of items in that directory. I think you should just keep reading the link you posted, as it says all of this.
2 Answers
As mentioned in the comments, you can use the wc command:

ls /proc/$pid/fd | wc -l

where -l counts the number of lines (here, the output of the ls command).
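Applied to the Kafka case, a minimal sketch that first resolves the service's main PID and then compares the fd count with the limit the process was actually given (this assumes the unit really is named kafka; adjust to your unit name):

# Resolve the main PID of the systemd unit (unit name "kafka" is an assumption):
pid=$(systemctl show -p MainPID kafka | cut -d= -f2)
# Count the fds currently open and show the configured limit:
ls -1 "/proc/$pid/fd" | wc -l
grep 'Max open files' "/proc/$pid/limits"

Reading /proc/$pid/limits requires no extra packages, which suits a locked-down RHEL server.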
The LimitNOFILE directive in systemd (see man systemd.exec) corresponds to the RLIMIT_NOFILE resource limit as set with setrlimit() (see man setrlimit). It can be set in some shells with ulimit -n or limit descriptors.
This specifies a value one greater than the maximum file descriptor number that can be opened by this process. Attempts (open(2), pipe(2), dup(2), etc.) to exceed this limit yield the error EMFILE. (Historically, this limit was named RLIMIT_OFILE on BSD.)
So, strictly speaking, it is not the limit on the number of open file descriptors (let alone open files): a process with that limit could have more open files if it had fds above the limit before the limit was set (or inherited upon creation with clone()/fork()), and conversely it cannot be given an fd above the limit even if it has very few fds open.
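For illustration, you can inspect or adjust a running process's RLIMIT_NOFILE with util-linux's prlimit; a quick sketch, with $pid as a placeholder:

prlimit --nofile --pid "$pid"              # show the soft and hard RLIMIT_NOFILE
prlimit --nofile=1024:4096 --pid "$pid"    # change them for the running process

Lowering the limit this way does not close fds the process already holds above the new soft limit, which is exactly the asymmetry described above.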
On Linux, /proc/<pid>/fd is a special directory that contains one magic symlink file for each fd that the process has open.
You can get their number by counting them:

() { echo $# } /proc/$pid/fd/*(NoN)

in zsh for instance (or ls /proc/$pid/fd | wc -l as already shown by Romeo).
You can get the highest fd value by sorting them numerically in reverse and taking the first, for instance with GNU ls:

ls -rv "/proc/$pid/fd" | head -n1
To get a report of the number of open fds for all processes, you could do something like:

(for p (/proc/<->) () { print -r -- $# ${p:t} } $p/fd/*(NoN)) | sort -n
More portably, you could resort to lsof:
lsof -ad0-2147483647 -Ff -p "$pid" | grep -c '^f'
for the number of open file descriptors, and:

lsof -ad0-2147483647 -Ff -p "$pid" | sed -n '$s/^f//p'

for the highest open fd number.
How to Increase Number of Open Files Limit in Linux
In Linux, you can change the maximum number of open files. You can modify this number with the ulimit command, which lets you control the resources available to the shell or to processes started from it.
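For example, in an interactive shell you can print the current soft limit and raise it for that shell session (up to the hard limit); the values shown will vary per system:

$ ulimit -n
1024
$ ulimit -n 4096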
In this short tutorial we will show you how to check your current limit of open files and file descriptors, but to do so, you will need root access to your system.
First, let's see how we can find out the maximum number of open file descriptors on your Linux system.
Find Linux Open File Limit
# cat /proc/sys/fs/file-max
818354
The number you see is the system-wide maximum number of file handles the kernel will allocate. The result might be different depending on your system.
For example, on a CentOS server of mine the limit was set to 818354, while on an Ubuntu server that I run at home the default limit was set to 176772.
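Relatedly, /proc/sys/fs/file-nr reports current usage against that ceiling: the number of allocated file handles, the number of allocated-but-unused handles, and the maximum (the same value as fs.file-max). Illustrative output, with made-up usage numbers:

# cat /proc/sys/fs/file-nr
1632    0    818354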
If you want to see the hard and soft limits, you can use the following commands:
Check Hard Limit in Linux
# ulimit -Hn
4096
Check Soft Limits in Linux
# ulimit -Sn
1024
To see the hard and soft values for different users, you can simply switch with su to the user whose limits you want to check.
# su marin
$ ulimit -Sn
1024
$ ulimit -Hn
4096
How to Check System wide File Descriptors Limits in Linux
If you are running a server, some of your applications may require higher limits for open file descriptors. Good examples are MySQL/MariaDB services or the Apache web server.
You can increase the limit of open files in Linux by editing the kernel directive fs.file-max. For that purpose, you can use the sysctl utility.
For example, to increase the open file limit to 500000, you can use the following command as root:

# sysctl -w fs.file-max=500000
You can check the current value for opened files with the following command:

$ cat /proc/sys/fs/file-max
With the above command, the changes you have made will only remain active until the next reboot. If you wish to apply them permanently, you will have to edit the file /etc/sysctl.conf and add the line:

fs.file-max=500000
Of course, you can change the number to match your needs. To verify the changes, again use:

# cat /proc/sys/fs/file-max
Users will need to log out and log back in for the changes to take effect. If you want to apply the limit immediately, you can use the following command:

# sysctl -p
Set User Level Open File limits in Linux
The above examples showed how to set global limits, but you may want to apply limits on a per-user basis. For that purpose, as user root, you will need to edit the file /etc/security/limits.conf.
If you are a Linux administrator, I suggest you become very familiar with that file and what you can do with it. Read all of the comments in it, as it provides great flexibility in managing system resources by limiting users or groups at different levels.
The lines that you should add take the following parameters:

<domain> <type> <item> <value>
Here is an example of setting a soft and a hard limit for user marin:
## Example hard limit for max opened files
marin hard nofile 4096
## Example soft limit for max opened files
marin soft nofile 1024
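After saving the file, you can verify the new limits by switching to that user, as shown earlier:

# su marin
$ ulimit -Sn
1024
$ ulimit -Hn
4096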
Final thoughts
This brief article showed you a basic example of how to check and configure global and user-level limits for the maximum number of open files.
While we just scratched the surface, I highly encourage you to take a more detailed look at /etc/sysctl.conf and /etc/security/limits.conf and learn how to use them. They will be of great help to you one day.
Counting open files per process
I'm working on an application that monitors processes' resources and gives a periodic report in Linux, but I faced a problem extracting the open file count per process. This takes quite a while if I list all of the open files, group them by PID, and count them. How can I get the open file count for each process in Linux?
5 Answers
Have a look at the /proc/ file system:

ls /proc/$pid/fd/ | wc -l
To do this for all processes, use this:
cd /proc
for pid in [0-9]*
do
    echo "PID = $pid with $(ls /proc/$pid/fd/ | wc -l) file descriptors"
done
As a one-liner (filter by appending | grep -v "0 FDs"):
for pid in /proc/[0-9]*; do printf "PID %6d has %4d FDs\n" $(basename $pid) $(ls $pid/fd | wc -l); done
As a one-liner including the command name, sorted by file descriptor count in descending order (limit the results by appending | head -10 ):
for pid in /proc/[0-9]*; do p=$(basename $pid); printf "%4d FDs for PID %6d; command=%s\n" $(ls $pid/fd | wc -l) $p "$(ps -p $p -o comm=)"; done | sort -nr
Credit to @Boban for this addendum:
You can pipe the output of the script above into the following script to see the ten processes (and their names) which have the most file descriptors open:
...
done | sort -rn -k5 | head | while read -r _ _ pid _ fdcount _
do
    command=$(ps -o cmd -p "$pid" -hc)
    printf "pid = %5d with %4d fds: %s\n" "$pid" "$fdcount" "$command"
done
Here’s another approach to list the top-ten processes with the most open fds, probably less readable, so I don’t put it in front:
find /proc -maxdepth 1 -type d -name '[0-9]*' \
    -exec bash -c "ls {}/fd/ | wc -l | tr '\n' ' '" \; \
    -printf "fds (PID = %P), command: " \
    -exec bash -c "tr '\0' ' ' < {}/cmdline" \; \
    -exec echo \; | sort -rn | head
Of course, you will need to have root permissions to do that for many of the processes. Their file descriptors are kind of private, you know 😉
/proc/$pid/fd lists descriptor files, which is slightly different from "open files", since a process can also have memory maps and other unusual file objects.
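As a quick illustration of that difference (a sketch, with $pid as a placeholder), compare the fd count with the number of memory-mapped entries lsof reports, which carry mem in the FD column:

ls /proc/$pid/fd | wc -l                             # file descriptors only
lsof -p "$pid" | awk 'NR > 1 && $4 == "mem"' | wc -l # memory-mapped files, no fd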
This extends the answer and turns pids into command names:

for pid in [0-9]*; do echo "PID = $pid with $(ls /proc/$pid/fd/ 2>/dev/null | wc -l) file descriptors"; done | sort -rn -k5 | head | while read -r line; do pid=$(echo $line | awk '…
Yeah, well. Instead of parsing the original output and then calling ps again for each process to find out its command, it might make more sense to use /proc/$pid/cmdline in the first loop. While technically it is still possible for a process to disappear between the evaluation of [0-9]* and the scanning of its directory, this is less likely.
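A sketch of that suggestion, reading the command line from /proc in the same loop instead of calling ps once per process:

for pid in /proc/[0-9]*; do
    p=${pid##*/}
    printf "%4d FDs for PID %6d; command=%s\n" \
        "$(ls "$pid/fd" 2>/dev/null | wc -l)" "$p" "$(tr '\0' ' ' < "$pid/cmdline" 2>/dev/null)"
done | sort -rn | head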
Executing command=$(ps -o cmd -p "$pid" -hc) gave me Warning: bad syntax, perhaps a bogus '-'. It worked running as command=$(ps -o cmd -p "$pid" hc).
ps aux | sed 1d | awk '{print "fd_count=$(lsof -p " $2 " | wc -l) && echo " $2 " $fd_count"}' | xargs -I {} bash -c '{}'
For Fedora, it gives:

lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
lsof: no pwd entry for UID 65535
I used this to find the top file-handle-consuming processes for a given user (username) where I don't have lsof or root access:
for pid in `ps -o pid -u username` ; do echo "$(ls /proc/$pid/fd/ 2>/dev/null | wc -l ) for PID: $pid" ; done | sort -n | tail
How can I take the open files count for each process in Linux?
Procpath can collect this for every process if you run it as root (e.g. prefixing the command with sudo -E env PATH=$PATH); otherwise it will only return file descriptor counts for processes whose /proc/<pid>/fd you may list. It gives you a big JSON document/tree whose nodes contain, among other things, an fd dictionary.
The content of the fd dictionary is the count per file descriptor type; see the procfile.Fd description or man fstat for more details on the types.
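For comparison, a rough shell approximation of this per-type breakdown, classifying each fd by its symlink target (a sketch; the labels are mine, not Procpath's):

for fd in /proc/$pid/fd/*; do
    case $(readlink "$fd") in
        socket:*)     echo sock ;;   # sockets
        pipe:*)       echo fifo ;;   # pipes
        anon_inode:*) echo anon ;;   # eventfds, epoll instances, etc.
        /*)           echo path ;;   # regular files, dirs, devices
        *)            echo other ;;
    esac
done | sort | uniq -c | sort -rn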
I'm the author of Procpath, a tool that provides a nicer interface to procfs for process analysis. You can record a process tree's procfs stats (in a SQLite database) and plot any of them later. For instance, this is how my Firefox process tree (root PID 2468) looks with regard to open file descriptor count (the sum of all types):
procpath --logging-level ERROR record -f stat,fd -i 1 -d ff_fd.sqlite \
    '$..children[?(@.stat.pid == 2468)]'
# Ctrl+C
procpath plot -q fd -d ff_fd.sqlite -f ff_df.svg
If I’m interested in only a particular type of open file descriptors (say, sockets) I can plot it like this:
procpath plot --custom-value-expr fd_sock -d ff_fd.sqlite -f ff_df.svg