Linux file descriptor count

Counting open files per process

I’m working on an application that monitors processes’ resources and gives a periodic report on Linux, but I ran into a problem extracting the open-file count per process. Listing all open files and grouping them by PID to count them takes quite a while. How can I get the open file count for each process in Linux?

5 Answers

Have a look at the /proc/ file system:

To do this for all processes, use this:

cd /proc
for pid in [0-9]*
do
    echo "PID = $pid with $(ls /proc/$pid/fd/ | wc -l) file descriptors"
done

As a one-liner (filter by appending | grep -v "0 FDs" ):

for pid in /proc/[0-9]*; do printf "PID %6d has %4d FDs\n" $(basename $pid) $(ls $pid/fd | wc -l); done

As a one-liner including the command name, sorted by file descriptor count in descending order (limit the results by appending | head -10 ):

for pid in /proc/[0-9]*; do p=$(basename $pid); printf "%4d FDs for PID %6d; command=%s\n" $(ls $pid/fd | wc -l) $p "$(ps -p $p -o comm=)"; done | sort -nr

Credit to @Boban for this addendum:

You can pipe the output of the script above into the following script to see the ten processes (and their names) which have the most file descriptors open:

...
done | sort -rn -k5 | head | while read -r _ _ pid _ fdcount _
do
    command=$(ps -o cmd -p "$pid" -hc)
    printf "pid = %5d with %4d fds: %s\n" "$pid" "$fdcount" "$command"
done

Here’s another approach to list the top-ten processes with the most open fds, probably less readable, so I don’t put it in front:

find /proc -maxdepth 1 -type d -name '[0-9]*' \
     -exec bash -c "ls {}/fd/ | wc -l | tr '\n' ' '" \; \
     -printf "fds (PID = %P), command: " \
     -exec bash -c "tr '\0' ' ' < {}/cmdline" \; \
     -exec echo \; | sort -rn | head

Of course, you will need to have root permissions to do that for many of the processes. Their file descriptors are kind of private, you know 😉

/proc/$pid/fd lists descriptor files, which is slightly different from "open files", since a process can also have memory-mapped files and other unusual file objects.
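You can see the difference on any shell: each entry in /proc/&lt;pid&gt;/fd is a symlink, and only some targets are regular files on disk. A minimal sketch (it uses /proc/$$ rather than /proc/self, so the symlinks are resolved for the shell itself and not for the readlink child process):

```shell
# Show what each of this shell's file descriptors points to. Regular
# files resolve to a path; pipes, sockets and eventfds resolve to
# pipe:[...], socket:[...] or anon_inode:[...] pseudo-targets instead.
for fd in /proc/$$/fd/*; do
    printf '%s -> %s\n' "$fd" "$(readlink "$fd")"
done
```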

This extends the answer and turns PIDs into command names:

for pid in [0-9]*; do echo "PID = $pid with $(ls /proc/$pid/fd/ 2>/dev/null | wc -l) file descriptors"; done | sort -rn -k5 | head | while read -r line; do pid=$(echo $line | awk '{print $3}'); command=$(ps -o cmd -p $pid -hc); echo $line | sed -s "s/PID = \(.*\) with \(.*\)/Command $command (PID = \1) with \2/g"; done

Yeah, well. Instead of parsing the original output and then calling ps again for each process to find out its command, it might make more sense to use /proc/$pid/cmdline in the first loop. While technically it is still possible for a process to disappear between the expansion of [0-9]* and the scanning of its fd directory, this is less likely.
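That suggestion might look roughly like this (a sketch, not the original answer’s code; it assumes Linux procfs and guards against processes exiting mid-scan):

```shell
# Count fds per process and take the command name from /proc/<pid>/cmdline
# in the same pass, instead of calling ps once per process.
for dir in /proc/[0-9]*; do
    pid=${dir#/proc/}
    [ -d "$dir/fd" ] || continue            # process may have exited already
    count=$(ls "$dir/fd" 2>/dev/null | wc -l)
    cmd=$(tr '\0' ' ' < "$dir/cmdline" 2>/dev/null)  # cmdline is NUL-separated
    printf '%4d FDs for PID %6s; command=%s\n' "$count" "$pid" "$cmd"
done | sort -rn | head
```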


Executing command=$(ps -o cmd -p "$pid" -hc) gave me Warning: bad syntax, perhaps a bogus '-' . It worked running as command=$(ps -o cmd -p "$pid" hc) .

ps aux | sed 1d | awk '{print "fd_count=$(lsof -p " $2 " | wc -l) && echo " $2 " $fd_count"}' | xargs -I {} bash -c {}

For Fedora, it gives:

lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
lsof: no pwd entry for UID 65535

I used this to find the top file-handle-consuming processes for a given user (username) where I don't have lsof or root access:

for pid in `ps -o pid -u username`; do echo "$(ls /proc/$pid/fd/ 2>/dev/null | wc -l) for PID: $pid"; done | sort -n | tail

How can I take the open files count for each process in Linux?

This works if you’re running it from root (e.g. prefixing the command with sudo -E env PATH=$PATH ); otherwise it’ll only return file descriptor counts for processes whose /proc/<pid>/fd you may list. The result is a big JSON document/tree with one node per process.

The content of the fd dictionary is a count per file descriptor type; see the procfile.Fd description or man fstat for more details.

I’m the author of Procpath, a tool that provides a nicer interface to procfs for process analysis. You can record a process tree’s procfs stats (in a SQLite database) and plot any of them later. For instance, this is what my Firefox process tree (root PID 2468) looks like with regard to open file descriptor count (the sum of all types):

procpath --logging-level ERROR record -f stat,fd -i 1 -d ff_fd.sqlite \
    '$..children[?(@.stat.pid == 2468)]'
# Ctrl+C
procpath plot -q fd -d ff_fd.sqlite -f ff_df.svg

Firefox open file descriptors

If I’m interested in only a particular type of open file descriptor (say, sockets), I can plot it like this:

procpath plot --custom-value-expr fd_sock -d ff_fd.sqlite -f ff_df.svg 


How to display open file descriptors but not using lsof command

While this command displays the number of FDs, how do you display the list of open file descriptors that it just counted?

You probably want to know whether your ulimit is exceeded, right? I blogged about this at linuxintro.org/wiki/Is_my_ulimit_exceeded. Most importantly, the ulimit is a per-process restriction that you can find under /proc/PID/limits, and instead of lsof I would use ls /proc/PID/fd to list a process's file descriptors.
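Putting the two together, a small sketch that compares a process's fd count with its own soft limit (it assumes the standard /proc/&lt;pid&gt;/limits layout, where the fourth field of the "Max open files" row is the soft limit):

```shell
# Compare this shell's open-fd count against its "Max open files" soft limit.
pid=$$
soft_limit=$(awk '/^Max open files/ {print $4}' "/proc/$pid/limits")
count=$(ls "/proc/$pid/fd" | wc -l)
echo "PID $pid uses $count of $soft_limit allowed file descriptors"
```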

2 Answers

There are two reasons lsof | wc -l doesn’t count file descriptors. One is that it lists things that aren’t open files, such as loaded dynamically linked libraries and current working directories; you need to filter them out. Another is that lsof takes some time to run, so can miss files that are opened or closed while it’s running; therefore the number of listed open files is approximate. Looking at /proc/sys/fs/file-nr gives you an exact value at a particular point in time.
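For reference, file-nr holds three numbers: allocated file handles, free allocated handles, and the system-wide maximum, so the number of handles actually in use can be derived like this (a sketch):

```shell
# The three whitespace-separated fields of /proc/sys/fs/file-nr are:
# allocated handles, free allocated handles, and the system-wide maximum.
read -r allocated free max < /proc/sys/fs/file-nr
echo "file handles in use: $((allocated - free)) (system-wide maximum: $max)"
```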


cat /proc/sys/fs/file-nr is only useful when you need the exact figure, mainly to check for resource exhaustion. If you want to list the open files, you need to call lsof , or use some equivalent method such as trawling /proc/*/fd manually.

Hi, thanks for giving a good explanation, Gilles. I tried ls /proc/*/fd and got all the open fds at that time. It's producing an output with some color coding; I'll just have to look at the manual.

@dimas /proc/*/fd directories contain symbolic links to the open files. For visual inspection, use ls -l . For automated treatment, use readlink to extract the link target.
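A sketch of the automated route, keeping only targets that are real paths on disk (pipes, sockets and anonymous inodes resolve to pseudo-names and are skipped):

```shell
# Print only the on-disk files this shell has open, by resolving each fd
# symlink and keeping absolute-path targets.
pid=$$
for fd in /proc/"$pid"/fd/*; do
    target=$(readlink "$fd")
    case $target in
        /*) echo "${fd##*/} -> $target" ;;  # fd number -> file path
    esac
done
```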

I'll just use ls -l, but I'll experiment with readlink. I tried /proc/PID/maps and other options as specified at kernel.org/doc/man-pages/online/pages/man5/proc.5.html. Thanks again for the additional info.

/proc/sys/fs/file-nr gives me 3872 (and two other numbers). How can this be the count of files I have open if ulimit -n shows me 1024?

@ThorstenStaerk All settings of setrlimit (the system call underlying the ulimit shell command) are per-process. They affect only the process that makes the call (and indirectly the processes that it later forks).
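The per-process nature is easy to demonstrate in a shell (a sketch; 64 is an arbitrary value below the usual default soft limit):

```shell
# Lowering the soft fd limit inside a subshell affects only that subshell;
# the parent keeps its original limit.
( ulimit -S -n 64; echo "subshell soft limit: $(ulimit -S -n)" )
echo "parent soft limit: $(ulimit -S -n)"
```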

Process information is kept dynamically by the system in directories under /proc. For example the process with PID 1234 will have a directory called /proc/1234.

There is quite a bit of information in there, but right now you are interested in the /proc/1234/fd subdirectory.

NOTE: You need to have root permissions to view or open files for processes that you do not own, as well as for SetUID processes.

root@johan-HP-ProBook-6560b-LG654EA-ACQ:/proc# ls -l 2443/fd
total 0
lr-x------ 1 johan johan 64 Feb 27 10:26 0 -> pipe:[13637]
l-wx------ 1 johan johan 64 Feb 27 10:26 1 -> /home/johan/.xsession-errors
lrwx------ 1 johan johan 64 Feb 27 10:26 10 -> anon_inode:[eventfd]
lrwx------ 1 johan johan 64 Feb 27 10:26 11 -> anon_inode:[eventfd]
lrwx------ 1 johan johan 64 Feb 27 10:26 12 -> socket:[39495]
lrwx------ 1 johan johan 64 Feb 27 10:26 13 -> anon_inode:[eventfd]
lr-x------ 1 johan johan 64 Feb 27 10:26 14 -> anon_inode:inotify
lrwx------ 1 johan johan 64 Feb 27 10:26 15 -> anon_inode:[eventfd]
l-wx------ 1 johan johan 64 Feb 27 10:26 16 -> pipe:[37885]
lr-x------ 1 johan johan 64 Feb 27 10:26 17 -> pipe:[37886]
l-wx------ 1 johan johan 64 Feb 27 10:26 2 -> /home/johan/.xsession-errors
l-wx------ 1 johan johan 64 Feb 27 10:26 21 -> pipe:[167984]
lr-x------ 1 johan johan 64 Feb 27 10:26 22 -> pipe:[167985]
l-wx------ 1 johan johan 64 Feb 27 10:26 23 -> pipe:[170009]
lr-x------ 1 johan johan 64 Feb 27 10:26 24 -> pipe:[170010]
lrwx------ 1 johan johan 64 Feb 27 10:26 3 -> anon_inode:[eventfd]
lr-x------ 1 johan johan 64 Feb 27 10:26 4 -> pipe:[14726]
lrwx------ 1 johan johan 64 Feb 27 10:26 5 -> socket:[14721]
l-wx------ 1 johan johan 64 Feb 27 10:26 6 -> pipe:[14726]
lrwx------ 1 johan johan 64 Feb 27 10:26 7 -> socket:[14730]
lrwx------ 1 johan johan 64 Feb 27 10:26 8 -> socket:[13984]
lrwx------ 1 johan johan 64 Feb 27 10:26 9 -> socket:[14767]
root@johan-HP:/proc# cat 2443/fdinfo/2
pos:    1244446
flags:  0102001

Also have a look at the rest of the files under /proc; a lot of useful information about the system resides there.



How to get number of opened file descriptors for user

I know about lsof and ls /proc/*/fd, but neither of them is atomic, AFAIK. In the latter case I would need to get all PIDs for the user and then filter by them, and by that time some of the file descriptors could be closed. Maybe there is some system call for that, because obviously the OS tracks that number, as it would refuse to create an FD if the max limit for the user were exhausted.

Are you sure that it would refuse to create an FD if the max limit for the user is exhausted? The setrlimit/getrlimit system calls work on a per-process basis. diskquota works on a per-filesystem basis. AFAIK, there is no API that works on a per-user basis.

Well, there is a limit for every user. If it didn't refuse to create a file when that limit is exhausted, then the limit would be useless, right?

Either lsof (with the non-atomic constraint), or read the kernel sources and build your own utility. A good question, anyway.

@user1685095 There is a limit for each process of a given user, not for each user. If a user has a hard RLIMIT_NOFILE of 100, she can have two processes with 99 open files each (198 in total).

1 Answer

I haven’t made an intensive search, but I don’t think what you’re looking for exists on Linux. Opening a file descriptor doesn’t take any global lock, only a per-process lock, so on a multicore machine whatever you’d use to count the number of open file descriptors could be running literally at the same time that other threads are opening or closing files on other cores.

Linux doesn’t have a global limit on the total number of open files. There’s no explicit per-user limit either. There’s a per-user limit on processes, and a per-process limit on file descriptor numbers, which indirectly imposes a limit on open files per user, but that isn’t explicitly tracked.
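One can still derive a crude upper bound from the two per-process limits (a sketch; the bound is rarely tight in practice):

```shell
# An indirect per-user ceiling: at most (max user processes) many
# processes, each with at most (max fds per process) descriptors.
nproc_limit=$(ulimit -u)
nofile_limit=$(ulimit -n)
if [ "$nproc_limit" != unlimited ] && [ "$nofile_limit" != unlimited ]; then
    echo "indirect upper bound on this user's open fds: $((nproc_limit * nofile_limit))"
else
    echo "one of the limits is unlimited, so there is no finite bound"
fi
```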

Exploring /proc (which is what lsof does under the hood) is as good as it gets. /proc is the Linux API to get information about processes.
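A non-atomic sketch of such a scan, summing fd counts over the processes owned by the current user (it reads the real UID from /proc/&lt;pid&gt;/status rather than calling ps, and processes may come and go while it runs):

```shell
# Sum open-fd counts over all processes whose real UID matches ours.
uid=$(id -u)
total=0
for dir in /proc/[0-9]*; do
    puid=$(awk '/^Uid:/ {print $2}' "$dir/status" 2>/dev/null)
    [ "$puid" = "$uid" ] || continue
    n=$(ls "$dir/fd" 2>/dev/null | wc -l)   # 0 if we may not list it
    total=$((total + n))
done
echo "UID $uid has about $total open file descriptors"
```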

gerardnico.com/wiki/linux/limits.conf — you can see there that there is a limit on open file descriptors, which is exactly what I'm talking about.

@user1685095 As several of us have told you already, this is a per-process limit, not a per-user one. limits.conf defines different values for each user, but what that means is that each process run by the user has this limit, not that the limit applies to the total number of files open by all processes run by the user.

