Number of open files on Linux

What is the number of open files for a user on Linux, and system-wide? [closed]


  1. Find the number of open files currently being used, system wide and per user
  2. Find what the limits on open files are, for the system and per user.

I ran ulimit -n and it returned 1024. I also looked at /etc/limits.conf and there isn't anything special in that file. /etc/sysctl.conf is also not modified. I'll list the contents of the files below. I also ran lsof | wc -l, which returned 5000+ lines (if I'm using it correctly).

  1. How do I find the number of open files allowed per user? Is the soft limit the nofile setting found/defined in /etc/limits.conf? What is the default since I didn’t touch /etc/limits.conf?
  2. How do I find the number of open files allowed system-wide? Is it the hard limit in limits.conf? What’s the default number if limits.conf isn’t modified?
  3. What is the number that ulimit returns for open files? It says 1024, but when I run lsof and count the lines it's over 5000, so something is not clicking with me. Are there other cmds I should run or files to look at to get these limits? Thanks in advance for your help.
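For reference, all of these numbers can be read directly from the shell and from procfs. A minimal sketch (standard Linux interfaces; the exact values differ per system):

ulimit -Sn                    # soft per-process limit on open fds for this shell (the 1024 above)
ulimit -Hn                    # hard per-process limit (the ceiling the soft limit can be raised to)
cat /proc/sys/fs/file-max     # system-wide maximum number of open file handles
cat /proc/sys/fs/file-nr      # allocated handles, allocated-but-unused handles, and the maximum
lsof -u "$USER" | wc -l       # rough per-user count; lsof also lists non-fd entries, so it overcounts fds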
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain>        <type>  <item>  <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#                 for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit (KB)
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to values: [-20, 19]
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#
#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4
# End of file
# Controls IP packet forwarding
net.ipv4.ip_forward = 0
# Controls source route verification
net.ipv4.conf.default.rp_filter = 1
# Do not accept source routing
net.ipv4.conf.default.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename
# Useful for debugging multi-threaded applications
kernel.core_uses_pid = 1
# Controls the use of TCP syncookies
net.ipv4.tcp_syncookies = 1
# Controls the maximum size of a message, in bytes
kernel.msgmnb = 65536
# Controls the default maxmimum size of a mesage queue
kernel.msgmax = 65536
# Controls the maximum shared segment size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
# the interval between the last data packet sent and the first keepalive probe
net.ipv4.tcp_keepalive_time = 600
# the interval between subsequential keepalive probes
net.ipv4.tcp_keepalive_intvl = 60
# the number of unacknowledged probes to send before considering the connection dead and notifying the application layer
net.ipv4.tcp_keepalive_probes = 10
# try as hard as possible not to swap, as safely as possible
vm.swappiness = 1
fs.aio-max-nr = 1048576
#fs.file-max = 4096


How to find the number of open files per process

We have a Kafka service (run as a systemd service), and in that service we configured the number of open files, for example:

[Service]
LimitMEMLOCK=infinity
LimitNOFILE=1500000
Type=forking
User=root
Group=kafka

Now, when the service is up, we want to understand how many files the Kafka service is consuming. From googling, I understand from https://www.cyberciti.biz/faq/howto-linux-get-list-of-open-files/ that we can use the fstat command to capture the number of open files.

Since we are using a secured production RHEL 7.6 server, it's not clear whether fstat can be installed on it, so we would appreciate other approaches. One suggested approach is ls "/proc/$pid/fd", and here is a real example from my machine:

ls /proc/176909/fd | more
0  1  10  100  1000  10000  10001  10002  10003  10004  10005  10006  10007  10008  10009  1001  10010  10011  10012
...

The link you gave mentions fstat for FreeBSD and ls "/proc/$pid/fd" for Linux. RHEL is a Linux distribution, not FreeBSD.

Run ls -l on that directory and you will see that those are symlinks to various files, devices, sockets, etc. You could do ls -1 /proc/176909/fd | wc -l: ls -1 /proc/176909/fd puts every listed item on a separate line in one column, and wc -l counts the number of lines, so this gives you the total number of items in that directory. I think you should just keep reading the link you posted, as it says all of this.

2 Answers

As mentioned in the comments, you can use the wc command:

-l counts the number of lines (here, the output of the ls command).
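Put together, a minimal sketch (the unit name kafka is an assumption; 176909 is the PID from the question):

pid=$(systemctl show kafka --property=MainPID | cut -d= -f2)   # e.g. 176909; "kafka" is the assumed unit name
ls -1 "/proc/$pid/fd" | wc -l                                  # one line per open fd, counted by wc -l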

The LimitNOFILE directive in systemd (see man systemd.exec) corresponds to the RLIMIT_NOFILE resource limit as set with setrlimit() (see man setrlimit). It can be set in some shells with ulimit -n or limit descriptors.

This specifies a value one greater than the maximum file descriptor number that can be opened by this process. Attempts (open(2), pipe(2), dup(2), etc.) to exceed this limit yield the error EMFILE. (Historically, this limit was named RLIMIT_OFILE on BSD.)

So it's not, strictly speaking, a limit on the number of open file descriptors (let alone open files): a process with that limit could have more open files if it already had fds above the limit before the limit was set (or inherited upon creation via clone()/fork()), and it could be unable to obtain a fd above the limit even if it has very few fds open.
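To check which RLIMIT_NOFILE a running process actually got (rather than your own shell's ulimit), procfs exposes it per process; a small sketch, assuming $pid holds the target PID:

grep 'Max open files' "/proc/$pid/limits"   # soft and hard RLIMIT_NOFILE of that process
prlimit --nofile --pid "$pid"               # util-linux alternative, if prlimit is available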


On Linux, /proc/<pid>/fd is a special directory that contains one magic symlink file for each fd that the process has opened.

You can get their number by counting them:
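One way is a glob plus an anonymous function that just reports how many arguments it was given (a sketch; the N qualifier keeps the glob from erroring when nothing matches):

() { echo $#; } /proc/$pid/fd/*(N)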

in zsh for instance (or ls "/proc/$pid/fd" | wc -l as already shown by Romeo).

You can get the highest fd value by sorting them numerically in reverse and taking the first.
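For example (a portable sketch using sort):

ls "/proc/$pid/fd" | sort -rn | head -n1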

Or with GNU ls: ls -rv "/proc/$pid/fd" | head -n1

To get a report of the number of open fds for all processes, you could do something like:

(for p (/proc/<->) () { print -r -- $# ${p:t}; } $p/fd/*(NoN)) | sort -n

More portably, you could resort to lsof :

lsof -ad0-2147483647 -Ff -p "$pid" | grep -c '^f' 

For the number of open file descriptors, and:

lsof -ad0-2147483647 -Ff -p "$pid" | sed -n '$s/^f//p' 

For the highest open fd number (lsof lists the selected fds in ascending order, so the last f field is the largest).


Counting open files per process

I’m working on an application that monitors the processes’ resources and gives a periodic report in Linux, but I faced a problem in extracting the open files count per process. This takes quite a while if I take all of the files and group them according to their PID and count them. How can I take the open files count for each process in Linux?

5 Answers

Have a look at the /proc/ file system:
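The fd subdirectory of a process gives the count directly; a minimal sketch (substitute the PID you care about for $pid):

ls "/proc/$pid/fd/" | wc -l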

To do this for all processes, use this:

cd /proc
for pid in [0-9]*
do
    echo "PID = $pid with $(ls /proc/$pid/fd/ | wc -l) file descriptors"
done

As a one-liner (filter by appending | grep -v "0 FDs"):

for pid in /proc/[0-9]*; do printf "PID %6d has %4d FDs\n" $(basename $pid) $(ls $pid/fd | wc -l); done

As a one-liner including the command name, sorted by file descriptor count in descending order (limit the results by appending | head -10 ):

for pid in /proc/[0-9]*; do p=$(basename $pid); printf "%4d FDs for PID %6d; command=%s\n" $(ls $pid/fd | wc -l) $p "$(ps -p $p -o comm=)"; done | sort -nr

Credit to @Boban for this addendum:

You can pipe the output of the script above into the following script to see the ten processes (and their names) which have the most file descriptors open:

...
done | sort -rn -k5 | head | while read -r _ _ pid _ fdcount _
do
    command=$(ps -o cmd -p "$pid" -hc)
    printf "pid = %5d with %4d fds: %s\n" "$pid" "$fdcount" "$command"
done

Here’s another approach to list the top-ten processes with the most open fds, probably less readable, so I don’t put it in front:

find /proc -maxdepth 1 -type d -name '[0-9]*' \
     -exec bash -c "ls {}/fd/ | wc -l | tr '\n' ' '" \; \
     -printf "fds (PID = %P), command: " \
     -exec bash -c "tr '\0' ' ' < {}/cmdline" \; \
     -exec echo \; | sort -rn | head

Of course, you will need to have root permissions to do that for many of the processes. Their file descriptors are kind of private, you know 😉


/proc/$pid/fd lists descriptor files, which is slightly different from "open files", since a process can also have memory maps and other unusual file objects.
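You can see the difference with lsof, which also reports non-fd entries such as the current directory, the program text, and memory-mapped files; a sketch (assuming $pid is a PID you are allowed to inspect):

lsof -p "$pid" | awk 'NR > 1 && $4 !~ /^[0-9]/ {print $4}' | sort | uniq -c   # cwd, rtd, txt, mem, ... entries that are not fds
ls "/proc/$pid/fd" | wc -l                                                    # the actual fd count, usually smaller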

This extends the answer and turns pids into command names: for pid in [0-9]*; do echo "PID = $pid with $(ls /proc/$pid/fd/ 2>/dev/null | wc -l) file descriptors"; done | sort -rn -k5 | head | while read -r line; do pid=$(echo $line | awk '{print $3}'); command=$(ps -o cmd -p $pid -hc); echo $line | sed -s "s/PID = \(.*\) with \(.*\)/Command $command (PID = \1) with \2/g"; done

Yeah, well. Instead of parsing the original output and then calling ps again for each process to find out its command, it might make more sense to use /proc/$pid/cmdline in the first loop. While technically it is still possible for a process to disappear between the evaluation of [0-9]* and the scanning of its fd directory, this is less likely.
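What that suggestion would look like, roughly (a sketch, not the original poster's code):

for pid in /proc/[0-9]*; do
    p=${pid##*/}
    cmd=$(tr '\0' ' ' < "$pid/cmdline" 2>/dev/null)            # command line straight from procfs, no extra ps call
    printf "%4d FDs for PID %6d; command=%s\n" \
        "$(ls "$pid/fd" 2>/dev/null | wc -l)" "$p" "${cmd:-[kernel thread]}"
done | sort -rn | head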

Executing command=$(ps -o cmd -p "$pid" -hc) gave me Warning: bad syntax, perhaps a bogus '-'. It worked running as command=$(ps -o cmd -p "$pid" hc).

ps aux | sed 1d | awk '{print "fd_count=$(lsof -p " $2 " | wc -l) && echo " $2 " $fd_count"}' | xargs -I {} bash -c {}

For Fedora, it gives: lsof: WARNING: can’t stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs Output information may be incomplete. lsof: no pwd entry for UID 65535

I used this to find the top file-handle-consuming processes for a given user (username) where I don't have lsof or root access:

for pid in `ps -o pid -u username` ; do echo "$(ls /proc/$pid/fd/ 2>/dev/null | wc -l ) for PID: $pid" ; done | sort -n | tail 

How can I take the open files count for each process in Linux?

Running it from root (e.g. prefixing the command with sudo -E env PATH=$PATH) covers every process; otherwise it'll only return file descriptor counts for processes whose /proc/<pid>/fd you may list. This gives you a big JSON document/tree whose nodes contain, among other things, an fd dictionary.

The content of the fd dictionary is the count per file descriptor type (see the procfile.Fd description or man fstat for more details).

I'm the author of Procpath, a tool that provides a nicer interface to procfs for process analysis. You can record a process tree's procfs stats (in a SQLite database) and plot any of them later. For instance, this is how my Firefox process tree (root PID 2468) looks with regard to open file descriptor count (the sum of all types):

procpath --logging-level ERROR record -f stat,fd -i 1 -d ff_fd.sqlite \
    '$..children[?(@.stat.pid == 2468)]'
# Ctrl+C
procpath plot -q fd -d ff_fd.sqlite -f ff_df.svg

Firefox open file descriptors

If I’m interested in only a particular type of open file descriptors (say, sockets) I can plot it like this:

procpath plot --custom-value-expr fd_sock -d ff_fd.sqlite -f ff_df.svg 
