/proc/[pid]/stat on Linux

What is the meaning of the values in /proc/[pid]/stat?

I was trying to develop an app that gets CPU usage per app and kills apps that consume too much CPU, but I couldn't figure out how to do this. I have read this post and have seen this answer, so I looked at /proc/[pid]/stat. There are a lot of numeric values, but I couldn't understand which value represents what. Can anyone explain the meaning of the values in /proc/[pid]/stat?

Welcome to Stack Overflow. You can improve your question. Please read How to Ask including the link «How to ask questions the smart way.»

If present, /proc/[pid]/status might be easier to read, it seems (see man7.org/linux/man-pages/man5/proc.5.html).

4 Answers

 /proc/[pid]/stat
    Status information about the process. This is used by ps(1). It is defined in the kernel source file fs/proc/array.c.

    The fields, in order, with their proper scanf(3) format specifiers, are listed below. Whether or not certain of these fields display valid information is governed by a ptrace access mode PTRACE_MODE_READ_FSCREDS | PTRACE_MODE_NOAUDIT check (refer to ptrace(2)). If the check denies access, then the field value is displayed as 0. The affected fields are indicated with the marking [PT].

    (1) pid  %d
        The process ID.

    (2) comm  %s
        The filename of the executable, in parentheses. This is visible whether or not the executable is swapped out.

    (3) state  %c
        One of the following characters, indicating process state:
        R  Running
        S  Sleeping in an interruptible wait
        D  Waiting in uninterruptible disk sleep
        Z  Zombie
        T  Stopped (on a signal) or (before Linux 2.6.33) trace stopped
        t  Tracing stop (Linux 2.6.33 onward)
        W  Paging (only before Linux 2.6.0)
        X  Dead (from Linux 2.6.0 onward)
        x  Dead (Linux 2.6.33 to 3.13 only)
        K  Wakekill (Linux 2.6.33 to 3.13 only)
        W  Waking (Linux 2.6.33 to 3.13 only)
        P  Parked (Linux 3.9 to 3.13 only)

    (4) ppid  %d
        The PID of the parent of this process.

    (5) pgrp  %d
        The process group ID of the process.

    (6) session  %d
        The session ID of the process.

    (7) tty_nr  %d
        The controlling terminal of the process. (The minor device number is contained in the combination of bits 31 to 20 and 7 to 0; the major device number is in bits 15 to 8.)

    (8) tpgid  %d
        The ID of the foreground process group of the controlling terminal of the process.

    (9) flags  %u
        The kernel flags word of the process. For bit meanings, see the PF_* defines in the Linux kernel source file include/linux/sched.h. Details depend on the kernel version. The format for this field was %lu before Linux 2.6.

    (10) minflt  %lu
        The number of minor faults the process has made which have not required loading a memory page from disk.

    (11) cminflt  %lu
        The number of minor faults that the process's waited-for children have made.

    (12) majflt  %lu
        The number of major faults the process has made which have required loading a memory page from disk.

    (13) cmajflt  %lu
        The number of major faults that the process's waited-for children have made.

    (14) utime  %lu
        Amount of time that this process has been scheduled in user mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). This includes guest time, guest_time (time spent running a virtual CPU, see below), so that applications that are not aware of the guest time field do not lose that time from their calculations.

    (15) stime  %lu
        Amount of time that this process has been scheduled in kernel mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)).

    (16) cutime  %ld
        Amount of time that this process's waited-for children have been scheduled in user mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)). (See also times(2).) This includes guest time, cguest_time (time spent running a virtual CPU, see below).

    (17) cstime  %ld
        Amount of time that this process's waited-for children have been scheduled in kernel mode, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)).

    (18) priority  %ld
        (Explanation for Linux 2.6) For processes running a real-time scheduling policy (policy below; see sched_setscheduler(2)), this is the negated scheduling priority, minus one; that is, a number in the range -2 to -100, corresponding to real-time priorities 1 to 99. For processes running under a non-real-time scheduling policy, this is the raw nice value (setpriority(2)) as represented in the kernel. The kernel stores nice values as numbers in the range 0 (high) to 39 (low), corresponding to the user-visible nice range of -20 to 19. Before Linux 2.6, this was a scaled value based on the scheduler weighting given to this process.

    (19) nice  %ld
        The nice value (see setpriority(2)), a value in the range 19 (low priority) to -20 (high priority).

    (20) num_threads  %ld
        Number of threads in this process (since Linux 2.6). Before kernel 2.6, this field was hard coded to 0 as a placeholder for an earlier removed field.

    (21) itrealvalue  %ld
        The time in jiffies before the next SIGALRM is sent to the process due to an interval timer. Since kernel 2.6.17, this field is no longer maintained, and is hard coded as 0.

    (22) starttime  %llu
        The time the process started after system boot. In kernels before Linux 2.6, this value was expressed in jiffies. Since Linux 2.6, the value is expressed in clock ticks (divide by sysconf(_SC_CLK_TCK)). The format for this field was %lu before Linux 2.6.

    (23) vsize  %lu
        Virtual memory size in bytes.

    (24) rss  %ld
        Resident Set Size: number of pages the process has in real memory. This is just the pages which count toward text, data, or stack space. This does not include pages which have not been demand-loaded in, or which are swapped out.

    (25) rsslim  %lu
        Current soft limit in bytes on the rss of the process; see the description of RLIMIT_RSS in getrlimit(2).

    (26) startcode  %lu  [PT]
        The address above which program text can run.

    (27) endcode  %lu  [PT]
        The address below which program text can run.

    (28) startstack  %lu  [PT]
        The address of the start (i.e., bottom) of the stack.

    (29) kstkesp  %lu  [PT]
        The current value of ESP (stack pointer), as found in the kernel stack page for the process.

    (30) kstkeip  %lu  [PT]
        The current EIP (instruction pointer).

    (31) signal  %lu
        The bitmap of pending signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead.

    (32) blocked  %lu
        The bitmap of blocked signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead.

    (33) sigignore  %lu
        The bitmap of ignored signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead.

    (34) sigcatch  %lu
        The bitmap of caught signals, displayed as a decimal number. Obsolete, because it does not provide information on real-time signals; use /proc/[pid]/status instead.

    (35) wchan  %lu  [PT]
        This is the "channel" in which the process is waiting. It is the address of a location in the kernel where the process is sleeping. The corresponding symbolic name can be found in /proc/[pid]/wchan.

    (36) nswap  %lu
        Number of pages swapped (not maintained).

    (37) cnswap  %lu
        Cumulative nswap for child processes (not maintained).

    (38) exit_signal  %d  (since Linux 2.1.22)
        Signal to be sent to parent when we die.

    (39) processor  %d  (since Linux 2.2.8)
        CPU number last executed on.

    (40) rt_priority  %u  (since Linux 2.5.19)
        Real-time scheduling priority, a number in the range 1 to 99 for processes scheduled under a real-time policy, or 0, for non-real-time processes (see sched_setscheduler(2)).

    (41) policy  %u  (since Linux 2.5.19)
        Scheduling policy (see sched_setscheduler(2)). Decode using the SCHED_* constants in linux/sched.h. The format for this field was %lu before Linux 2.6.22.

    (42) delayacct_blkio_ticks  %llu  (since Linux 2.6.18)
        Aggregated block I/O delays, measured in clock ticks (centiseconds).

    (43) guest_time  %lu  (since Linux 2.6.24)
        Guest time of the process (time spent running a virtual CPU for a guest operating system), measured in clock ticks (divide by sysconf(_SC_CLK_TCK)).

    (44) cguest_time  %ld  (since Linux 2.6.24)
        Guest time of the process's children, measured in clock ticks (divide by sysconf(_SC_CLK_TCK)).

    (45) start_data  %lu  (since Linux 3.3)  [PT]
        Address above which program initialized and uninitialized (BSS) data are placed.

    (46) end_data  %lu  (since Linux 3.3)  [PT]
        Address below which program initialized and uninitialized (BSS) data are placed.

    (47) start_brk  %lu  (since Linux 3.3)  [PT]
        Address above which program heap can be expanded with brk(2).

    (48) arg_start  %lu  (since Linux 3.5)  [PT]
        Address above which program command-line arguments (argv) are placed.

    (49) arg_end  %lu  (since Linux 3.5)  [PT]
        Address below which program command-line arguments (argv) are placed.

    (50) env_start  %lu  (since Linux 3.5)  [PT]
        Address above which program environment is placed.

    (51) env_end  %lu  (since Linux 3.5)  [PT]
        Address below which program environment is placed.

    (52) exit_code  %d  (since Linux 3.5)  [PT]
        The thread's exit status in the form reported by waitpid(2).
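Given the layout above, a minimal shell sketch of splitting the line: field 2 (comm) may itself contain spaces, so strip everything through the closing ')' before splitting the remaining fields on whitespace. The variable names here are arbitrary, chosen only for this illustration.

```shell
read -r stat_line < /proc/self/stat

tmp=${stat_line%\)*}       # drop everything after the last ')'
comm=${tmp#*\(}            # field 2 (comm), parentheses removed
rest=${stat_line##*\) }    # fields 3 onward

set -- $rest               # after this, stat field N is positional ${N-2}
state=$1                   # field 3
ppid=$2                    # field 4
utime=${12}                # field 14
stime=${13}                # field 15
echo "comm=$comm state=$state ppid=$ppid utime=$utime stime=$stime"
```

Splitting on the last ')' rather than the first is deliberate: a process can set its comm to a string containing ')'.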



How do I get the total CPU usage of an application from /proc/pid/stat?

So is the total time spent the sum of fields 14 to 17?

6 Answers

Preparation

To calculate CPU usage for a specific process you’ll need the following:

  1. /proc/uptime
    • #1 uptime of the system (seconds)
  2. /proc/[PID]/stat
    • #14 utime — CPU time spent in user code, measured in clock ticks
    • #15 stime — CPU time spent in kernel code, measured in clock ticks
    • #16 cutime — Waited-for children’s CPU time spent in user code (in clock ticks)
    • #17 cstime — Waited-for children’s CPU time spent in kernel code (in clock ticks)
    • #22 starttime — Time when the process started, measured in clock ticks
  3. Hertz (number of clock ticks per second) of your system.
    • In most cases, getconf CLK_TCK can be used to return the number of clock ticks.
    • The sysconf(_SC_CLK_TCK) C function call may also be used to return the hertz value.
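The three inputs above can be gathered in shell like this (a sketch; the variable names are mine, and the PID used is the shell's own, $$):

```shell
pid=$$
hertz=$(getconf CLK_TCK)                 # 3. clock ticks per second
uptime=$(awk '{print $1}' /proc/uptime)  # 1. system uptime in seconds
# 2. Fields 14-17 and 22 of /proc/[PID]/stat. Strip the "pid (comm) "
#    prefix first so a comm containing spaces cannot shift the field
#    numbers; after the strip, stat field N becomes awk field N-2.
set -- $(awk '{sub(/^[0-9]+ \(.*\) /, ""); print $12, $13, $14, $15, $20}' "/proc/$pid/stat")
utime=$1 stime=$2 cutime=$3 cstime=$4 starttime=$5
echo "hertz=$hertz uptime=$uptime utime=$utime stime=$stime cutime=$cutime cstime=$cstime starttime=$starttime"
```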

Calculation

First we determine the total CPU time spent by the process, as the sum of utime (field 14) and stime (field 15):

total_time = utime + stime 

We also have to decide whether we want to include the time from child processes. If we do, then we add those values to total_time :

total_time = total_time + cutime + cstime 

Next we get the total elapsed time in seconds since the process started:

seconds = uptime - (starttime / Hertz) 

Finally we calculate the CPU usage percentage:

cpu_usage = 100 * ((total_time / Hertz) / seconds) 
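The three steps above can be put together as one shell function (process_cpu_usage is a name made up for this sketch, not a standard tool):

```shell
process_cpu_usage() {
    pid=$1
    hertz=$(getconf CLK_TCK)
    up=$(awk '{print $1}' /proc/uptime)
    # Strip "pid (comm) " so stat field N becomes awk field N-2,
    # then pull fields 14-17 (the CPU times) and 22 (starttime).
    set -- $(awk '{sub(/^[0-9]+ \(.*\) /, ""); print $12, $13, $14, $15, $20}' "/proc/$pid/stat")
    total_time=$(( $1 + $2 + $3 + $4 ))   # utime + stime + cutime + cstime
    starttime=$5
    awk -v t="$total_time" -v hz="$hertz" -v up="$up" -v st="$starttime" \
        'BEGIN { secs = up - st / hz; if (secs > 0) printf "%.2f\n", 100 * (t / hz) / secs }'
}

process_cpu_usage $$   # prints the calling shell's average CPU usage in percent
```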

See also

Hi, this would give the average CPU usage since the application started. If a process consumed most of the CPU in the last 5 seconds but was idle for an hour before that, this code would still give the average value over its whole uptime, right?


Yes, this is the average CPU usage since the process started ( starttime ). So the hour the process spent idle is also factored into the calculation ( uptime - starttime ).

@T-D The uptime I use in the equation is the first parameter of /proc/uptime . I mentioned the second parameter of /proc/uptime to indicate how to calculate the total CPU usage of the system as a whole rather than a single process; since we are only concerned with the CPU usage of a single process, the second parameter is unused in the equations.
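As the comments above note, the formula gives a lifetime average. To measure recent usage instead, one can sample utime+stime twice and divide the delta by the sampling interval. A sketch (ticks_of is a helper name invented here):

```shell
# Sum of utime (field 14) and stime (field 15) for a PID, in clock ticks.
ticks_of() {
    awk '{sub(/^[0-9]+ \(.*\) /, ""); print $12 + $13}' "/proc/$1/stat"
}

hertz=$(getconf CLK_TCK)
t1=$(ticks_of $$)
sleep 1                   # sampling interval: 1 second
t2=$(ticks_of $$)
# Percentage of one CPU used during the interval.
awk -v d=$((t2 - t1)) -v hz="$hertz" -v iv=1 \
    'BEGIN { printf "%.1f%%\n", 100 * (d / hz) / iv }'
```

For a mostly idle shell this prints a value near 0; a busy-looping process would print close to 100%.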

Yes, you can say so. You can convert those values into seconds using the formula:

 sec = jiffies / HZ    # HZ = number of ticks per second 

The HZ value is configurable; it is set at kernel configuration time.
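One caveat worth adding: for the /proc/[pid]/stat fields, the relevant tick rate is the one exported to user space (USER_HZ, typically 100, reported by getconf CLK_TCK or sysconf(_SC_CLK_TCK)), which may differ from the kernel's internal HZ. For example, converting the utime field of PID 1 from ticks to seconds:

```shell
hz=$(getconf CLK_TCK)   # user-space tick rate applicable to these fields
utime=$(awk '{sub(/^[0-9]+ \(.*\) /, ""); print $12}' /proc/1/stat)
awk -v j="$utime" -v hz="$hz" 'BEGIN { printf "%.2f\n", j / hz }'
```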

Here is my simple solution written in BASH. It is a Linux/Unix system monitor and process manager built on procfs, like «top» or «ps». There are two versions: a simple monochrome one (fast) and a colored one (a little slower, but useful especially for monitoring the state of processes). I made it sort by CPU usage.

 function my_ps {
     pid_array=`ls /proc | grep -E '^[0-9]+$'`
     clock_ticks=$(getconf CLK_TCK)
     total_memory=$( grep -Po '(?<=MemTotal:\s{8})(\d+)' /proc/meminfo )
     cat /dev/null > .data.ps
     for pid in $pid_array
     do
         if [ -r /proc/$pid/stat ]
         then
             # Join the two words of a space-containing comm with '_' so that
             # word splitting gives stable field positions.
             stat_array=( `sed -E 's/(\([^\s)]+)\s([^)]+\))/\1_\2/g' /proc/$pid/stat` )
             uptime_array=( `cat /proc/uptime` )
             statm_array=( `cat /proc/$pid/statm` )
             comm=( `grep -Po '^[^\s\/]+' /proc/$pid/comm` )
             user_id=$( grep -Po '(?<=Uid:\s)(\d+)' /proc/$pid/status )
             user=$( id -nu $user_id )
             uptime=${uptime_array[0]}
             state=${stat_array[2]}
             ppid=${stat_array[3]}
             priority=${stat_array[17]}
             nice=${stat_array[18]}
             utime=${stat_array[13]}
             stime=${stat_array[14]}
             cutime=${stat_array[15]}
             cstime=${stat_array[16]}
             num_threads=${stat_array[19]}
             starttime=${stat_array[21]}
             total_time=$(( $utime + $stime ))
             # add $cstime - children's CPU time spent in kernel code
             # (can also add $cutime - children's CPU time spent in user code)
             total_time=$(( $total_time + $cstime ))
             seconds=$( awk 'BEGIN {print ( '$uptime' - ('$starttime' / '$clock_ticks') )}' )
             cpu_usage=$( awk 'BEGIN {print ( 100 * (('$total_time' / '$clock_ticks') / '$seconds') )}' )
             resident=${statm_array[1]}
             data_and_stack=${statm_array[5]}
             memory_usage=$( awk 'BEGIN {print ( (('$resident' + '$data_and_stack') * 100) / '$total_memory' )}' )
             printf "%-6d %-6d %-10s %-4d %-5d %-4s %-4u %-7.2f %-7.2f %-18s\n" $pid $ppid $user $priority $nice $state $num_threads $memory_usage $cpu_usage $comm >> .data.ps
         fi
     done
     clear
     printf "\e[30;107m%-6s %-6s %-10s %-4s %-3s %-6s %-4s %-7s %-7s %-18s\e[0m\n" "PID" "PPID" "USER" "PR" "NI" "STATE" "THR" "%MEM" "%CPU" "COMMAND"
     sort -nr -k9 .data.ps | head -$1
     read_options   # defined elsewhere in the full script
 }

