What specifically are wall-clock-time, user-cpu-time, and system-cpu-time in Unix?
I can take a guess based on the names, but what specifically are wall-clock time, user CPU time, and system CPU time in Unix? Is user CPU time the amount of time spent executing user code, while system CPU time is the amount of time spent in the kernel because of the need for privileged operations (like I/O to disk)? What unit of time is this measured in? And is wall-clock time really the number of seconds the process has spent on the CPU, or is the name just misleading?
Wall-clock time is the time that a clock on the wall (or a stopwatch in hand) would measure as having elapsed between the start of the process and ‘now’.
The user-cpu time and system-cpu time are pretty much as you said — the amount of time spent in user code and the amount of time spent in kernel code.
The units are seconds (and subseconds, which might be microseconds or nanoseconds).
The wall-clock time is not the number of seconds that the process has spent on the CPU; it is the elapsed time, including time spent waiting for its turn on the CPU (while other processes get to run).
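If you want to see the three figures from inside a program rather than through the time command, here is a minimal C sketch (assuming a POSIX/Linux system where clock_gettime() and getrusage() are available; the workload is made up purely for illustration) that measures all three around a piece of work:

/* timing_demo.c - sketch: wall-clock vs user vs system CPU time.
 * Assumes a POSIX system with clock_gettime() and getrusage().
 * Compile: cc -O2 timing_demo.c -o timing_demo
 */
#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <unistd.h>

static double tv_to_sec(struct timeval tv)
{
    return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
    struct timespec t0, t1;
    struct rusage ru;

    clock_gettime(CLOCK_MONOTONIC, &t0);   /* wall-clock start */

    /* Some work: a CPU-bound loop plus a sleep. The sleep adds to
     * wall-clock time but to neither CPU time. */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 200000000UL; i++)
        x += i;
    sleep(1);

    clock_gettime(CLOCK_MONOTONIC, &t1);   /* wall-clock end */
    getrusage(RUSAGE_SELF, &ru);           /* user + system CPU time */

    double wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("wall-clock: %.3f s\n", wall);
    printf("user CPU:   %.3f s\n", tv_to_sec(ru.ru_utime));
    printf("system CPU: %.3f s\n", tv_to_sec(ru.ru_stime));
    return 0;
}

The sleep adds roughly one second to the wall-clock figure but to neither CPU figure, which is exactly the difference between elapsed time and CPU time described above.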
@Pacerier: on a single core machine, yes, but multi-core machines and multi-threaded programs can use more than 1 CPU second per elapsed second.
@JonathanLeffler thank you for the answer. I wanted to get the number of nanoseconds that have elapsed, but calculating the CPU time with the formula CPUtime = #clock_cycles / clock_rate cannot be the same as calculating the elapsed time. Do you know if I can get the elapsed time from the CPU time?
@Bionix1441: You cannot derive elapsed time from CPU time for a number of reasons. First, a process can be idle, not consuming any CPU time, for arbitrary periods (for example, a daemon process waiting for a client to connect to it over the network), so it may do nothing for days at a time of elapsed time. Second, if it is running, it may have multiple threads, and if it has, say, 4 threads and there are 4 or more cores on the system, it might rack up 4 CPU seconds of expended effort per second of elapsed time. These show that there’s no simple (or even complex) formula that you could use.
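To make the multi-core point concrete, here is a hedged sketch (assuming POSIX threads and a machine with at least four cores) in which four busy threads accumulate roughly four CPU-seconds for every elapsed second:

/* cpu_vs_wall.c - sketch: multi-threaded CPU time can exceed elapsed time.
 * Assumes POSIX threads and a machine with several cores.
 * Compile: cc -O2 cpu_vs_wall.c -o cpu_vs_wall -lpthread
 */
#include <stdio.h>
#include <pthread.h>
#include <time.h>
#include <sys/resource.h>

#define NTHREADS 4

static void *spin(void *arg)
{
    (void)arg;
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 500000000UL; i++)  /* busy work */
        x += i;
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct timespec t0, t1;
    struct rusage ru;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, spin, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    getrusage(RUSAGE_SELF, &ru);   /* CPU time summed over all threads */

    double wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    printf("wall-clock: %.2f s, user CPU: %.2f s (ratio ~%.1f)\n",
           wall, user, user / wall);
    return 0;
}

On a four-core machine the printed ratio is close to 4; conversely, a daemon blocked waiting for clients shows hours of wall-clock time and almost no CPU time, which is why neither figure can be derived from the other.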
@Catbuilts: Are you aware that the Unix kernel runs separately from user programs? When your program makes a system call (for example, read() or getpid()), the kernel executes code on behalf of your program. The kernel also handles pre-emptive multi-tasking so that other programs get their turn to run, and does some general housekeeping work to keep the system running smoothly. This code is executed in 'kernel space' (also in 'kernel mode'). It is distinct from the code you wrote and from the user libraries (including the system C library) that you run.
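One way to see that split directly is to compare a loop that only computes with a loop that makes a cheap system call on every iteration. The sketch below is only an illustration and assumes Linux; it uses the raw syscall(SYS_getpid) interface so that every iteration really enters the kernel (some C libraries have cached getpid()):

/* user_vs_sys.c - sketch: compute-only loop vs system-call loop.
 * Assumes Linux; syscall(SYS_getpid) forces a real kernel entry.
 * Compile: cc -O2 user_vs_sys.c -o user_vs_sys
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/resource.h>

static void report(const char *label)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);   /* cumulative user and system CPU time */
    printf("%-10s user %ld.%06lds  sys %ld.%06lds\n", label,
           (long)ru.ru_utime.tv_sec, (long)ru.ru_utime.tv_usec,
           (long)ru.ru_stime.tv_sec, (long)ru.ru_stime.tv_usec);
}

int main(void)
{
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++)  /* pure user-mode work */
        x += i;
    report("compute:");

    for (int i = 0; i < 2000000; i++)                /* one kernel entry each */
        syscall(SYS_getpid);
    report("syscalls:");
    return 0;
}

The getrusage() counters are cumulative, so the difference between the two reports shows that the first loop charged almost everything to user time while the second charged most of its time to system time.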
User CPU time vs System CPU time?
Could you explain more about "user CPU time" and "system CPU time"? I have read a lot, but I couldn't understand it well.
The difference is whether the time is spent in user space or kernel space. User CPU time is time spent on the processor running your program’s code (or code in libraries); system CPU time is the time spent running code in the operating system kernel on behalf of your program.
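Another POSIX interface that exposes exactly this split is times(), which reports user and system CPU time in clock ticks. Here is a minimal sketch, assuming a POSIX system where /dev/null exists, that generates some of each and prints the two counters:

/* times_demo.c - sketch: user vs system CPU time via times().
 * Assumes a POSIX system; the counters are reported in clock ticks.
 * Compile: cc -O2 times_demo.c -o times_demo
 */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/times.h>

int main(void)
{
    /* User-space work: a plain compute loop. */
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 200000000UL; i++)
        x += i;

    /* Kernel work on our behalf: repeatedly open and close a file. */
    for (int i = 0; i < 500000; i++) {
        int fd = open("/dev/null", O_WRONLY);
        if (fd >= 0)
            close(fd);
    }

    struct tms t;
    times(&t);
    long hz = sysconf(_SC_CLK_TCK);   /* clock ticks per second */
    printf("user CPU:   %.2f s\n", (double)t.tms_utime / hz);
    printf("system CPU: %.2f s\n", (double)t.tms_stime / hz);
    return 0;
}

The plain loop shows up under user CPU time, while the repeated open()/close() calls, which run kernel code on the program's behalf, show up under system CPU time.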
@user472221 My answer is based on UNIX/Linux; the same concept should apply on Windows. I would guess that most DLLs are user-space, although I am not sure where Windows draws the line. If you really want to find out where your program is using CPU time, use a profiler.
Are "user space" and "kernel space" the same as "user/kernel mode"? If the kernel is running but in user mode, does that count as user time or system time? Or does the distinction only matter in practice on a microkernel?
@JohnMudd: the programmer may have some intuition about where they expect time to be spent, and if that isn't what's really happening, they may want to change their code accordingly. For example, if the program is expected to sit waiting in a call to epoll() for I/O to react to, doing only very transient bursts of processing, but it is unexpectedly spending a significant percentage of its time in user mode, then the programmer may want to investigate where and why. If a web server were doing that, it may be that someone has managed to get a JavaScript coin miner running on it.
Why is the system CPU time (% sy) high?
I am running a script that loads big files. I ran the same script on a single-core openSUSE server and on a quad-core PC. As expected, it is much faster on my PC than on the server. But the script slows the server down and makes it impossible to do anything else on it. My script is:
for 100 iterations:
    load saved data (about 10 MB)
On my PC:

real    0m52.564s
user    0m51.768s
sys     0m0.524s

On the server:

real    32m32.810s
user    4m37.677s
sys     12m51.524s
I wonder why "sys" is so high when I run the code on the server. I used the top command to check memory and CPU usage. There still seems to be free memory, so swapping is not the reason. The high %sy is probably why the server is so slow, but I don't know what is causing it. The process using the highest percentage of CPU (99%) is "myscript". %wa is zero in the screenshot, but it sometimes gets very high (50%). While the script is running, the load average goes above 1, but I have never seen it as high as 2. I also checked my disk:
strt:~ # hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:        16480 MB in  2.00 seconds = 8247.94 MB/sec
 Timing buffered disk reads:    20 MB in  3.44 seconds = 5.81 MB/sec

john@strt:~> df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       245G  102G  131G  44% /
udev            4.0G  152K  4.0G   1% /dev
tmpfs           4.0G   76K  4.0G   1% /dev/shm
I have checked these things, but I am still not sure what the real problem with my server is or how to fix it. Can anyone identify a probable reason for the slowness? What could be the solution? Is there anything else I should check? Thanks!
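Without seeing the loading code it is hard to say, but one classic way for a file-loading loop to burn system CPU time is doing I/O in very small chunks, because every read() is a separate trip into the kernel. The C sketch below is only an illustration of that effect (the file name and chunk sizes are made up); running it with a 1-byte chunk produces a time profile where sys dominates, much like the server output above, while a 64 KiB chunk makes sys almost disappear:

/* read_chunks.c - sketch: small read() buffers inflate system CPU time.
 * Assumes a POSIX system; "testfile" is a hypothetical ~10 MB file.
 * Compile: cc -O2 read_chunks.c -o read_chunks
 * Run:     ./read_chunks testfile 1       (1-byte reads: sys dominates)
 *          ./read_chunks testfile 65536   (64 KiB reads: sys is tiny)
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/resource.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s file chunk_size\n", argv[0]);
        return 1;
    }
    size_t chunk = (size_t)atol(argv[2]);
    char *buf = malloc(chunk);
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0 || !buf) { perror("setup"); return 1; }

    ssize_t n;
    unsigned long calls = 0;
    while ((n = read(fd, buf, chunk)) > 0)   /* each call enters the kernel */
        calls++;
    close(fd);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    printf("%lu read() calls, user %ld.%02lds, sys %ld.%02lds\n", calls,
           (long)ru.ru_utime.tv_sec, (long)(ru.ru_utime.tv_usec / 10000),
           (long)ru.ru_stime.tv_sec, (long)(ru.ru_stime.tv_usec / 10000));
    free(buf);
    return 0;
}

Running the script under strace -c, or checking what buffer size its loading routine uses, would show whether it is making an unusually large number of system calls per iteration on the server.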