Linux: how to load the CPU

How to create a CPU spike with a bash command

I want to create a near 100% load on a Linux machine. It's a quad-core system, and I want all cores going full speed. Ideally, the CPU load would last a designated amount of time and then stop. I'm hoping there's some trick in bash. I'm thinking of some sort of infinite loop.

26 Answers

I use stress for this kind of thing; you can tell it how many cores to max out, and it allows for stressing memory and disk as well.

Example to stress 2 cores for 60 seconds:
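With stress's standard flags, that would be:

stress --cpu 2 --timeout 60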

You need the EPEL repo for CentOS: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

@bfred.it Your cores may utilize hyperthreading, effectively doubling the number of cores (4 physical and 4 virtual). You'll also want to stress the virtual ones for a full load test.

sudo apt-get install stress on Debian-based systems, for completeness. I used this to test a cooling mod on the Intel i7 NUC Kit.

To run more of those to put load on more cores, try to fork it:

fulload() { dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null | dd if=/dev/zero of=/dev/null & }; fulload; read; killall dd

Repeat the command in the curly brackets as many times as the number of threads you want to produce (here, 4 threads). A simple Enter keypress will stop it (just make sure no other dd is running as this user, or you will kill it too).

This actually worked the best for my situation. It also worked in Cygwin. For some reason, the other solutions wouldn't quite spike the CPU. Adding a count and making four processes in parallel worked perfectly. It spiked the CPU to 100% in top and then went back down to zero without any help. Just four lines of code and a "wait".

Reading from /dev/zero and writing to /dev/null is not a very good load generator; you have to run a lot of them to generate significant load. Better to do something like dd if=/dev/urandom | bzip2 -9 >> /dev/null. /dev/urandom requires significantly more effort to generate output, and bzip2 will expend a lot of effort trying to compress it, so the overall CPU usage is a lot higher than "fill a block with zeros, and then throw it away".

I think this one is simpler. Open a terminal, type the following, and press Enter:

yes > /dev/null &

To fully utilize modern CPUs, one line is not enough; you may need to repeat the command to exhaust all the CPU power.
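For example, to start one yes per core (a sketch, assuming nproc is available):

for i in $(seq $(nproc)); do yes > /dev/null & done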

To end all of this, simply run:

killall yes

The idea was originally found here; although it was intended for Mac users, it should work on *nix as well.

+1 Works like a charm, thank you! Worth adding: this command will max out one hyperthread per CPU core, so a dual-core CPU (each core having 2 threads) will get a total load of 25% per yes command (assuming the system is otherwise idle).

Just to add to this: each iteration of this command adds 25 percent load on the CPU (on Android), up to 4 iterations; the rest have no effect (even in terms of clock rate).

If you have multiple terminal sessions or tmux tabs, it's simple to run yes > /dev/null, then Ctrl+C it when you're done.

Although I'm late to the party, this post is among the top results for the Google search "generate load in linux".

The result marked as the solution can be used to generate system load, but I prefer to use sha1sum /dev/zero to impose a load on a CPU core.


The idea is to calculate a hash sum of an infinite data stream (e.g. /dev/zero, /dev/urandom, . ); this process will try to max out a CPU core until it is aborted. To generate load on more cores, multiple commands can be piped together.

e.g., to generate a 2-core load: sha1sum /dev/zero | sha1sum /dev/zero

This seems better than dd for CPU load: I get a max CPU load of 44% with dd (6 instances) and 86%+ with sha1sum. Thanks!

To load 3 cores for 5 seconds:

seq 3 | xargs -P0 -n1 timeout 5 yes > /dev/null 

This results in high kernel (sys) load from the many write() system calls.

If you prefer mostly userland cpu load:

seq 3 | xargs -P0 -n1 timeout 5 md5sum /dev/zero 

If you just want the load to continue until you press Ctrl-C:

seq 3 | xargs -P0 -n1 md5sum /dev/zero 

For some reason the command timeout 5 md5sum /dev/zero always returns an exit code of 124, which causes issues in scripts that treat nonzero exit codes strictly. (124 is timeout's documented exit status when the time limit is hit, so this is expected.)

One core (doesn't invoke an external process):

while true; do true; done

One core (invokes an external process):

while true; do /bin/true; done

The latter only makes both of mine go to ~50% though.

This one will make both go to 100%:

while true; do echo; done

@HaoyuanGe All the CPUs are at 100% only when echoing nothing. Replace do echo; with do echo "some very very long string"; to see the difference.

If you want to keep a server responsive whilst running such a test, do it in a separate shell (another tmux/screen window, or ssh session), and renice that shell first, e.g. in bash: renice 19 -p $$ . It’ll still max the CPUs, but won’t impinge on other processes.

The code while true; do /bin/true; done is good, but when I add a background call and an echo, the CPU usage increases: while true; do /bin/true & echo "b"; done

Here is a program that you can download here.

Install it easily on your Linux system:

./configure
make
make install

and launch it with a simple command line to stress all your CPUs (however many you have) with 40 threads, each running a complex sqrt computation on randomly generated numbers.

You can even define the timeout of the program
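With stress's standard flags, those invocations would look something like:

stress --cpu 40                # 40 workers spinning on sqrt()
stress --cpu 40 --timeout 60   # the same, stopping automatically after 60 seconds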

unlike the proposed solution with the dd command, which deals essentially with IO and therefore doesn't really load your system, because it is mostly moving data around.

The stress program really loads the system, because it deals with computation.

There is already an answer above for the stress command. As that answer says, you can just install it via yum/apt/etc.

An infinite loop is the idea I also had. A freaky-looking one is:

while :; do :; done

(: is the same as true; it does nothing and exits with zero.)

You can call that in a subshell and run it in the background. Doing that $num_cores times should be enough. After sleeping for the desired time, you can kill them all; you get the PIDs with jobs -p (hint: xargs).
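A minimal sketch of that recipe (assuming nproc, and an arbitrary 10-second duration):

num_cores=$(nproc)
for i in $(seq "$num_cores"); do
  (while :; do :; done) &       # one busy subshell per core
done
sleep 10
jobs -p | xargs kill            # stop all the background loops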

This fork bomb, :(){ :|:& };:, will cause havoc on the CPU and will likely crash your computer.

From: cyberciti.biz/faq/understanding-bash-fork-bomb WARNING! These examples may crash your computer if executed. "Once a successful fork bomb has been activated in a system it may not be possible to resume normal operation without rebooting the system, as the only solution to a fork bomb is to destroy all instances of it."

I would split the thing into 2 scripts (the first saved as infinite_loop.bash):

#!/bin/bash
while [ 1 ] ; do
    # Force some computation even if it is useless to actually work the CPU
    echo $((13**99)) 1>/dev/null 2>&1
done
#!/bin/bash
# Either use environment variables for NUM_CPU and DURATION, or define them here
for i in `seq ${NUM_CPU}` ; do
    # Put an infinite loop on each CPU
    infinite_loop.bash &
done
# Wait DURATION seconds then stop the loops and quit
sleep ${DURATION}
killall infinite_loop.bash

To increase the load, or to consume 100% (or X%) of the CPU:


On some systems this will increase the load in steps of X%; in that case you have to run the same command multiple times.

You can then watch the CPU usage by typing a command such as top.

#!/bin/bash
duration=120    # seconds
instances=4     # cpus
endtime=$(($(date +%s) + $duration))
for ((i=0; i<instances; i++)); do
    while (($(date +%s) < $endtime)); do : ; done &
done

I've used bc (an arbitrary-precision calculator), asking it for PI with a big lot of decimals.

$ NUMCPU=$(grep $'^processor\t*:' /proc/cpuinfo |wc -l) 
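The loop that starts the bc processes would then be something like this (a sketch; the scale value is arbitrary, and a(1) is arctan(1) = pi/4 with bc's -l math library):

for i in $(seq $NUMCPU); do
  echo 'scale=10000; 4*a(1)' | bc -l > /dev/null &
done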

This method is strong but seems system-friendly, as I've never crashed a system using it.

#!/bin/bash
while [ 1 ]
do
    # Your code goes here
done

If you do not want to install additional software, you may use a compression utility which utilizes all CPU cores automatically. For example, xz:

 cat /dev/zero | xz -T0 > /dev/null 

This takes an infinite stream of dummy data from /dev/zero and compresses it using all cores available in the system.

I searched the Internet for something like this and found this very handy CPU hammer script.

#!/bin/sh
# unixfoo.blogspot.com
if [ $1 ]; then
    NUM_PROC=$1
else
    NUM_PROC=10
fi
for i in `seq 0 $((NUM_PROC-1))`; do
    awk 'BEGIN {for(i=0;i<10000;i++) for(j=0;j<10000;j++) ;}' &
done

Using examples mentioned here, but also help from IRC, I developed my own CPU stress testing script. It uses a subshell per thread and the endless loop technique. You can also specify the number of threads and the amount of time interactively.

#!/bin/bash
# Simple CPU stress test script

# Read the user's input
echo -n "Number of CPU threads to test: "
read cpu_threads
echo -n "Duration of the test (in seconds): "
read cpu_time

# Run an endless loop on each thread to generate 100% CPU
echo -e "\E[32mStressing ${cpu_threads} threads for ${cpu_time} seconds.\E[37m"
for i in $(seq ${cpu_threads}); do
    let thread=${i}-1
    (taskset -cp ${thread} $BASHPID; while true; do true; done) &
done

# Once the time runs out, kill all of the loops
sleep ${cpu_time}
echo -e "\E[32mStressing complete.\E[37m"
kill 0

Utilizing ideas from here, I created code which exits automatically after a set duration, so you don't have to kill processes:

#!/bin/bash echo "Usage : ./killproc_ds.sh 6 60 (6 threads for 60 secs)" # Define variables NUM_PROCS=$ #How much scaling you want to do duration=$ # seconds function infinite_loop < endtime=$(($(date +%s) + $duration)) while (($(date +%s) < $endtime)); do #echo $(date +%s) echo $((13**99)) 1>/dev/null 2>&1 $(dd if=/dev/urandom count=10000 status=none| bzip2 -9 >> /dev/null) 2>&1 >&/dev/null done echo "Done Stressing the system - for thread $1" > echo Running for duration $duration secs, spawning $NUM_PROCS threads in background for i in `seq $` ; do # Put an infinite loop infinite_loop $i & done 

You can try to test the performance of cryptographic algorithms.
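For example, using OpenSSL's built-in benchmark with one process per core (a sketch, assuming openssl is installed):

openssl speed -multi $(nproc)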

bash -c 'for (( I=100000000000000000000 ; I>=0 ; I++ )) ; do echo $(( I+I*I )) & echo $(( I*I-I )) & echo $(( I-I*I*I )) & echo $(( I+I*I*I )) ; done' &>/dev/null 

and it uses nothing except bash.

To enhance dimba's answer and provide something more pluggable (because I needed something similar), I have written the following using the dd load-up concept 😀

It will check the current number of cores and create that many dd threads. Start and end the core load with Enter.

#!/bin/bash
load_dd() {
    # A single dd load thread
    dd if=/dev/zero of=/dev/null
}

fulload() {
    unset LOAD_ME_UP_SCOTTY
    export cores="$(grep proc /proc/cpuinfo -c)"
    for i in $( seq 1 $( expr $cores - 1 ) )
    do
        export LOAD_ME_UP_SCOTTY="${LOAD_ME_UP_SCOTTY}$(echo 'load_dd | ')"
    done
    export LOAD_ME_UP_SCOTTY="${LOAD_ME_UP_SCOTTY}$(echo 'load_dd &')"
    eval ${LOAD_ME_UP_SCOTTY}
}

echo press return to begin and stop fullload of cores
read
fulload
read
killall -9 dd

Dimba's dd if=/dev/zero of=/dev/null is definitely correct, but it is also worth verifying that the CPU really is maxed out at 100% usage. You can do this with a ps/awk one-liner.
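A sketch of such a one-liner, summing the per-process %CPU column reported by ps:

ps -e -o pcpu= | awk '{sum += $1} END {print sum}'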

This asks for ps output of a 1-minute average of the cpu usage by each process, then sums them with awk. While it's a 1 minute average, ps is smart enough to know if a process has only been around a few seconds and adjusts the time-window accordingly. Thus you can use this command to immediately see the result.

awk is a good way to write a long-running loop that's CPU-bound without generating a lot of memory traffic or system calls, and without using any significant amount of memory or polluting caches, so it slows down other cores a minimal amount. (stress or stress-ng can also do that, if you have either installed and pick a simple CPU-stress method.)

awk 'BEGIN{for(i=0;i<100000000;i++){}}'    # about 3 seconds on 4GHz Skylake

It's a counted loop so you can make it exit on its own after a finite amount of time. (Awk uses FP numbers, so a limit like 2^54 might not be reachable with i++ due to rounding, but that's way larger than needed for a few seconds to minutes.)

To run it in parallel, use a shell loop to start it in the background n times

for i in {1..6};do awk 'BEGIN{for(i=0;i<100000000;i++){}}' & done    ###### 6 threads each running about 3 seconds

$ for i in {1..6};do awk 'BEGIN{for(i=0;i<100000000;i++){}}' & done
[1] 3047561
[2] 3047562
[3] 3047563
[4] 3047564
[5] 3047565
[6] 3047566
$    # this shell is usable. (wait a while before pressing return)
[1]   Done   awk 'BEGIN{for(i=0;i<100000000;i++){}}'
[2]   Done   awk 'BEGIN{for(i=0;i<100000000;i++){}}'
[3]   Done   awk 'BEGIN{for(i=0;i<100000000;i++){}}'
[4]   Done   awk 'BEGIN{for(i=0;i<100000000;i++){}}'
[5]-  Done   awk 'BEGIN{for(i=0;i<100000000;i++){}}'
[6]+  Done   awk 'BEGIN{for(i=0;i<100000000;i++){}}'
$

I used perf to see what kind of load it put on the CPU: it runs 2.6 instructions per clock cycle, so it's not the most friendly to a hyperthread sharing the same physical core. But it has a very small cache footprint, getting negligible cache misses even in L1d cache. And strace will show it makes no system calls until exit.

$ perf stat -r5 -d awk 'BEGIN{for(i=0;i<100000000;i++){}}'

 Performance counter stats for 'awk BEGIN{for(i=0;i<100000000;i++){}}' (5 runs):

          3,277.56 msec task-clock                #    0.997 CPUs utilized            ( +-  0.24% )
                 7      context-switches          #    2.130 /sec                     ( +- 12.29% )
                 1      cpu-migrations            #    0.304 /sec                     ( +- 40.00% )
               180      page-faults               #   54.765 /sec                     ( +-  0.18% )
    13,708,412,234      cycles                    #    4.171 GHz                      ( +-  0.18% )  (62.29%)
    35,786,486,833      instructions              #    2.61  insn per cycle           ( +-  0.03% )  (74.92%)
     9,696,339,695      branches                  #    2.950 G/sec                    ( +-  0.02% )  (74.99%)
           340,155      branch-misses             #    0.00% of all branches          ( +-122.42% )  (75.08%)
    12,108,293,527      L1-dcache-loads           #    3.684 G/sec                    ( +-  0.04% )  (75.10%)
           217,064      L1-dcache-load-misses     #    0.00% of all L1-dcache accesses  ( +- 17.23% )  (75.10%)
            48,695      LLC-loads                 #   14.816 K/sec                    ( +- 31.69% )  (49.90%)
             5,966      LLC-load-misses           #   13.45% of all LL-cache accesses ( +- 31.45% )  (49.81%)

           3.28711 +- 0.00772 seconds time elapsed  ( +-  0.23% )

The most "friendly" to the other hyperthread on an x86 CPU would be a C program like this, which just runs a pause instruction in a loop. (Or portably, a Rust program that runs std::hint::spin_loop .) As far as the OS's process scheduler, it stays in user-space (nothing like a yield() system call), but in hardware it doesn't take up many resources, letting the other logical core have the front-end for multiple cycles.

