How to limit memory usage for a single Linux process without killing it
I know that ulimit can limit memory usage, but if the limit is exceeded, the process is killed. Is there another command or shell builtin that can limit memory usage without killing the process?
> but if the limit is exceeded, the process is killed

This is not correct. In that case malloc() (in a C application) simply returns NULL and your process is not killed. It is probably killed because of attempts to do something with the NULL pointer. See linux.die.net/man/3/malloc: «On error, these functions return NULL.»
2 Answers
Another way besides setrlimit, which can be set using the ulimit utility:
$ ulimit -Sv 500000 # Set a ~500 MB virtual-memory limit (the units are KiB)
is to use Linux’s control groups (cgroups), which can limit a process’s (or a group of processes’) physical memory separately from its virtual memory. For example:
$ cgcreate -g memory:/myGroup
$ echo $(( 500 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
$ echo $(( 5000 * 1024 * 1024 )) > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes
This creates a control group named «myGroup» and caps the processes run under it at 500 MB of physical memory and 5000 MB of memory plus swap combined (that is what memory.memsw limits). To run a process under the control group:

$ cgexec -g memory:myGroup yourApp

(where yourApp stands for whatever command you want to run).
Note: From what I understand, setrlimit limits virtual memory, while cgroups can limit physical memory.
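That distinction is easy to observe from /proc: a process’s virtual size (VmSize, what setrlimit caps) is typically much larger than its resident set (VmRSS, roughly what the cgroup memory controller caps). A small illustrative sketch — `vm_sizes_kb` is a made-up helper name, not a real API:

```python
import re

def vm_sizes_kb():
    """Return (VmSize, VmRSS) in kB for the current process,
    parsed from /proc/self/status."""
    sizes = {}
    with open("/proc/self/status") as f:
        for line in f:
            m = re.match(r"(VmSize|VmRSS):\s+(\d+)\s+kB", line)
            if m:
                sizes[m.group(1)] = int(m.group(2))
    return sizes["VmSize"], sizes["VmRSS"]
```

On a typical system the virtual size is several times the resident set, which is why a virtual-memory ulimit can trip long before you run out of RAM.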
I believe you are wrong in thinking that a limit set with setrlimit(2) will always kill the process.
Indeed, if the stack space is exceeded (RLIMIT_STACK), the process is killed (with SIGSEGV).

But if the limit is on heap memory (RLIMIT_DATA or RLIMIT_AS), mmap(2) fails; if it was called from malloc(3) or friends, that malloc fails and returns NULL.
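This failure-without-killing behaviour can be demonstrated from Python’s resource module, which wraps setrlimit(2); a minimal sketch, assuming a 64-bit Linux (`alloc_fails_under_limit` is an illustrative name):

```python
import resource

def alloc_fails_under_limit(cap_bytes, request_bytes):
    """Cap this process's address space, attempt a large allocation,
    then restore the limit. Returns True if the allocation failed
    gracefully -- i.e. the process was NOT killed."""
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (cap_bytes, hard))
    try:
        buf = bytearray(request_bytes)  # backed by malloc/mmap, which now fails
        del buf
        return False
    except MemoryError:                 # the Python analogue of malloc() == NULL
        return True
    finally:
        resource.setrlimit(resource.RLIMIT_AS, (soft, hard))
```

Under a 2 GB address-space cap, a 4 GB request raises MemoryError and execution simply carries on, mirroring malloc returning NULL in C.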
Some Linux systems are configured with memory overcommit, in which case a successful malloc does not even guarantee the memory is usable. This is a sysadmin setting, /proc/sys/vm/overcommit_memory: 0 is the heuristic default, 1 always overcommits, and echo 2 > /proc/sys/vm/overcommit_memory disables overcommit.
The moral of the story is that you should always check the result of malloc, at least like:

struct mystruct_st *ptr = malloc(sizeof(struct mystruct_st));
if (!ptr) { perror("malloc"); exit(EXIT_FAILURE); }
Of course, sophisticated applications could handle «out-of-memory» conditions more wisely, but it is difficult to get right.
Some people incorrectly assume that malloc never fails. That is their mistake: they then dereference a NULL pointer, getting a SIGSEGV that they deserve.
You could consider catching SIGSEGV with some processor-specific code, see this answer. Unless you are a guru, don’t do that.
How can I limit the CPU and RAM usage for a process?
I’m on an Ubuntu VPS with SFTP and a console. I need one specific process to use only 60% of my CPU and 2048 MB of RAM, and another process to use only 30% of the CPU and 1024 MB of RAM. How can I limit the CPU and RAM usage of a process?
3 Answers
Be warned: Here there be dragons.
When you start going down the path of controlling the resources of applications / processes / threads to this extent, you open a veritable Pandora’s box of problems for the day you have to debug an issue that your rate limiting did not take into account.
That said, if you believe that you know what you’re doing, there are three options available to you: nice , cpulimit , and control groups (Cgroups).
Here is a TL;DR for these three methods:
Nice ⇢ nice
This is a very simple way to deprioritise a task and is quite effective for «one-off» uses, such as reducing the priority of a long-running, computationally expensive task so that it draws heavily on the CPU only when the machine is not being used by other tasks (or people).
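Under the hood this is the nice(2) system call, which Python exposes as os.nice; a tiny sketch, with `run_niced` being a hypothetical name (note that an unprivileged process can only lower its own priority, never raise it):

```python
import os

def run_niced(increment):
    """Raise this process's niceness by `increment` (i.e. lower its
    scheduling priority) and return the new value; the kernel clamps
    niceness at 19."""
    return os.nice(increment)
```

Calling run_niced(10) in a child process before exec’ing is the programmatic equivalent of `nice -n 10 command`.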
CPU Limit ⇢ cpulimit -l 60
If your server’s performance suffers (i.e. stalls) when CPU usage exceeds a certain amount, then cpulimit can help reduce the pressure on the system. It works by repeatedly pausing and resuming the process — sending it SIGSTOP and SIGCONT signals — to keep it under a defined ceiling. cpulimit does not change the process’s nice value; instead it monitors and controls its real-world CPU usage.
You will find that cpulimit is useful when you want to ensure that a process doesn’t use more than a certain portion of the CPU, which your question alludes to, but a disadvantage is that the process cannot use all of the available CPU time when the system is idle (which nice allows).
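cpulimit’s pause-and-resume trick can be sketched directly with signals; a toy illustration (not the real cpulimit implementation, and `pause_and_resume` is an invented name):

```python
import contextlib
import os
import signal
import subprocess
import time

def pause_and_resume(argv, pause_s=0.1):
    """Start argv, freeze it with SIGSTOP (as cpulimit does), read its
    state from /proc, then thaw it with SIGCONT and clean up. Returns the
    state character observed while frozen ('T' means stopped)."""
    child = subprocess.Popen(argv)
    try:
        os.kill(child.pid, signal.SIGSTOP)   # pause the process
        time.sleep(pause_s)                  # give the state change time to land
        with open(f"/proc/{child.pid}/stat") as f:
            # the state field follows the parenthesised command name
            state = f.read().rsplit(")", 1)[1].split()[0]
        return state
    finally:
        with contextlib.suppress(ProcessLookupError):
            os.kill(child.pid, signal.SIGCONT)  # resume before terminating
        child.terminate()
        child.wait()
```

cpulimit simply alternates these two signals on a duty cycle so that the process’s average CPU share stays under the requested ceiling.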
Cgroups ⇢ cgcreate, cgset, cgexec

sudo cgcreate -g cpu:/restrained
sudo cgset -r cpu.shares=768 restrained
sudo cgexec -g cpu:restrained yourApp

(where yourApp stands for the command you want to run under the group).
Cgroups — control groups — are a feature built into the Linux kernel that enables you to control how resources should be allocated. With Cgroups you can specify how much CPU, memory, bandwidth, or combinations of these resources can be used by the processes that are assigned to a group.
A key advantage of Cgroups over nice or cpulimit is that the limits apply to a set of processes, not just one. nice and cpulimit are also limited to restricting CPU usage, whereas Cgroups can limit other resources, such as memory and I/O.
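You can check which cgroups a process already belongs to by reading /proc/&lt;pid&gt;/cgroup; a small sketch (`cgroup_memberships` is a made-up helper name):

```python
def cgroup_memberships(pid="self"):
    """Parse /proc/<pid>/cgroup into (hierarchy-id, controllers, path)
    tuples. On cgroup v2 there is a single entry with an empty
    controllers field."""
    entries = []
    with open(f"/proc/{pid}/cgroup") as f:
        for line in f:
            hier, controllers, path = line.rstrip("\n").split(":", 2)
            entries.append((hier, controllers, path))
    return entries
```

This is handy for verifying that cgexec really did place your process in the group you created.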
If you go down the rabbit-hole of Cgroups then you can hyper-optimise a system for a specific set of tasks.