Linux: what eats memory?

How to find which processes are taking all the memory?

Under Linux, simply press M to sort by physical memory usage (the RES column). Under *BSD, run top -o res or top -o size. But htop is a lot nicer and doesn’t even consume more memory than top (however, it’s not part of the basic toolset, so you might not have it installed).
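
If you prefer not to drive this interactively, the sort order can usually be requested on the command line as well; a minimal sketch, assuming procps-ng top 3.3+ on Linux and the stock top on FreeBSD:

 top -o %MEM        # Linux (procps-ng): start already sorted by memory usage
 top -o res         # FreeBSD: sort by resident size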

@Steven How can we group processes with the same parent? Basically, firefox shows up multiple times, maybe because it spawns multiple child processes. Is it possible to get their combined memory usage?

If you have it installed, I like htop: once it’s launched, you can press F6, the down arrow (to MEM%), then Enter to sort by memory.

In Solaris the command you would need is:

This will list all processes in order of descending process image size. Note that the latter is based on memory committed to the process by the OS, not its resident physical memory usage.
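
The exact command is not quoted here; a likely candidate, offered as an assumption rather than the answerer's exact wording, is prstat, whose -s flag sorts descending by the given key:

 prstat -s size     # sort by total image (committed/virtual) size
 prstat -s rss      # sort by resident physical memory instead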

There are supposedly versions of "top" available for Solaris, but these are not part of the standard installation.

Once top starts, press F to switch to the sort field screen. Choose one of the fields listed by pressing the key shown on the left; you probably want N for MEM%.

If you want MEM%, pressing ‘M’ does the same as stated above. Pressing ‘c’ adds command-line parameters to the process list, which may be informative for your problem.

This command will identify the top memory consuming processes:

ps -A --sort -rss -o pid,pmem:40,cmd:500 | head -n 6 | tr -s " "

Doesn’t work on Solaris 9:
 ps: illegal option -- -
 ps: ort is an invalid non-numeric argument for -s option
 ps: illegal option -- r
 ps: s is an invalid non-numeric argument for -s option
 ps: unknown output format: -o pmem:40
 ps: unknown output format: -o cmd:500

One nice alternative to top is htop. Check it out, it is much more user-friendly than regular top.

Globally: it’s always recommended to use a log-analysis tool such as Splunk or ELK for historical logs, so that with a query language you can easily get the PIDs and their CPU and memory usage.

AT SERVER/OS LEVEL: From inside top you can try the following:

 Press SHIFT+M ---> this sorts processes by memory usage in descending order. 
$ ps -eo pid,ppid,cmd,%mem,%cpu --sort=-%mem | head -10 

This gives the top processes by memory usage (head -10 shows the header line plus the nine biggest consumers). You can also use the vmstat utility to check RAM usage at the moment you run it; it reports the current state, not history.
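
For example, a small vmstat sketch (flags per procps vmstat; adjust the interval and count to taste):

 vmstat -S M 5 3    # memory and swap figures in MiB, sampled every 5 seconds, 3 reports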


How to find out what processes are eating all the memory

I’m using SUSE and I have 31GB of memory: Mem: 31908592k total, 31429632k used, 478960k free, 12176k buffers. How do I find out what processes are eating up all my memory?


The used memory was 22GB. I ran an rsync command and it started eating my memory, so I did kill -9 on rsync. The used memory had increased but was never reduced after killing the rsync process.

Why did you kill -9 rsync?! rsync has proper mechanisms for shutting down, and kill -9 is never one of them.

Why did I kill -9 rsync? Because I noticed it was taking lots of memory and I would be running out of it. Is there another way to stop rsync? Thank you for your reply.

Or even just plain kill without the -9, or kill -2 if you want the exact same effect as Ctrl-C but don’t have an open terminal connected to the process.
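
A minimal sketch of those gentler options (the PID is a placeholder; pgrep is used here only to find it):

 pgrep -a rsync     # find the rsync PID(s) and full command line
 kill 12345         # SIGTERM (the default): asks rsync to shut down cleanly
 kill -2 12345      # SIGINT: same effect as pressing Ctrl-C in its terminal
 kill -9 12345      # SIGKILL: last resort only; no cleanup is possible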

3 Answers

Looks like you’re running free, looking at the first line, and freaking out. That’s OK; a lot of people do that.

Ignore the first line of output from free. Really. Linux is very aggressive about caching data from disk in RAM, even including "swapped out" virtual memory pages, and all that cached data is counted as "used" in free’s first line, even though it can be made available again in a microsecond (or less, these days) if needed.

You need to be looking at the second line of output from free, the one labeled "-/+ buffers/cache:". That’s the line which tells you how much memory is actually tied up by running programs, ignoring what’s being used for the moment by buffers and cache.

For example, my machine currently shows:

              total       used       free     shared    buffers     cached
 Mem:       3083924    2177300     906624          0     461344     837008
 -/+ buffers/cache:      878948    2204976
 Swap:      4000148          0    4000148

Although there are only 906624 KiB of RAM sitting completely unused at the moment, there are an additional 461344 KiB of buffers and 837008 KiB of cached data which can be made available more-or-less instantly if something more important (i.e., just about anything) needs the memory, giving me a total of 2204976 KiB (a little over 2 GiB) actually available to me.
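
With the old free layout shown above, the -/+ buffers/cache figure is simply free + buffers + cached; a quick sketch for computing it yourself, assuming that old column layout (newer free versions print an "available" column instead):

 free | awk '/^Mem:/ {print $4 + $6 + $7, "KiB effectively available"}'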


Something eats all memory (I suspect a memory leak in some app). How to detect what?


I have a server which runs a liquidsoap+icecast bundle and a simple website (httpd+mysqld). Nothing special. Around 2000+ visitors per day, with around 50 online simultaneously on average. The server has 8GB RAM. As time goes by, the amount of free memory constantly decreases, although nothing new is started on the server and there are no new users. At some point it starts to swap, the load goes up and the server becomes unresponsive. Usually what I do is just restart the server. What can be done to detect what exactly leaks memory? I use top to monitor resource usage, but as far as I can see it shows nothing helpful. Is there any way to find out what uses that much memory, or what starts swapping to disk heavily? Any way to free up memory without rebooting the server?

Any reason why you don’t try restarting some of the services (apache, liquidsoap) instead of the server?

I originally responded for normal memory usage. I have updated with a set of tools which may help identify the problem.


@jamespo, actually I tried that, but it had no effect, so restart was the only thing I knew could help.

The 4027092k cached should explain the memory usage, no? I’m working at the moment on a similar problem elsewhere, and so far I’ve figured out that the caching/writeback behaviour can be tuned with the following parameters: vfs_cache_pressure, vm.dirty_ratio and vm.dirty_background_ratio. This is not a complete fix and any feedback is most welcome. I hope it’s the right direction to go.
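
These are sysctl knobs; a small sketch for inspecting and temporarily changing them (the value shown is only an example, not a recommendation):

 sysctl vm.vfs_cache_pressure vm.dirty_ratio vm.dirty_background_ratio
 sudo sysctl -w vm.vfs_cache_pressure=200   # reverts at reboot unless persisted in /etc/sysctl.conf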

4 Answers

Running top in batch mode to report memory sizes periodically can be used to see who is using the memory when things go south. Running sar in batch mode should give some good diagnostics on memory use and related I/O. Running munin to monitor the system should give you a graph with good detail on what memory is being used for. This may help a lot.
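
A sketch of the batch-mode idea (the -o sort flag assumes a reasonably recent procps-ng top, sar comes from the sysstat package, and the log path is just an example):

 top -b -o %MEM -d 60 -n 1440 >> /var/log/top-mem.log &   # a memory-sorted snapshot every minute for a day
 sar -r 60 60                                             # memory utilisation every 60 seconds, 60 reports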

You can use limits.conf to limit the maximum address-space (or data-segment) size of programs. Properly set, this should kill off any program that leaks memory past the limit, since its allocations will start to fail. This works with the pam_limits module. Limits can also be set with the ulimit shell builtin.
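
A sketch of what such a cap could look like (the user name, file name and value are hypothetical examples; pam_limits expresses the address-space limit in KiB):

 # in /etc/security/limits.d/memcap.conf (example file):
 #   www-data  hard  as  2097152    # ~2 GiB of address space per process, in KiB
 ulimit -v 2097152                  # equivalent cap for the current shell session, in KiB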

You are running a few programs which could use large amounts of memory. Some things you could look at include:

  • Poorly programmed applications running under apache2 can leak memory. You should see the memory size increase when this happens. You can tune apache2 to recycle children after a certain number of uses by setting MaxRequestsPerChild to 100 or so. If this works around the problem, you still need to fix the underlying leak. I would watch this first (see the sketch after this list).
  • MySQL may try to load data into memory. If you have a lot of data in memory this may cause some thrashing, but should not be as dramatic as you are seeing.
  • If you have a large tmpfs file system mounted, then you may leak memory if files are not deleted after use. Large, long-lived files can also be a problem.
  • If the problem occurs at roughly the same time of day, you may have a scheduled program which is leaking memory.
  • If you have a program that allocates shared memory, but does not release it before exiting, you will have a relatively invisible memory leak. If the shared memory is locked in memory, then it may force swapping. The amount of available shared memory is typically relatively limited.
  • The liquidsoap+icecast bundle could run into buffering issues that use memory. I haven’t used this combination, so I am not sure how this would appear.
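
Since the apache2 item above is the one to watch first, here is a small sketch of doing exactly that (it assumes the process name is httpd, as in the question; on Debian-style systems it would be apache2):

 watch -n 60 'ps -C httpd -o pid,rss,etime,cmd --sort -rss | head'   # steadily growing RSS on long-lived children points to a leak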

Normal memory usage: Free memory is not something you want a lot of. If your system has been up for a long time and has a lot of free memory something is wrong. Every time you read or write a file, the blocks will go into the buffer cache. This will reduce your free memory, and is a good thing. The system will keep enough free space to start a few programs without looking elsewhere for memory. As many programs run quickly, their memory will be returned to the free pool when they stop running.


When you read a file that is in buffer cache, no disk access is required and the read is resolved from the buffer cache. Writes use a similar mechanism. If your system needs memory, the buffer cache is one of the first places that is used. Most buffers can be released immediately.

If you have a memory leak, you will see free memory and buffers both begin to shrink. This is still not a severe problem, as the leaked memory should eventually be moved to swap space. Your system will still run fine until you fill the swap space, and draw down the remaining free space to the point programs can’t be started. It is typical that a small amount of swap space may be used.


How to troubleshoot what eats memory?

I’m trying to figure out why memory consumption started to increase constantly on my server during the last few hours. I’ve tried to find the cause at the application level, but with no success, which is why I’m now looking into a possible server-side cause. I’m not a pro at server administration, so any help is appreciated. First regular memory was eaten up; now swap consumption is also constantly increasing. My server runs CentOS 7 with kernel 3.10.0-514.26.2.el7.x86_64.

SOLUTION
In the end the issue was identified as being caused by a recently updated server library. The accepted answer is a good reminder, when you’re stressed out by memory usage, to trace back what had changed in your system before the issue appeared. Some tips I’d been looking for and found very useful are described in "How to find which processes are taking all the memory?". Below are the commands I used, which may help in such a situation:

  • ps auwx --sort rss (processes sorted by memory usage)
  • ps -fu username (processes owned by a given user)
  • htop (its usage/analysis view showed many hung cron-launched application processes in my case; I configured htop to output both PID and PPID, because I needed to correlate the PPID with processes logged in /var/log/cron)
  • free -m

               total        used        free      shared  buff/cache   available
 Mem:           7565        6525         440          47         599         657
 Swap:          8191        2612        5579
cat /proc/meminfo:

 MemTotal:        7747260 kB
 MemFree:          253960 kB
 MemAvailable:     498904 kB
 Buffers:            6160 kB
 Cached:           189076 kB
 SwapCached:       467788 kB
 Active:          5572588 kB
 Inactive:        1258540 kB
 Active(anon):    5498664 kB
 Inactive(anon):  1185908 kB
 Active(file):      73924 kB
 Inactive(file):    72632 kB
 Unevictable:           0 kB
 Mlocked:               0 kB
 SwapTotal:       8388604 kB
 SwapFree:        5686452 kB
 Dirty:               104 kB
 Writeback:             0 kB
 AnonPages:       6168400 kB
 Mapped:            68668 kB
 Shmem:             48676 kB
 Slab:             456672 kB
 SReclaimable:     389064 kB
 SUnreclaim:        67608 kB
 KernelStack:        7232 kB
 PageTables:       106848 kB
 NFS_Unstable:          0 kB
 Bounce:                0 kB
 WritebackTmp:          0 kB
 CommitLimit:    12262232 kB
 Committed_AS:   10244216 kB
 VmallocTotal:   34359738367 kB
 VmallocUsed:       26276 kB
 VmallocChunk:   34359705340 kB
 HardwareCorrupted:     0 kB
 AnonHugePages:   5191680 kB
 HugePages_Total:       0
 HugePages_Free:        0
 HugePages_Rsvd:        0
 HugePages_Surp:        0
 Hugepagesize:       2048 kB
 DirectMap4k:       83968 kB
 DirectMap2M:     8300544 kB
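
When scanning a dump like this, a few fields carry most of the signal (MemAvailable, SwapFree, AnonPages, Slab, Committed_AS); a quick way to pull just those:

 grep -E 'MemAvailable|SwapFree|AnonPages|Slab|Committed_AS' /proc/meminfo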

How can I proceed to find if there’s any issue in how the server is functioning or configured in terms of memory usage?

