
UNIX vs Windows memory deallocation

My understanding is that on UNIX, when memory is freed, it is not returned to the operating system; it stays in the process to be reused by the next call to malloc. On Windows, I understand that the memory actually is returned to the operating system. Is there a significant difference between these two approaches, or are they just two different ways of doing the same thing? And if there are pros and cons to the two methods, what are they? EDIT: Thanks for the clarification. I had always thought this was an OS thing (since processes never seem to decrease in size on UNIX-like systems, but do on Windows).

Processes decreasing in size on Windows might be a different thing: Windows trims a process's working set (resident set size) when its window is minimized, and that is probably the value you were looking at in Task Manager. Firefox, for one, had to work around that Windows "feature" because it slowed the browser down too much.
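For the curious, that same trimming can be triggered programmatically. The sketch below is only an illustration of the mechanism the comment refers to; it assumes the documented Win32 call SetProcessWorkingSetSize, where passing -1 for both limits asks the kernel to page out as much of the current process's working set as it can.

#include <windows.h>
#include <iostream>

int main() {
    // Ask the memory manager to trim this process's working set as far as it
    // can; the pages remain valid and are faulted back in on the next access,
    // but the memory figure shown in Task Manager drops sharply.
    if (!SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1)) {
        std::cerr << "SetProcessWorkingSetSize failed: " << GetLastError() << "\n";
        return 1;
    }
    std::cout << "Working set trimmed; compare the value in Task Manager.\n";
    return 0;
}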


There isn’t much difference between Windows and Unix with respect to that.

In both, there are two levels of allocation. The operating system allocates memory to the process in large chunks (one page or more; on x86, the page size is usually 4096 bytes). The runtime libraries, running within the process, subdivide this space and allocate parts of it to your code.
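As a quick illustration of that page granularity, you can ask the operating system for the page size directly. This is a minimal Linux sketch using sysconf; on Windows the equivalent information comes from GetSystemInfo.

#include <unistd.h>   // sysconf, _SC_PAGESIZE
#include <cstdio>

int main() {
    long page = sysconf(_SC_PAGESIZE);
    std::printf("page size: %ld bytes\n", page);   // typically 4096 on x86
    return 0;
}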

To return the memory to the operating system, first all the memory allocated from one of these large chunks has to be released to the runtime library. The runtime library then can, if it wants, tell the operating system to release that chunk of memory.

On Linux, you have brk and mmap. brk controls the size of a single large chunk of memory allocated to your process; you can expand or shrink it, but only at one end. malloc traditionally expands this chunk when it needs more memory to allocate from, and shrinks it when possible. However, shrinking is not easy: a single ill-timed one-byte allocation at the end is enough to prevent the chunk from shrinking, even if everything before that allocation has been freed. This is the source of the "Unix doesn't release memory back" meme.
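A minimal, Linux/glibc-only sketch of that pinning effect follows. It is illustrative only: it peeks at the program break with sbrk(0) (which you would not normally mix with malloc) and uses the glibc extension malloc_trim to ask the allocator to give memory back when it can.

#include <unistd.h>   // sbrk
#include <malloc.h>   // malloc_trim (glibc extension)
#include <cstdlib>
#include <cstdio>

int main() {
    std::printf("break at start:        %p\n", sbrk(0));

    // Many small allocations are served from the brk-managed heap.
    enum { N = 10000 };
    void* blocks[N];
    for (int i = 0; i < N; ++i) blocks[i] = std::malloc(1024);
    void* pinned = std::malloc(1);               // ill-timed allocation at the top
    std::printf("break after mallocs:   %p\n", sbrk(0));

    for (int i = 0; i < N; ++i) std::free(blocks[i]);
    malloc_trim(0);
    std::printf("break after frees:     %p\n", sbrk(0));  // still high: 'pinned' blocks shrinking

    std::free(pinned);
    malloc_trim(0);
    std::printf("break after last free: %p\n", sbrk(0));  // now the break can come down
    return 0;
}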

However, there's also anonymous mmap. Anonymous mmap requests a chunk of memory from the operating system, which can be placed anywhere in the process memory space. This chunk can easily be returned when it is no longer needed, even if later allocations have not been released yet. malloc also uses mmap (particularly for large allocations, where a whole chunk of memory can easily be returned after being freed).
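Here is a minimal sketch of anonymous mmap on Linux, the same pattern malloc itself uses for large allocations; munmap hands the whole region straight back to the kernel no matter what else is still allocated.

#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t len = 8 * 1024 * 1024;   // 8 MiB
    void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
    // ... use the region ...
    munmap(p, len);   // the whole chunk is returned to the kernel immediately
    return 0;
}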

Of course, on both Windows and Linux, if you do not like the behaviour of the memory allocator (or allocators) in the runtime libraries, you can use your own, requesting memory from the operating system and subdividing it the way you want (or sometimes requesting memory from another allocator, but in larger blocks). One interesting use is an allocator for all the memory associated with a task (for instance, a web server request), which is discarded wholesale at the end of the task, with no need to free all the pieces individually; another interesting use is an allocator for fixed-size objects (for instance, five-byte objects), which avoids memory fragmentation. A sketch of the per-task idea follows.
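Here is a minimal sketch of that per-task arena; the names Arena, alloc and reset are made up for illustration and not taken from any particular library. Every allocation for one request comes out of a single block, and the whole block is recycled in constant time when the task ends.

#include <cstddef>
#include <cstdlib>
#include <new>

class Arena {
public:
    explicit Arena(std::size_t capacity)
        : buf_(static_cast<char*>(std::malloc(capacity))), cap_(capacity), used_(0) {
        if (!buf_) throw std::bad_alloc();
    }
    ~Arena() { std::free(buf_); }

    // Bump-pointer allocation: round up to the alignment, then advance.
    void* alloc(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) & ~(align - 1);
        if (p + n > cap_) return nullptr;   // arena exhausted
        used_ = p + n;
        return buf_ + p;
    }

    void reset() { used_ = 0; }   // "free" everything the task allocated, in O(1)

private:
    char* buf_;
    std::size_t cap_, used_;
};

A web server built this way would typically keep one Arena per worker, call alloc for everything tied to the current request, and call reset once the response has been sent.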



Is Linux More Efficient Than Windows When it Comes to RAM Consumption?

With lower system requirements for Linux distributions than for Windows, switching to Linux is a great way to rejuvenate an old computer: most distributions need less disk space and place a lighter load on your computer's CPU.

But when it comes to RAM, it depends. To get to the bottom of this question, let's first consider what RAM is.

What Is RAM?

RAM is an acronym for random access memory: fast, temporary working storage for data your computer needs to access frequently. It is not the same as your hard drive, and unlike a hard drive it does not keep data when the power is off, which means that when you restart your PC it returns to an empty state.

There are two types of RAM: DRAM and SRAM. DRAM is far more common because it is cheaper than SRAM. Both do the same job, but DRAM provides access times of about 60 nanoseconds whereas SRAM manages about 10.

So where does this leave us?

Well, the recommended system requirements for each OS are a good place to start. Microsoft recommends 4 GB of RAM for Windows 10 users, while Canonical, the developer of Ubuntu (the most popular Linux distribution), recommends 2 GB.

Even that does not tell the whole story: Ubuntu ships with extras such as animations and other goodies, and if you don't need them you can run a lighter Linux on an old computer with even less than 2 GB. If your old Windows computer needs more RAM, switching to Linux can save you some money.

How RAM Works

A good, fast web browser may be able to load websites quickly, but loading a website will always be faster if the information is already stored somewhere on your computer.

Web browsers cache the sites you visit so they can load them faster next time, and they do this by keeping website data in RAM. A similar principle applies to word documents while they are being edited.

This is why gamers need more RAM than the average PC user: the computer has to keep the data for many different game sequences close at hand.

Why You Can Bet on Linux

Both Linux and Windows use up GBs of RAM. But there are significant differences when it comes to managing RAM usage and we argue this is where Linux has an advantage.

You have far fewer options with Windows when it comes to freeing up RAM: you can cut down the background programs and services running at the same time, or you can buy more RAM. The cheapest stopgap is to turn a USB drive into makeshift extra memory with Windows' ReadyBoost feature.

With Linux, on the other hand, you can do all of these things and more. For example, you could switch to a lighter variant of Ubuntu (Lubuntu or Xubuntu, for instance) that is much easier on resources; there are plenty of options to choose from.

All you can do with Windows is adjust animation and theme settings; the graphical user interface itself remains, and it is still heavy. On both systems you can run lightweight apps, but this pays off more on Linux because the desktop environment itself can be made lighter.

So Which Uses Less RAM?

After all is said and done, you cannot simply assume that running a Linux desktop means you are consuming less RAM. If your computer has only 512 MB of RAM, Linux can make it feel like a new machine, but that still depends on how many RAM-hungry tasks you run; gaming, for example, may still make the system feel slow.


Unfortunately, browsing the web is also one of those RAM-intensive tasks. Many Linux distros use less RAM than Windows 10, some more so than others, and that will determine how closely your Linux system compares to Windows, but it is very safe to say the comparison will usually come out in Linux's favour.

Do you have any points you think we have missed in this article? Please share your thoughts in the comments below!


Windows vs Linux — C++ Thread Pool Memory Usage

I have been looking at the memory usage of some C++ REST API frameworks in Windows and Linux (Debian). In particular, I have looked at these two frameworks: cpprestsdk and cpp-httplib. In both, a thread pool is created and used to service requests. I took the thread pool implementation from cpp-httplib and put it in a minimal working example below, to show the memory usage that I am observing on Windows and Linux.

// TaskQueue and ThreadPool taken from https://github.com/yhirose/cpp-httplib
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <iostream>
#include <list>
#include <map>
#include <memory>
#include <mutex>
#include <thread>
#include <utility>
#include <vector>

using namespace std;

class TaskQueue {
public:
    TaskQueue() = default;
    virtual ~TaskQueue() = default;
    virtual void enqueue(std::function<void()> fn) = 0;
    virtual void shutdown() = 0;
    virtual void on_idle() {}
};

class ThreadPool : public TaskQueue {
public:
    explicit ThreadPool(size_t n) : shutdown_(false) {
        while (n) {
            threads_.emplace_back(worker(*this));
            cout << "Thread created, " << n << " remaining." << endl;
            n--;
        }
    }

    ThreadPool(const ThreadPool&) = delete;
    ~ThreadPool() override = default;

    void enqueue(std::function<void()> fn) override {
        std::unique_lock<std::mutex> lock(mutex_);
        jobs_.push_back(fn);
        cond_.notify_one();
    }

    void shutdown() override {
        // Stop all worker threads.
        {
            std::unique_lock<std::mutex> lock(mutex_);
            shutdown_ = true;
        }
        cond_.notify_all();
        // Join.
        for (auto& t : threads_) {
            t.join();
        }
    }

private:
    struct worker {
        explicit worker(ThreadPool& pool) : pool_(pool) {}

        void operator()() {
            for (;;) {
                std::function<void()> fn;
                {
                    std::unique_lock<std::mutex> lock(pool_.mutex_);
                    pool_.cond_.wait(
                        lock, [&] { return !pool_.jobs_.empty() || pool_.shutdown_; });
                    if (pool_.shutdown_ && pool_.jobs_.empty()) { break; }
                    fn = pool_.jobs_.front();
                    pool_.jobs_.pop_front();
                }
                assert(true == static_cast<bool>(fn));
                fn();
            }
        }

        ThreadPool& pool_;
    };
    friend struct worker;

    std::vector<std::thread> threads_;
    std::list<std::function<void()>> jobs_;
    bool shutdown_;
    std::condition_variable cond_;
    std::mutex mutex_;
};

// MWE
class ContainerWrapper {
public:
    ~ContainerWrapper() {
        cout << "Destroying ContainerWrapper, map size " << data.size() << endl;
    }
    map<pair<int, int>, double> data;
};

void handle_post() {
    cout << "Start of handle_post, thread id " << this_thread::get_id() << endl;
    ContainerWrapper cw;
    // Build up a map of ~100K entries (a few MiB) and let it be destroyed on exit.
    for (int i = 0; i < 1000; ++i) {
        for (int j = 0; j < 100; ++j) {
            cw.data[make_pair(i, j)] = 10.0;
        }
    }
    cout << "End of handle_post, map size " << cw.data.size() << endl;
}

int main(int argc, char* argv[]) {
    cout << "Start of main." << endl;
    std::unique_ptr<TaskQueue> task_queue(new ThreadPool(40));
    for (size_t i = 0; i < 50; ++i) {
        cout << "Enqueue task " << i + 1 << endl;
        task_queue->enqueue([]() { handle_post(); });
        // Sleep enough time for the task to finish.
        std::this_thread::sleep_for(std::chrono::seconds(5));
    }
    task_queue->shutdown();
    return 0;
}

[Figure: win_v_lin_50_seq_tasks_40_threads_rss, Windows vs Linux memory usage for 50 sequential tasks on a 40-thread pool]

When I run this MWE and look at the memory consumption on Windows vs Linux, I get the graph above. For Windows, I used perfmon to get the Private Bytes value. On Linux, I used docker stats --no-stream with a --format string that prints the container's memory usage, and logged that value; it was in line with the res value reported by top for the process running inside the container. It appears from the graph that when a thread allocates memory for the map variable in the handle_post function on Windows, the memory is given back when the function exits, before the next call to the function. This is the type of behaviour I was naively expecting. I have no experience of how the OS deals with memory allocated by a function that is being executed on a thread that stays alive, i.e. in a thread pool like here. On Linux, the memory usage keeps growing, and memory does not appear to be given back when the function exits. Once all 40 threads have been used and there are 10 more tasks to process, the memory usage appears to stop growing. Can somebody give a high-level view of what is happening here on Linux from a memory-management point of view, or even some pointers on where to look for background on this specific topic?

Edit 1: I have edited the graph above to show the rss value from running ps -p <pid> -h -o etimes,pid,rss,vsz every second in the Linux container, where <pid> is the id of the process being tested. It is in reasonable agreement with the output of docker stats.

Edit 2: Based on a comment below regarding STL allocators, I removed the map from the MWE by replacing the handle_post function and adding the #include directives the new version needs. The handle_post function now just allocates and sets memory for 500K ints, which is approximately 2 MiB.
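For reference, a stand-in that matches the description in Edit 2 (no std::map, 500K ints, roughly 2 MiB allocated and written per task) could look like the following; the container choice and exact sizes are assumptions, not the poster's actual code.

#include <vector>

// Hypothetical reconstruction of the Edit 2 handle_post described above.
void handle_post() {
    const int n = 500000;         // 500K ints, roughly 2 MiB
    std::vector<int> buf(n, 1);   // allocate the memory and set every element
}   // buf is released here; whether the pages go back to the OS is the question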

I get the same behaviour here. I reduced the number of threads to 8 and the number of tasks to 10 in the example; the graph below shows the results.

Edit 3: I have added the results from running on a Linux CentOS machine. They broadly agree with the results from the Debian docker image.

[Figure: 8_threads_10_seq_tasks_e3]

Edit 4: Based on another comment below, I ran the example under valgrind's massif tool. The massif command-line parameters are in the images below. I ran it with --pages-as-heap=yes (second image below) and without this flag (first image below). The first image suggests that ~2 MiB is allocated on the (shared) heap as the handle_post function is executed on a thread and then freed as the function exits. This is what I would expect and what I observe on Windows. I am not sure yet how to interpret the graph with --pages-as-heap=yes, i.e. the second image. I can't reconcile the output of massif in the first image with the rss value from the ps command shown in the graphs above. If I run the Docker image and limit the container memory to 12 MB using docker run --rm -it --privileged --memory="12m" --memory-swap="12m" --name=mwe_test cpp_testing:1.0, the container runs out of memory on the 7th allocation and is killed by the OS. I get Killed in the output, and when I look at dmesg I see Killed process 25709 (cpp_testing) total-vm:529960kB, anon-rss:10268kB, file-rss:2904kB, shmem-rss:0kB. This would suggest that the rss value from ps accurately reflects the (heap) memory actually being used by the process, whereas the massif tool calculates what it should be based on the malloc/new and free/delete calls. This is just my basic assumption from this test. My question still stands: why is it, or why does it appear, that the heap memory is not being freed or deallocated when the handle_post function exits?

[Figure: massif_output]

Edit 5: I have added below a graph of the memory usage as the number of threads in the thread pool increases from 1 to 4. The pattern continues as the thread count goes up to 10, so I have not included 5 to 10. Note that I added a 5-second pause at the start of main, which is the initial flat line in the graph for the first ~5 seconds. It appears that, regardless of thread count, memory is released after the first task is processed but is not released (kept for reuse?) after tasks 2 through 10. It may suggest that some memory-allocation parameter is tuned during the execution of task 1 (just thinking out loud!).

[Figure: increase_num_threads]

Edit 6: Based on the suggestion in the detailed answer below, I set the environment variable MALLOC_ARENA_MAX to 1 and to 2 before running the example. This gives the output in the following graph, which is as expected given the explanation of this variable's effect in that answer.

[Figure: effect_of_malloc_arena_max]
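For anyone reproducing Edit 6, the arena cap can be set either in the environment before launching the binary or from inside the program. The sketch below is glibc-specific and assumes the mallopt parameter M_ARENA_MAX; it is illustrative rather than part of the original MWE.

#include <malloc.h>   // mallopt, M_ARENA_MAX (glibc extension)

int main() {
    // Cap glibc malloc at one arena so every thread allocates from the same heap;
    // this must run before the worker threads start allocating.
    mallopt(M_ARENA_MAX, 1);
    // ... create the ThreadPool and enqueue tasks as in the MWE above ...
    return 0;
}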

