What are high memory and low memory on Linux?
@hiro, you mean «HIGHMEM» is what LDD3 calls a «kernel virtual address»? I agree with you, it is confusing: LDD3 defines «LOWMEM» and «HIGHMEM», and also defines «kernel virtual address» and «kernel logical address». They are the same things, but under different names. That is the «beauty» of software: it is so dependent on the language used to describe it.
On a 32-bit architecture, the address space range for addressing RAM is 0x00000000 to 0xFFFFFFFF (4 GB).
The Linux kernel splits that up 3/1 (it could also be 2/2, or 1/3¹) into user space (high memory) and kernel space (low memory) respectively.
Every newly spawned user process gets an address range inside this area. User processes are generally untrusted and are therefore forbidden to access kernel space. Further, they are considered non-urgent: as a general rule, the kernel tries to defer allocating memory to those processes.
A kernel process gets its address range here. The kernel can directly access this 1 GB of addresses (well, not the full 1 GB: 128 MB are reserved for high memory access).
Processes spawned in kernel space are trusted, urgent, and assumed error-free, so their memory requests are processed immediately.
Every kernel process can also access the user space range if it wishes. To achieve this, the kernel maps an address from user space (the high memory) into its kernel space (the low memory); the 128 MB mentioned above are reserved specifically for this.
¹ Whether the split is 3/1, 2/2, or 1/3 is controlled by the CONFIG_VMSPLIT_... option; you can probably check under /boot/config* to see which option was selected for your kernel.
This is old and I'm not sure you're still around. But I want to ask one thing: the 128 MB reserved in kernel space (for high memory access), does it hold all the references to user-space memory areas? So a kernel process can access any user space by referring to this area, right?
What exactly does «can directly access» mean here? I mean, isn't the kernel itself accessed via the virtual memory mechanism?
I believe that what you say about high/low memory is wrong. I believe that in a pure 32-bit system, the kernel can access the full 3 GB of user space directly (the kernel can access both kernel space and user space). However, with a PAE kernel things get more complex: now you have more than 3 GB of RAM, each process can be 3 GB, and you cannot access the whole of user space directly. This is where high mem, and that 128 MB of memory in kernel space, comes in. With a 64-bit kernel it gets simpler again: there is no high mem, as all user space is accessible from the kernel.
@mgalgs ¼, 2/4 and ¾ were just the set of default choices that were exposed. Since 2007, one can also select 5/16ths and 15/32nds. If you know which #define line to edit, you can pick an almost arbitrary split of your own.
The first reference to turn to is Linux Device Drivers (available both online and in book form), particularly chapter 15 which has a section on the topic.
In an ideal world, every system component would be able to map all the memory it ever needs to access. And this is the case for processes on Linux and most operating systems: a 32-bit process can only access a little less than 2^32 bytes of virtual memory (in fact about 3GB on a typical Linux 32-bit architecture). It gets difficult for the kernel, which needs to be able to map the full memory of the process whose system call it’s executing, plus the whole physical memory, plus any other memory-mapped hardware device.
So when a 32-bit kernel needs to map more than 4GB of memory, it must be compiled with high memory support. High memory is memory which is not permanently mapped in the kernel’s address space. (Low memory is the opposite: it is always mapped, so you can access it in the kernel simply by dereferencing a pointer.)
When you access high memory from kernel code, you need to call kmap first, to obtain a pointer from a page data structure (struct page). Calling kmap works whether the page is in high or low memory. There is also kmap_atomic, which has added constraints but is more efficient on multiprocessor machines because it uses finer-grained locking. The pointer obtained through kmap is a resource: it uses up address space. Once you've finished with it, you must call kunmap (or kunmap_atomic) to free that resource; then the pointer is no longer valid, and the contents of the page can't be accessed until you call kmap again.
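The kmap/kunmap pattern described above can be sketched as kernel-side code. This is a hedged illustration in the LDD3-era API, not compilable outside a kernel build tree; the page is assumed to come from a prior allocation such as alloc_page(GFP_HIGHUSER), and modern kernels prefer kmap_local_page over plain kmap.

```c
/* Sketch of the kmap/kunmap pattern (kernel code, LDD3-era API). */
#include <linux/highmem.h>
#include <linux/string.h>

static void zero_one_page(struct page *page)
{
        /* kmap works for both low- and high-memory pages; for a
         * low-memory page it simply returns the address the page
         * already has in the kernel's permanent linear mapping. */
        void *vaddr = kmap(page);

        memset(vaddr, 0, PAGE_SIZE);

        /* The mapping is a scarce resource: release it as soon as
         * possible. After kunmap, vaddr must not be dereferenced. */
        kunmap(page);
}
```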
Thanks Gilles for the answer. But I am still not able to grasp the whole concept. Could you please put it a bit more simply, without reducing the information in it?
This is relevant to the Linux kernel; I'm not sure how other Unix kernels handle this.
The High Memory is the segment of memory that user-space programs can address. It cannot touch Low Memory.
Low Memory is the segment of memory that the Linux kernel can address directly. If the kernel must access High Memory, it has to map it into its own address space first.
There was a patch introduced recently that lets you control where the split is. The tradeoff is that you can take addressable memory away from user space so that the kernel can have more memory that it does not have to map before using.
HIGHMEM is a range of the kernel's memory space, but it is NOT memory you access directly; it is a place where you map what you want to access.
A typical 32-bit Linux virtual memory map looks like: user space at 0x00000000-0xBFFFFFFF (3 GB) and kernel space at 0xC0000000-0xFFFFFFFF (1 GB).
(CPU-specific vectors and the like are ignored here.)
Linux splits the 1GB kernel space into 2 pieces, LOWMEM and HIGHMEM. The split varies from installation to installation.
If an installation chooses, say, 512MB-512MB for LOW and HIGH mems, the 512MB LOWMEM (0xc0000000-0xdfffffff) is statically mapped at kernel boot time; usually the first so many bytes of physical memory are used for this, so that virtual and physical addresses in this range have a constant offset of, say, 0xc0000000.
On the other hand, the latter 512MB (HIGHMEM) has no static mapping (although you could leave pages semi-permanently mapped there, but you must do so explicitly in your driver code). Instead, pages are temporarily mapped and unmapped here so that virtual and physical addresses in this range have no consistent mapping. Typical uses of HIGHMEM include single-time data buffers.
For people looking for an explanation in the context of the Linux kernel memory space, beware that there are two conflicting definitions of the high/low memory split (unfortunately there is no standard; one has to interpret it in context):
- «High memory» defined as the totality of kernel space in VIRTUAL memory. This is a region that only the kernel can access; it comprises all virtual addresses greater than or equal to PAGE_OFFSET. «Low memory» then refers to the remaining addresses, which correspond to the user-space memory accessible from each user process. For example: on 32-bit x86 with the default PAGE_OFFSET, high memory is any address ADDR with ADDR ≥ 0xC0000000 = PAGE_OFFSET (i.e. the upper 1 GB). This is why 32-bit processes on Linux are typically limited to 3 GB. Note that PAGE_OFFSET cannot be configured directly; it depends on the configurable VMSPLIT_x options (source). To summarize: on 32-bit archs, virtual memory is by default split into a lower 3 GB (user space) and an upper 1 GB (kernel space). On 64-bit, PAGE_OFFSET is not configurable and depends on architectural details that are sometimes detected at runtime during kernel load. On x86_64, PAGE_OFFSET is 0xffff888000000000 for 4-level paging (typical) and 0xff11000000000000 for 5-level paging (source). For ARM64 it is usually 0x8000000000000000. Note, though, that if KASLR is enabled, this value is intentionally unpredictable.
- «High memory» defined as the portion of PHYSICAL memory that cannot be mapped contiguously with the rest of the kernel virtual memory. A portion of the kernel virtual address space can be mapped as a single contiguous chunk onto the so-called physical «low memory». To fully understand what this means, a deeper knowledge of the Linux virtual memory space is required; I would recommend going through these slides. From the slides: this kind of «high/low memory» split is only applicable to 32-bit architectures where the installed physical RAM is relatively large (more than ~1 GB). Otherwise, i.e. when the physical address space is small (less than ~1 GB), all of physical RAM fits into the kernel's directly mapped region and no «high memory» is needed.
Help me shrink the space used in Kali Linux so that there is an unpartitioned area for a second OS.
Have you tried GParted from a live CD, or from within Kali, to shrink the partition?
root@kali:~# umount /dev/sda1
umount: /: target is busy
(In some cases useful info about processes that use the device is found by lsof(8) or fuser(1).)
That is the error I get. As I understand it, «target is busy» means it cannot be unmounted because it is in use, right? What should I do then?
Boot from any LiveCD that includes gparted, and change the disk layout from the live system.
Just pull out one of the RAM sticks and there will be less memory!
umount /dev/sda1
umount: /: target is busy
As I understand it, «target is busy» means it cannot be unmounted because it is in use,
You are trying to unmount the root partition of a running system. Of course it is in use.
Plug in a second flash drive then! Come on, it's like you don't get it.
In short, thanks everyone, I managed to do it with a LiveCD. But since the damned Windows should preferably be installed first rather than second, my GRUB got wiped from the Linux side. I reinstalled it, of course, but after Windows, when I booted into Linux, it got wiped again. So I removed Windows and reinstalled Linux, but there is a new problem: wine won't install. Please help, I would be very grateful.
root@kali:~# dpkg --add-architecture i386 && apt-get update && apt-get install wine32
Reading package lists... Done
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package wine32 is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from a source not listed in sources.list
E: Package «wine32» has no installation candidate
Install Ubuntu. You will have far fewer problems, and it will be easier to find solutions to the ones that do come up. Don't like Ubuntu? Take Linux Mint, Fedora, Rosa Linux, openSUSE. Don't take Debian: it is a base to build on, not a finished desktop.
logon ★ ( 23.02.17 22:13:29 MSK )
Last edited: logon 23.02.17 22:14:41 MSK (2 edits total)
Can I install two Linux systems? Won't there be the same problems later that I had when Windows was installed alongside Linux?
How to trigger action on low-memory condition in Linux?
So, I thought this would be a pretty simple thing to locate: a service / kernel module that, when the kernel notices userland memory is running low, triggers some action (e.g. dumping a process list to a file, pinging some network endpoint, whatever) within a process that has its own dedicated memory (so it won’t fail to fork() or suffer from any of the other usual OOM issues). I found the OOM killer, which I understand is useful, but which doesn’t really do what I’d need to do. Ideally, if I’m running out of memory, I want to know why. I suppose I could write my own program that runs on startup and uses a fixed amount of memory, then only does stuff once it gets informed of low memory by the kernel, but that brings up its own question. Is there even a syscall to be informed of something like that? A way of saying to the kernel «hey, wake me up when we’ve only got 128 MB of memory left»? I searched around the web and on here but I didn’t find anything fitting that description. Seems like most people use polling on a time delay, but the obvious problem with that is it makes it way less likely you’ll be able to know which process(es) caused the problem.
Question: have you evaluated remote monitoring products such as Nagios, LogicMonitor, etc? This is the kind of thing products like those are typically good for in most environments.
Yep, we have Nagios. We poll memory at 10-minute intervals. The problem is that a process can sometimes balloon very rapidly, even in as little as a minute or two, and it's hard to do a post-mortem when that happens because there are a lot of moving parts on those systems.
Sure, and you don't want to monitor your machines to death. The only tool I've used for that kind of resolution is sar, and that may help you perform a post-mortem. Continually reporting that locally generated data would be pretty expensive on the network, though. Are you planning to keep the data local (or via something like NFS), much like kdump?
Keeping it local for now. We have centralized logging & monitoring, but information at a fine granularity is only something we need when some specific problem comes up with an individual system, so we’d just be logging in and poking around that system, anyway.
Also, sar is pretty cool, thanks for the recommendation. I tend to use htop or regular top, though they're resource-hungry on a good day, so... maybe not the best tool for the job. 😛