Shared memory segments in Linux

What are Linux shared memory ‘segments’ vs. ‘pages’?

I can't find a definition for "page", though. shmmax, for its part, defines the max size of a shared memory segment. So I don't understand the advice that shmall should be at least ceil(shmmax/PAGE_SIZE). (I'm researching this in order to determine how to set up my server for Postgres.)

1 Answer

Segment: A segment is your interface into the shared memory. A segment is made up of one or more pages. If you (or your process) haven’t created a segment, you’re not using shared memory.

ceil: AKA 'ceiling'. A well-defined math function that returns the smallest integer greater than or equal to its argument (i.e. it rounds up). See Wikipedia: Floor and ceiling functions

PAGE_SIZE is the number of bytes the OS uses to split up its chunks of memory. You can find the size with getconf:

# getconf PAGE_SIZE
4096

shmmax is the maximum size of any individual segment in bytes (not pages).

shmall needs to be at least ‘ceil(shmmax/PAGE_SIZE)’ because if it was any less you could not create a segment that’s shmmax in size. You would run out of pages to use.
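For example, you can compute that minimum with integer arithmetic, where ceil(a/b) becomes (a + b - 1) / b. A minimal sketch, using an example 8 GiB shmmax rather than whatever your kernel currently has:

page_size=$(getconf PAGE_SIZE)      # typically 4096
shmmax=8589934592                   # example value: an 8 GiB shmmax, in bytes
# ceil(shmmax / PAGE_SIZE): the smallest shmall (in pages) that still allows
# one segment of shmmax bytes
echo $(( (shmmax + page_size - 1) / page_size ))    # prints 2097152 with 4 KiB pages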

Let's say you want to use no more than 8 MiB for shared memory on the system (MB is base 10; MiB, mebibytes, is base 2, which is what your computer actually uses when calculating sizes).

To find how many pages 8 MiB is, you simply divide by PAGE_SIZE.
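A quick worked example, assuming the 4096-byte PAGE_SIZE from above:

echo $(( 8 * 1024 * 1024 / 4096 ))    # 8 MiB in bytes / page size = 2048 pages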

Now let’s say that you know you only need a single segment that’s 512K (KiB, not KB) in size for postgres. You have all the data to calculate the minimum number you should set shmall to.

The smallest you should set shmall to would be 128 pages. There’s nothing stopping you from setting it higher. Shmall is simply a limit specifying that you will not use more than that amount of memory for shared memory whether there’s one segment or ten.
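Putting the example numbers together, a sketch of the corresponding settings (run as root; kernel.shmmax and kernel.shmall are the standard sysctl names, and the values simply mirror this example):

# 512 KiB segment with 4 KiB pages: 512 * 1024 / 4096 = 128 pages minimum
# allowing 8 MiB of shared memory in total: 8 * 1024 * 1024 / 4096 = 2048 pages
sysctl -w kernel.shmmax=524288    # largest single segment, in bytes (512 KiB)
sysctl -w kernel.shmall=2048      # total shared memory, in pages (8 MiB)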


Linux — Shared Memory (SHM) (/dev/shm)

The shared memory system can also be used to set permissions on memory.

There are two different types of shared memory implementations:

  • System V IPC shared memory, managed with the shmget / shmat / shmctl calls and listed by ipcs
  • POSIX shared memory (shm_open / mmap), which on Linux is backed by the tmpfs mounted at /dev/shm

Management

By default, your operating system includes an entry in /etc/fstab to mount /dev/shm.
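A typical entry looks something like the following sketch (the exact options vary by distribution; size= is optional, and tmpfs defaults to half of RAM when it is omitted):

# /etc/fstab
tmpfs   /dev/shm   tmpfs   defaults,size=2g   0 0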

File structure

shm / shmfs is also known as tmpfs.

tmpfs means temporary file storage facility. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage device.


How to check its size?

To check the size of the shared memory file system, enter the following command:
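For example, using df on the mount point (the sizes in the output below are illustrative):

$ df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           7.8G     0  7.8G   0% /dev/shm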

Limit

To determine the current shared memory limits you can use the ipcs command (for example, ipcs -lm):

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 1073741824
max total shared memory (kbytes) = 17179869184
min seg size (bytes) = 1

Parameters

shmmax

shmmax defines the maximum size (in bytes) of a single shared memory segment.

We can get the parameter limit by reading the proc filesystem.
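For example (the value shown is just the 1 TiB limit from the ipcs output above, expressed in bytes):

$ cat /proc/sys/kernel/shmmax
1099511627776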

shmall

shmall defines the total amount of shared memory available on the system, counted in pages; in this example it is 2097152.
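It can be read the same way (the value is simply the one from this example):

$ cat /proc/sys/kernel/shmall
2097152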

segments

The existing segments can be listed with ipcs -m:

------ Shared Memory Segments --------
key        shmid       owner   perms  bytes         nattch  status
0x45110010 1774485506  oracle  660    1048576       2
0x00000000 3112963     oracle  660    67108864      51
0x00000000 3145732     oracle  660    1543503872    51
0x910ac490 3178501     oracle  660    2097152       51
0x6611c0d9 1774518278  oracle  660    126921994088  1       locked
0x6711c0d9 1774551047  oracle  660    33554432      1
0x1111c0df 1775206408  oracle  660    273722634328  2       locked
0x1211c0df 1775239177  oracle  660    33554432      2

Process

The creator PID (cpid) and last-operation PID (lpid) of each segment are shown by ipcs -mp:

------ Shared Memory Creator/Last-op --------
shmid       owner   cpid   lpid
1774485506  oracle  30581  11420
3112963     oracle  24249  11377
3145732     oracle  24249  11377
3178501     oracle  24249  11377
1774518278  oracle  30572  11420
1774551047  oracle  30572  11420

lpid is the process ID of the last job to attach or detach from the shared memory segment or change the semaphore value.

top

SHR: The amount of shared memory used by a task. It simply reflects memory that could be potentially shared with other processes.


Shared Memory Segment in Operating System

Where does shared memory belong? Is it owned by each individual process, like the stack and heap, so that other programs cannot access it (just as one program cannot access the stack of another)? Or is it a common segment of memory that can be used by any number of processes? The figures below show my question diagrammatically. Figure 1:

-----------------      -----------------      -----------------
| stack         |      | stack         |      | stack         |
|               |      |               |      |               |
| Shared m/y    | ---> | Shared m/y    | <--- | Shared m/y    |
|               |      |               |      |               |
-----------------      -----------------      -----------------
   Process 1              Process 2              Process 3

Figure 2:

---------------------------------------------------------------
|                                                             |
|                        Shared Memory                        |
|                                                             |
---------------------------------------------------------------
        |                       ^                      |
        |                       |                      |
-----------------      -----------------      -----------------
| stack         |      | stack         |      | stack         |
|               |      |               |      |               |
| heap          |      | heap          |      | heap          |
|               |      |               |      |               |
| Data segment  |      | Data segment  |      | Data segment  |
|               |      |               |      |               |
| text          |      | text          |      | text          |
-----------------      -----------------      -----------------
   Process 1              Process 2              Process 3

In Figure 1, each process has a segment of shared memory within its own address space, and the shared memory of process 2 is accessed by process 1 and process 3. In Figure 2, the shared memory is a single segment, outside any one process, that is accessed by all the processes. Which of these two arrangements do processes actually use for a shared memory segment?

2 Answers

The correct way to think about this is like so:

  • The system has a certain amount of physical memory that is available to the operating system (the physical RAM chips in your computer).
  • Each process has a virtual address space, which does not directly correspond to physical memory.
  • The operating system can map any part of the physical memory it has available into the virtual address space of a process.

Basically, the OS can say: "put this physical memory chunk into the virtual address space at address 0x12345678". All data in a process's virtual address space ultimately resides somewhere in physical memory. Stack, heap, shared memory and so on are all the same in that regard. The only thing that distinguishes shared memory is that multiple processes have the same piece of physical memory mapped into their address space.
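A quick way to look at one process's virtual address space on Linux is its maps file in /proc (a sketch; the addresses and entries will differ on your machine):

# The virtual memory map of the process reading the file; the stack, heap and
# any shared mappings all show up here as address ranges.
cat /proc/self/maps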

[Figure: system memory to virtual address space mapping]

In reality, things are a bit more complicated, but this description gives the basic idea.

I suspect part of your confusion is from terminology. In pre-64-bit Intel, data was organized in segments. The term segment is also used in linkers to describe how data is assembled in a program (which on 32-bit Intel may be mapped to hardware segments). However, for your question, you should eliminate the term segment.

The general way things work in Intel 64-bit and most non-Intel systems is that the physical memory is divided into PAGE FRAMES of some fixed size (e.g. 4K, 1K). The memory management unit of the CPU operates on PAGES of the same size. The operating system sets up a linear logical address space for each process consisting of pages. The operating system maps the pages of the logical process address space to physical page frames using a PAGE TABLE.

When each process runs, it sees its own logical address space with addresses in the range 0 to some maximum. The pages within each process's address space are mapped to physical page frames, and the memory management unit automatically translates logical (page) addresses into physical (page frame) addresses using the page table(s).

This system keeps each process from messing with other processes. Generally, if Process X accesses logical address Q and Process Y accesses logical address Q, they will be accessing different physical memory locations because their page tables will have different mappings.

Every system I am aware of that uses logical memory translation has the ability for multiple processes to map a logical page (or pages) to the same physical page frame(s): shared memory. Processes can use this mechanism to quickly exchange data (at the cost of having to manage synchronization of access to that data).

In this type of sharing, the physical page frame does not have to be mapped to the same logical address (and usually is not).

Process X can map page frame P to page A, while Process Y can map page frame P to page B.

There is another form of shared memory that usually is implemented quite differently. The processor or operating system (on some processors) defines a range of logical addresses for a System Address Space. This range of addresses is the same for all processes and all processes have the same mapping of logical addresses to physical page frames in this range.

The system address space is protected so that it can only be accessed in kernel mode, so processes cannot muck with each other.

Think in terms of pages rather than segments here.


How to list processes attached to a shared memory segment in linux?

I don't think you can do this with the standard tools. You can use ipcs -mp to get the process ID of the last process to attach/detach but I'm not aware of how to get all attached processes with ipcs .

With a two-process-attached segment, assuming they both stayed attached, you can possibly figure out from the creator PID cpid and last-attached PID lpid which are the two processes but that won't scale to more than two processes so its usefulness is limited.

The cat /proc/sysvipc/shm method seems similarly limited but I believe there's a way to do it with other parts of the /proc filesystem, as shown below:

When I do a grep on the procfs maps for all processes, I get entries containing lines for the cpid and lpid processes.

For example, I get the following shared memory segment from ipcs -m :

------ Shared Memory Segments --------
key        shmid   owner  perms  bytes  nattch  status
0x00000000 123456  pax    600    1024   2       dest

and, from ipcs -mp, the cpid is 3956 and the lpid is 9999 for that given shared memory segment (123456).
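In other words, the ipcs -mp entry for that segment would look roughly like this (illustrative output built from the numbers above):

------ Shared Memory Creator/Last-op --------
shmid      owner   cpid   lpid
123456     pax     3956   9999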

Then, with the command grep 123456 /proc/*/maps, I see:

/proc/3956/maps: blah blah blah 123456 /SYSV000000 (deleted)
/proc/9999/maps: blah blah blah 123456 /SYSV000000 (deleted)

So there is a way to get the processes that attached to it. I'm pretty certain that the dest status and (deleted) indicator are because the creator has marked the segment for destruction once the final detach occurs, not that it's already been destroyed.

So, by scanning of the /proc/*/maps "files", you should be able to discover which PIDs are currently attached to a given segment.
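A small sketch that automates that scan (a hypothetical helper; it assumes, as above, that the shmid appears as the inode field of a /SYSV mapping in each /proc/<pid>/maps):

#!/bin/sh
# Usage: ./shm-attached.sh <shmid>    (shmid as reported by `ipcs -m`)
shmid="$1"
for maps in /proc/[0-9]*/maps; do
    # -w matches the shmid as a whole word; SYSV marks a System V shm mapping
    if grep -w "$shmid" "$maps" 2>/dev/null | grep -q SYSV; then
        pid=${maps#/proc/}
        echo "${pid%/maps}"
    fi
done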

