Processes in the Linux Operating System


This post will explore the following topics: Process Management, Process Creation, Process Scheduling, and Process Destruction.

Process Management

"Process management includes creating and deleting processes and providing mechanisms for processes to communicate and synchronize with each other"[1].

A process is an active entity: a program in execution. A process executes its instructions sequentially; a program counter specifies the next instruction to execute, and a set of resources is associated with the process. A single process cannot execute its instructions in parallel. A program becomes a process once its executable file has been loaded into memory[1]. The memory layout of a process is typically divided into multiple sections, as shown in Figure 1.1. These sections include:
Text section — the executable code
Data section — global variables
Heap section — memory that is dynamically allocated while the program runs
Stack section — temporary data storage for function invocations (such as function parameters, return addresses, and local variables)
Processes change their states as they execute. The state of a process is defined in part by its current activity. The possible states, shown in Figure 1.2, are:
• New – the process is being created
• Running – instructions are being executed
• Waiting – the process is waiting for some event to occur (such as an Input/Output (I/O) completion or the reception of a signal)
• Ready – the process is waiting to be assigned to a processor
• Terminated – the process has finished execution

All operating systems represent these same states, though the names may vary.

Every process in the operating system is represented by a Process Control Block (PCB), also called a task control block, and each process is identified by a process ID (PID). The PCB holds many pieces of information associated with a specific process, among them:
• Process state – new, ready, running, waiting, halted, and so on.
• Program counter – the address of the next instruction to be executed for the process. This state must be saved when an interrupt occurs so that the process can later continue correctly.
• Central Processing Unit (CPU) registers – these vary in number and type depending on the computer architecture (e.g., accumulators, index registers, stack pointers). Like the program counter, this state information must be saved when an interrupt occurs; otherwise, the process will not continue correctly when it is rescheduled to run.
• CPU-scheduling information – process priority, pointers to scheduling queues, and other scheduling parameters.
• Memory-management information – depending on the memory system used by the OS, this may include the values of the base and limit registers and the page tables or segment tables.
• Accounting information – CPU time used, real time elapsed, time limits, process numbers, and so on.
• I/O status information – the list of I/O devices allocated to the process, a list of open files, and so on.

In summary, the PCB, shown in Figure 1.3, is the repository of all the information needed to start or restart a process.

Process management in Linux treats each single-threaded process, each thread within a multithreaded process, and the kernel itself as a distinct task.
A process is then represented by two key elements: the process control block and the additional information describing the user address space.
The PCB always remains in memory, but the latter data can be paged in and out of memory[2].

Process Creation

A new process is created by replicating the Process Control Block of an existing process via the fork() system call. The creating process is called the parent process, whereas the newly created process is called the child process.
A parent process can have many children, but a child process has exactly one parent.
Parent and child processes share resources, but they have distinct address spaces.
Figure 1.4 illustrates process creation using fork(). The fork() system call creates the new process; the exec() system call then replaces the process's memory with a new program. The parent calls wait() to suspend itself until the child terminates[3].

Process Scheduling

Often, multiple processes require access to the CPU at the same time. The objective of the process scheduler is to maximize CPU utilization by deciding which of the processes in the queue should be executed next.
The scheduler maintains queues of processes:
• Ready queue – all processes residing in main memory, ready and waiting to execute.
• Wait queue – processes waiting for an event (e.g., I/O completion).

The ready queue can be implemented as a first-in, first-out (FIFO) queue, a priority queue, a tree, or simply an unordered linked list.

Types of schedulers:
• Long-term (job scheduler) – selects processes from the job pool and loads them into memory for execution.
• Medium-term (swapper) – removes processes from memory, which reduces the degree of multiprogramming.
• Short-term (CPU scheduler) – selects the process to be executed next from the ready queue and allocates the CPU to it.

A context switch is an integral part of multitasking. It saves and restores the state of the CPU in the Process Control Block so that a process can be resumed from the same point at a later stage. Context switching allows many processes to share a single CPU.

The process scheduler allocates the CPU to processes according to a scheduling algorithm. These algorithms are either non-preemptive (once a process enters the running state, it cannot be preempted until it completes its CPU burst) or preemptive (the scheduler may preempt a low-priority running process at any moment when a higher-priority process arrives in the ready state).

Process scheduling algorithms:
• First-Come, First-Served (FCFS) Scheduling – a non-preemptive algorithm based on a FIFO (First In, First Out) queue. Because of its high average waiting time, this algorithm often performs poorly.
• Shortest-Job-Next (SJN) Scheduling – a non-preemptive algorithm. It gives the minimum average waiting time for a given set of processes, but it requires knowing each process's CPU time in advance.
• Priority Scheduling – a non-preemptive algorithm. Each process is assigned a priority, and the process with the highest priority is executed first, followed by the next highest, and so on. Priorities can be based on memory requirements, time requirements, etc.
• Shortest Remaining Time (SRT) – a preemptive scheduling algorithm. The CPU is assigned to the job closest to completion, but a running job can be preempted by a newer job with a shorter remaining time. Like SJN, it requires the CPU time to be known in advance.
• Round-Robin (RR) Scheduling – a preemptive scheduling algorithm. Each process is given a fixed time slice in which to execute. Once a process has run for that length of time, it is preempted and the next process runs for its time slice.
• Multilevel Queue Scheduling – processes with similar characteristics are grouped into separate queues, and each queue can have its own priority and scheduling algorithm[4].
