OS

Pages in this section

  • Address space (OS)
    Last edited: 2026-02-05

    Address space (OS)

    For a process, the address space is the virtual memory associated with the executing program. It keeps memory management within the program simple and abstracts the handling of physical memory away to the OS.

  • Asynchronous programming
    Last edited: 2026-02-05

    Asynchronous programming

    Asynchronous programming is a programming paradigm that allows a program to handle operations that may take an indeterminate amount of time, such as I/O -bound tasks, without blocking the execution of the entire program. Instead of waiting for an operation to complete before moving on to the next one, asynchronous programming enables a program to initiate an operation and then continue executing other tasks while waiting for the operation to finish. Once the operation completes, the program is notified, and the relevant code can be executed.
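
    As a sketch of this pattern, the following Python uses `asyncio` (the `fetch` coroutine and its delays are illustrative): two simulated I/O-bound operations are started together, and the total wait is roughly that of the longest one rather than their sum.

```python
import asyncio

async def fetch(name, delay):
    # Simulate an I/O-bound operation (e.g. a network call).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Start both operations, then wait for both; the tasks overlap
    # while waiting, so total time is ~max(delay), not the sum.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))

results = asyncio.run(main())
```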

  • Atomic instruction
    Last edited: 2025-04-08

    Atomic instruction

    An atomic instruction in computer systems is an operation that is indivisible — it either completes entirely or does not happen at all, with no intermediate state visible to other threads or processors. Atomicity is crucial in concurrent programming to avoid race conditions when multiple threads access and modify shared data.
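
    Python exposes no user-level atomic instructions, so the sketch below emulates an atomic fetch-and-add with a lock; on real hardware a single instruction (e.g. x86 `LOCK XADD`) provides the same indivisibility without any lock. The `AtomicCounter` class is invented for illustration.

```python
import threading

class AtomicCounter:
    """Emulates an atomic fetch-and-add; real hardware does this in one
    indivisible instruction with no intermediate state visible."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_add(self, n=1):
        with self._lock:   # stands in for the hardware atomicity guarantee
            old = self._value
            self._value += n
            return old

counter = AtomicCounter()
threads = [
    threading.Thread(target=lambda: [counter.fetch_and_add() for _ in range(1000)])
    for _ in range(4)
]
for t in threads: t.start()
for t in threads: t.join()
# Exactly 4000 increments happened; none were lost to interleaving.
```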

  • Buddy Allocator
    Last edited: 2026-02-05

    Buddy Allocator

    The buddy allocator is a memory allocator used in the Linux kernel to efficiently manage contiguous blocks of memory. It works by dividing memory into blocks whose sizes are powers of 2.
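
    A minimal sketch of the allocation path, assuming per-order free lists and ignoring freeing and buddy merging (which the real kernel also does):

```python
def buddy_alloc(free_lists, order):
    """Allocate a block of size 2**order from per-order free lists,
    splitting a larger block into 'buddies' when needed. A toy sketch;
    the real kernel tracks physical page frames and merges buddies on free."""
    k = order
    while k < len(free_lists) and not free_lists[k]:
        k += 1                     # find the smallest available block
    if k == len(free_lists):
        return None                # out of memory
    addr = free_lists[k].pop()
    while k > order:               # split down, keeping the upper buddy free
        k -= 1
        free_lists[k].append(addr + (1 << k))
    return addr

# 1 MiB of memory as a single order-20 block; orders 0..20.
free_lists = [[] for _ in range(21)]
free_lists[20].append(0)
a = buddy_alloc(free_lists, 12)    # ask for a 4 KiB block
```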

  • Cache coherence
    Last edited: 2025-04-08

    Cache coherence

    Cache coherence refers to the process of keeping multiple cached copies of the same memory location consistent across different caches. This is critical in multi-core processors, where each CPU core may have its own private cache.

  • Checkpointing
    Last edited: 2026-02-05

    Checkpointing

    Checkpointing is an operation performed by the operating system in which it copies the current process state and saves a write-protected copy of it. The operating system can then resume the process from that state in the future. This is useful for:

  • Concurrency
    Last edited: 2026-02-05

    Concurrency

    Concurrency is a technique for handling large tasks that must wait on resources outside the executor's control. The idea is to start many different tasks and switch to another task whenever the current one is blocked from progressing. Common implementations are asynchronous programming or multi-threading using a single kernel thread.

  • Conditional variables (Mutex)
    Last edited: 2026-02-05

    Conditional variables (Mutex)

    A mutex may need special operations applied upon some condition being met. (For example, processing a list when it is full.) To implement this, we use a data structure called a conditional variable that holds at least:

  • Consistency model
    Last edited: 2025-04-13

    Consistency model

    A consistency model is an agreement between memory and the software that is using it. More precisely it guarantees certain behaviour of the memory if the software behaves in a certain way. The guarantees it can make relate to access order and propagation and visibility of updates.

  • Context switch (CPU)
    Last edited: 2026-02-05

    Context switch

    A context switch is when the CPU goes from running one process to running a different one. This involves writing the old process's PCB into memory, then fetching the new process's PCB from memory and loading it into the CPU registers.

  • Copy on write (COW)
    Last edited: 2026-02-05

    Copy on write (COW)

    If two processes are using the same memory, the operating system can let them share access to the same frame, copying the data across only when a write is initiated by either process. This defers work by the operating system until it is absolutely necessary.
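
    A toy illustration of the idea in Python (the `CowBytes` class is invented for this sketch): two holders share one buffer, and a private copy is made only on the first write.

```python
class CowBytes:
    """Copy-on-write sketch: two 'processes' share one buffer until one
    of them writes, at which point the writer gets a private copy."""
    def __init__(self, shared):
        self._buf = shared          # shared reference, no copy yet
        self._private = False

    def read(self, i):
        return self._buf[i]         # reads never trigger a copy

    def write(self, i, value):
        if not self._private:
            self._buf = bytearray(self._buf)   # the copy happens only now
            self._private = True
        self._buf[i] = value

frame = bytearray(b"hello")
p1, p2 = CowBytes(frame), CowBytes(frame)
p1.write(0, ord("H"))               # p1 copies; p2 still sees the original
```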

  • CPU register
    Last edited: 2026-02-05

    CPU register

    A CPU register is a small, fast storage location within the Central processing unit (CPU) of a computer. They hold data that the CPU is currently processing such as operands (the values to be operated on) or the result of operations. They also hold memory addresses or instruction pointers. They normally are small in size such as 32 or 64 bits . (This often is referenced within the type of CPU you have.) There are different types of registers:

  • Deadlock
    Last edited: 2026-02-05

    Deadlock

    A deadlock is when two or more threads each wait on a mutex held by another, meaning none of them can ever make progress.
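
    A common way to avoid deadlock is to impose a global lock order. In this hedged sketch, both threads take the two locks in the same order, which removes the circular wait; if one thread took them in the opposite order, the two could block on each other forever.

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
results = []

def worker(name, first, second):
    # Both threads take the locks in the SAME global order, removing
    # the circular-wait condition and therefore the deadlock.
    with first:
        with second:
            results.append(name)

t1 = threading.Thread(target=worker, args=("t1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("t2", lock_a, lock_b))
t1.start(); t2.start()
t1.join(); t2.join()
```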

  • Demand paging
    Last edited: 2026-02-05

    Demand paging

    Because virtual memory is far larger than physical memory, to maximise resource usage the operating system will swap memory out of RAM to secondary storage such as disk, updating the page table entry to reflect this. If that memory is accessed again, it must be pulled back off the secondary storage. It does this in the following way:

  • Descriptor table
    Last edited: 2026-02-05

    Descriptor table

    In segmentation the virtual addresses contain a selector and an offset. The selector relates to some segment descriptor, such as code, data, heap, etc. The descriptor table maps this selector to the segment address in physical memory .

  • Device driver
    Last edited: 2026-02-05

    Device driver

    A device driver is a piece of software installed in the OS to enable the operating system to communicate with an external device. The operating system offers a standard API for device drivers to conform to. Many devices comply with standard drivers, but if the device is specialised, they are offered by the device manufacturer.

  • Direct memory access (DMA)
    Last edited: 2026-02-05

    Direct memory access (DMA)

    Direct memory access, unlike Programmed IO (PIO), uses a DMA controller to move data to and from devices; the CPU still directly accesses the status and command registers. To utilise the DMA controller, the CPU needs to configure it, which is not a small operation, so for small data transfers it is not worth the overhead. Another restriction is that the data used by the DMA controller needs to be kept in physical memory while the transfer happens.

  • Distributed file system (DFS)
    Last edited: 2025-04-12

    Distributed file system (DFS)

    A file system is distributed if the files represented within the system do not necessarily reside on the machine hosting that file system. In this case, access to different files will require network calls.

  • Distributed shared memory (DSM)
    Last edited: 2025-04-13

    Distributed shared memory (DSM)

    Distributed shared memory is a Peer distributed application which enables machines to share their memory and access memory which does not exist locally on the machine. This extends the machine's effective memory size. The trade-off is that some memory accesses will be slower.

  • Earliest deadline first (EDF)
    Last edited: 2026-02-05

    Earliest deadline first (EDF)

    This is a policy in OS that chooses to do the task that has the earliest deadline first.

  • External fragmentation
    Last edited: 2026-02-05

    External fragmentation

    External fragmentation occurs when free memory is split into small, non-contiguous blocks, making it impossible to allocate a large contiguous block despite having enough total free space. This happens in systems that use variable-sized allocations (e.g., segmentation or heap memory management). For example, if a program repeatedly allocates and frees different-sized memory chunks, gaps form between allocated blocks, preventing large allocations.

  • Fragmentation
    Last edited: 2026-02-05

    Fragmentation

    Fragmentation occurs when memory is inefficiently utilised due to gaps or wasted space. It can take two forms:

  • Hardware protection levels
    Last edited: 2025-04-12

    Hardware protection levels

    Within the CPU there are different privilege levels which software can reside in. This normally uses the ‘ring model’ in which there are 4 levels:

  • Heap (OS)
    Last edited: 2026-01-28

    Heap

    The heap of a process is dynamic memory which is allocated at run time. It is used to store data which may vary dramatically in size depending on the input the application is run on, for example when reading data into memory. This memory stays allocated until it is explicitly deallocated. Therefore the heap can come with considerable overheads and require techniques like garbage collection or custom allocators to handle.

  • Inter-process communication (IPC)
    Last edited: 2026-02-05

    Inter-process communication (IPC)

    Inter-process communication is the method or API in which different processes can communicate with one another. There are four main methods to communicate messages between two processes .

  • Interface definition language (IDL)
    Last edited: 2025-04-12

    Interface definition language (IDL)

    An Interface Definition Language (IDL) is a formal language used to define the interface between software components, especially in Remote Procedure Calls (RPC) systems. It describes function names, parameters, return types, and data structures in a way that is independent of programming language.

  • Internal fragmentation
    Last edited: 2026-02-05

    Internal fragmentation

    Internal fragmentation occurs when allocated memory blocks are larger than the data they store, leaving unused space inside the allocated block. This happens because memory is allocated in fixed-size units (e.g., pages in paging system or predefined allocation sizes in heap memory). The unused portion inside an allocated block is wasted, leading to inefficiency. For example, a process is allocated a 4 KB memory page but only uses 3 KB, wasting 1 KB inside the page.

  • Inverted page tables (IPT)
    Last edited: 2026-02-05

    Inverted page tables

    Traditional page tables are indexed by virtual addresses , but on 64-bit architectures, the virtual address space can be many petabytes, while physical memory is usually much smaller (gigabytes or terabytes). This results in large page tables , consuming significant memory.
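
    An inverted page table instead keeps one entry per physical frame, so its size scales with RAM rather than with the virtual address space. A toy lookup (a real IPT hashes the (pid, vpn) pair instead of scanning):

```python
def translate_ipt(ipt, pid, vpn):
    """Inverted page table sketch: ipt[frame] holds the (pid, vpn) that
    owns that physical frame, or None if the frame is free."""
    for frame, owner in enumerate(ipt):
        if owner == (pid, vpn):
            return frame
    return None   # not resident: page fault

# 4 physical frames; frames 0, 2, 3 are owned, frame 1 is free.
ipt = [(1, 0x2A), None, (2, 0x07), (1, 0x2B)]
frame = translate_ipt(ipt, 1, 0x2B)
```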

  • Kernel
    Last edited: 2026-02-05

    Kernel

    The kernel is the core component of an Operating system (OS) that manages the system’s hardware and resources, such as the CPU , direct memory access , and I/O devices. It acts as an intermediary between software applications and the physical hardware of a computer. The kernel is responsible for critical tasks, including process management, memory management, device management, and system security.

  • Least-recently used (LRU)
    Last edited: 2026-02-05

    Least-recently used (LRU)

    This is a policy that chooses to operate first on the least recently used element, such as memory.
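
    A minimal sketch of the policy using Python's `OrderedDict` (the `LRUCache` class is illustrative, not a real kernel structure): on overflow, the entry touched least recently is evicted.

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)        # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")             # "a" becomes most recently used
cache.put("c", 3)          # evicts "b", the least recently used
```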

  • Memory allocator
    Last edited: 2026-02-05

    Memory allocator

    The memory allocator gets used when a process needs to map physical memory onto its virtual memory . There are two different kinds of allocators:

  • Memory Management Unit (MMU)
    Last edited: 2026-02-05

    Memory Management Unit (MMU)

    The Memory Management Unit (MMU) is a hardware component responsible for translating virtual addresses used by programs into physical addresses . It works alongside the operating system and uses page tables or segment tables to perform this translation. The MMU also enforces memory protection, preventing unauthorized access to restricted memory regions.

  • Memory page
    Last edited: 2026-02-05

    Memory page

    A memory page is a fixed-size block of memory, determined by the operating system and hardware architecture, typically at system startup. It serves as the fundamental unit of memory management in both physical memory and virtual memory . The operating system, in conjunction with the Memory Management Unit (MMU) , manages this mapping through the use of page tables to memory frames .

  • Memory segment
    Last edited: 2026-02-05

    Memory segment

    A memory segment is a variable-sized block of memory used in segmentation , an alternative to paging for memory management. Instead of dividing memory into fixed-size pages , segmentation divides memory into logically distinct sections, such as code, data, and stack segments. Each segment has a base address and a limit, defining its size and boundaries. The operating system and the Memory Management Unit (MMU) manage segment access using a segment table , which maps segment numbers to physical memory . Unlike paging , segmentation allows programs to organize memory based on logical structures rather than fixed-size blocks.
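
    A toy base-and-limit translation, with made-up segment values; the MMU performs the equivalent check in hardware and faults on an out-of-bounds offset:

```python
def translate_segment(segment_table, selector, offset):
    """Segmentation sketch: physical address = base + offset, faulting
    when the offset exceeds the segment's limit."""
    base, limit = segment_table[selector]
    if offset >= limit:
        raise MemoryError("segmentation fault: offset beyond limit")
    return base + offset

segment_table = {"code": (0x1000, 0x400), "data": (0x5000, 0x800)}
addr = translate_segment(segment_table, "data", 0x10)   # 0x5010
```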

  • Memory segmentation
    Last edited: 2026-02-05

    Memory segmentation

    Segmentation is a memory management technique that divides memory into logically distinct memory segments , such as code, data, and stack segments, each with a variable size. Instead of using fixed-size blocks like paging , segmentation allows programs to allocate memory dynamically based on their needs. The operating system manages memory through a descriptor table , which stores the base address and limit of each segment. Segmentation can reduce internal fragmentation but may lead to external fragmentation without additional management techniques.

  • Monitors
    Last edited: 2025-04-08

    Monitors

    Monitors are a high-level synchronization construct that encapsulate:

  • Multi-level page tables
    Last edited: 2026-02-05

    Multi-level page tables

    To reduce the memory overhead of a single large page table , modern systems use a hierarchical paging structure called a multi-level page table. Instead of a single, flat table mapping all virtual pages to physical frames, the multi-level approach breaks this into a series of smaller nested page tables.
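
    A sketch of a two-level lookup with illustrative bit widths (12-bit page offset, 10-bit indices): second-level tables for unmapped regions simply do not exist, which is where the memory saving comes from.

```python
def translate_2level(top, vaddr, page_bits=12, idx_bits=10):
    """Two-level page table sketch: split the virtual address into a
    top-level index, a second-level index, and a page offset."""
    offset = vaddr & ((1 << page_bits) - 1)
    l2_idx = (vaddr >> page_bits) & ((1 << idx_bits) - 1)
    l1_idx = vaddr >> (page_bits + idx_bits)
    second = top.get(l1_idx)
    if second is None or l2_idx not in second:
        raise MemoryError("page fault")
    frame = second[l2_idx]
    return (frame << page_bits) | offset

# Only one second-level table exists, under top-level index 0.
top = {0: {3: 0x2A}}                    # virtual page 3 -> frame 0x2A
phys = translate_2level(top, (3 << 12) | 0x123)
```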

  • Multi-processing
    Last edited: 2026-02-05

    Multi-processing

    This method involves breaking a complicated task over multiple processes . Each process gets its own PCB and its own virtual memory .

  • Multi-threading
    Last edited: 2026-02-05

    Multi-threading

    This is the technique of breaking a complicated task up over multiple threads . They will still share the same virtual memory and be tied to the same process but can be executed in parallel if scheduled on different kernel threads.

  • Mutex
    Last edited: 2026-02-05

    Mutex

    A mutex is a lock on some shared operation between threads . For example accessing shared memory. To do the operation you must obtain the mutex (if some other thread has the mutex you enter a wait state). The mutex is just a data structure consisting of at least:

  • Network file system (NFS)
    Last edited: 2026-02-05

    Network file system (NFS)

    This is a distributed file system developed by Sun. It uses Remote Procedure Calls (RPC) to communicate between server and client. When the client opens a request to a file, it creates a virtual file descriptor which contains details about the server and file. This is used by the client to read/write to the server. If the server dies, operations on that descriptor return an error, so the client knows there was an issue with the request.

  • Page table
    Last edited: 2026-02-05

    Page table

    A page table maps addresses in the virtual address space of a process , which is indexed by a virtual page number and an offset within that page. The virtual page number maps to a physical frame number, which, combined with the offset, identifies a location in physical memory . The simplest implementation is a flat page table: it contains one entry per virtual page number, which serves as the index into the table. Each page table entry holds the physical frame number along with some management bits that inform the operating system whether the mapping is valid and what permissions the process has for this memory. The translation then combines the physical frame number with the offset (physical address = frame number × page size + offset) to produce the physical address . Other page table types exist, such as Multi-level page tables or Inverted page tables (IPT) .
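
    The flat scheme can be sketched as follows (the entry layout here is invented for illustration; real entries pack the valid bit, permissions, and PFN into one word):

```python
PAGE_SIZE = 4096

def translate(page_table, vaddr):
    """Flat page table sketch: index by virtual page number, check the
    valid bit, then combine the frame number with the page offset."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    entry = page_table[vpn]
    if not entry["valid"]:
        raise MemoryError("page fault")
    return entry["pfn"] * PAGE_SIZE + offset

page_table = [
    {"valid": True,  "pfn": 7},   # virtual page 0 -> frame 7
    {"valid": False, "pfn": 0},   # virtual page 1 not resident
]
phys = translate(page_table, 0x0123)   # VPN 0, offset 0x123
```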

  • Page table entry
    Last edited: 2026-02-05

    Page table entry

    A page table entry is indexed by the virtual page number and contains the physical frame number which is how the mapping between the two is carried out. However, the entry also contains some other management fields such as:

  • Paging system
    Last edited: 2026-02-05

    Paging system

    Paging is a memory management scheme that divides both physical memory and virtual memory into fixed-size blocks called memory pages (virtual memory ) and memory frames (physical memory ). The operating system maintains a page table to map virtual pages to physical frames, allowing non-contiguous memory allocation and reducing fragmentation . Paging enables efficient memory use and simplifies process isolation, but it requires hardware support in the form of a Memory Management Unit (MMU) to handle address translation.

  • PCI Express (PCIe)
    Last edited: 2025-04-09

    PCI Express (PCIe)

    A more modern standard than Peripheral component interconnect (PCI) .

  • Peer distributed application
    Last edited: 2025-04-13

    Peer distributed application

    A peer distributed application is a distributed algorithm where nodes in the network act not only as clients (receivers of data) but also as servers (or producers). This is distinguished from a peer-to-peer model, as there may still be some privileged nodes in the network that handle more of the management workload.

  • Peripheral Component Interconnect (PCI)
    Last edited: 2025-04-09

    Peripheral Component Interconnect (PCI)

    Peripheral Component Interconnect (PCI) is a standard for connecting peripheral devices to a computer’s CPU . It defines a bus that allows hardware components like network cards, sound cards, modems, and storage controllers to communicate with the CPU and memory. PCI supports plug-and-play configuration and allows multiple devices to share the same bus.

  • Physical Frame Number (PFN)
    Last edited: 2026-02-05

    Physical Frame Number (PFN)

    When using paging the operating system breaks up physical memory into frames . The physical frame number is an index in the operating system abstraction of RAM which acts as one contiguous block. This, combined with an offset, can get mapped to a location on RAM by the memory controller .

  • Physical memory
    Last edited: 2026-02-05

    Physical memory

    Physical memory usually refers to the actual RAM installed in a computer. However, the term can also have more specific meanings depending on the context. A physical address is a reference to a location in this RAM , but in some cases, it may also refer to an abstraction used by the operating system , which treats memory as a contiguous block, even if the physical layout is non-contiguous. The mapping between virtual addresses and physical addresses is handled by the Memory Management Unit (MMU) and memory controller , which ensure that memory is accessed correctly and efficiently.

  • Portable operating system interface (POSIX)
    Last edited: 2026-02-05

    Portable operating system interface (POSIX)

    Portable Operating System Interface (POSIX) is a set of standardised APIs and interfaces that define how software should interact with an operating system . POSIX ensures compatibility and portability of applications across different UNIX-like operating systems by standardising system calls, command-line interfaces, and utility functions. It is widely used to maintain cross-platform compatibility in software development.

  • POSIX threads (PThreads)
    Last edited: 2026-02-05

    POSIX threads (PThreads)

    This is the POSIX interface for threads , mutex , and conditional variables .

  • Process
    Last edited: 2026-02-05

    Process

    A process is an instance of an executing program. It has some state in memory such as:

  • Process control block (PCB)
    Last edited: 2026-02-05

    Process control block (PCB)

    A Process control block is a data structure that holds the state for a process . This includes but is not limited to:

  • Process Identification (PID)
    Last edited: 2026-02-05

    Process Identification (PID)

    This is an ID associated with an executing process.

  • Process modes
    Last edited: 2026-02-05

    Process modes

    Processes normally have at least 2 modes.

  • Program counter (PC)
    Last edited: 2026-02-05

    Program counter (PC)

    The program counter points to the next instruction to be executed by a process .

  • Programmed IO (PIO)
    Last edited: 2025-04-09

    Programmed IO (PIO)

    This is a method of IO access that the OS can use without additional hardware support. The CPU communicates directly with the device by writing to its command or data registers and inspects the device’s state through its status register.

  • Pseudo devices
    Last edited: 2026-02-05

    Pseudo devices

    Pseudo devices are not backed by real hardware but offer virtual interfaces to the OS . They provide functionality that is useful for testing, debugging and interacting with the OS . Examples of some devices in Linux are:

  • Race condition
    Last edited: 2025-04-08

    Race condition

    A race condition occurs when two or more threads or processes access shared data concurrently and the final result depends on the timing or order of their execution. Because the outcome varies based on how the threads are scheduled, it leads to unpredictable or incorrect behavior.
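
    A hedged Python sketch of the classic lost-update race: an unsynchronised read-modify-write versus the same loop serialised by a lock. Whether the unsafe version actually loses updates depends on thread scheduling, which is exactly what makes it a race condition.

```python
import threading

# The read-modify-write in unsafe() is NOT atomic: two threads can both
# read the same old value and one increment is lost.
unsafe_total = 0
safe_total = 0
lock = threading.Lock()

def unsafe():
    global unsafe_total
    for _ in range(100_000):
        unsafe_total += 1          # racy read-modify-write

def safe():
    global safe_total
    for _ in range(100_000):
        with lock:                 # serialises the read-modify-write
            safe_total += 1

threads = [threading.Thread(target=f) for f in (unsafe, unsafe, safe, safe)]
for t in threads: t.start()
for t in threads: t.join()
# safe_total is always 200000; unsafe_total may come up short on some runs.
```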

  • Random Access Memory (RAM)
    Last edited: 2026-02-05

    Random Access Memory (RAM)

    Random access memory is used to temporarily store data relevant to an executing program. Unlike disk storage, RAM is volatile and will lose its state once the machine is turned off. However, RAM is fast to access, which makes it well suited to data that programs need frequently. Processes access RAM through virtual memory which is mapped to physical memory through page tables .

  • Reader-writer locks
    Last edited: 2025-04-08

    Reader-writer locks

    A reader-writer lock is a synchronization construct that allows multiple threads to read shared data concurrently, while ensuring exclusive access for a thread that needs to write. It supports two types of locking:

  • Remote direct memory access (RDMA)
    Last edited: 2025-04-13

    Remote direct memory access (RDMA)

    Remote direct memory access is a data-centre technology which extends direct memory access (DMA) to work between servers.

  • Remote Procedure Calls (RPC)
    Last edited: 2026-02-05

    Remote Procedure Calls (RPC)

    A Remote Procedure Call (RPC) is a protocol that allows a program to execute a procedure (function) on a different address space, typically on another computer over a network, as if it were a local function call. This offers several key benefits:

  • Semaphores
    Last edited: 2026-02-05

    Semaphores

    Semaphores are a synchronization construct used to control access to shared resources. A semaphore is initialized with an integer value (often called the capacity). It maintains a counter that represents the number of available “permits.” When a thread attempts to acquire the semaphore:

  • Sequential consistency
    Last edited: 2025-04-13

    Sequential consistency

    This is a consistency model that has the following rules:

  • Slab allocator
    Last edited: 2026-02-05

    Slab allocator

    The slab allocator is a memory allocator used in the Linux kernel to efficiently allocate and reuse small, fixed-size objects (e.g., process descriptors, file system inodes).
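
    A toy version of the idea (the `Slab` class is illustrative): carve a slab into fixed-size slots and serve them from a free list, so allocation and freeing are O(1) and freed objects are reused immediately.

```python
class Slab:
    """Slab allocator sketch: pre-carve a slab into fixed-size object
    slots and hand them out from a free list."""
    def __init__(self, object_size, slab_size=4096):
        self.object_size = object_size
        self.free = list(range(0, slab_size, object_size))  # slot offsets

    def alloc(self):
        return self.free.pop() if self.free else None

    def free_obj(self, offset):
        self.free.append(offset)    # slot is reused by the next alloc

slab = Slab(object_size=256)        # 16 slots in a 4 KiB slab
a = slab.alloc()
slab.free_obj(a)
b = slab.alloc()                    # reuses the slot just freed
```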

  • Spinlocks
    Last edited: 2025-04-08

    Spinlocks

    Spinlocks are a synchronization construct similar to mutexes . When a process tries to acquire the lock while another process holds it, it waits at the lock operation. However, in comparison to a mutex , the spinlock does not relinquish the CPU; it continues to actively poll for the lock to become available.
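
    A sketch of the behaviour in Python (a real spinlock is built on an atomic test-and-set instruction; here a non-blocking `Lock.acquire` stands in for that atomic step, and the `SpinLock` class is invented for illustration):

```python
import threading

class SpinLock:
    """Spinlock sketch: busy-wait ('spin') until the lock is free
    instead of sleeping."""
    def __init__(self):
        self._flag = threading.Lock()

    def acquire(self):
        while not self._flag.acquire(blocking=False):
            pass                    # keep polling; the CPU is not yielded

    def release(self):
        self._flag.release()

lock = SpinLock()
counter = 0

def bump():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```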

  • Spurious wakeups
    Last edited: 2026-02-05

    Spurious wakeups

    When waking up threads using signal/broadcast on a conditional variable, if you still hold the mutex then the woken threads are immediately moved to waiting on the mutex , as it is still held. This is a spurious wakeup: we pay the cost of context switching to the thread just for it to hand control straight back to the CPU .

  • Stack (OS)
    Last edited: 2026-02-05

    Stack (OS)

    The stack of an application is a LIFO (last-in, first-out) stack of stack frames, which contain a function’s parameters, local variables, and return address. Frames are pushed when a function is called and popped once the function completes. The stack acts as the control flow for a process , determining where to return to once a function has completed. The stack has a fixed size when a process starts, and going beyond that size causes a stack overflow.

  • Stack Pointer (SP)
    Last edited: 2026-02-05

    Stack Pointer (SP)

    This is a CPU register that points to the top of the stack for a process .

  • Strict consistency
    Last edited: 2025-04-13

    Strict consistency

    This is a consistency model that guarantees all updates are visible everywhere immediately. This is not a practical system and is only a theoretical model.

  • Synchronization
    Last edited: 2026-02-05

    Synchronization

    Synchronization is the process by which two or more independent executors coordinate to manage shared resources or to order their work.

  • Thread
    Last edited: 2026-02-05

    Thread

    A thread is the smallest unit of execution within a process , representing a single sequence of instructions that the CPU can execute. Each thread within a process shares the process’s resources, such as memory and file handles, but operates with its own set of CPU registers , stack , and program counter . In the Process control block (PCB) , the state of each thread is tracked, including its individual register values, program counter, and thread-specific data, while sharing the broader process-level information like memory space and I/O resources.

  • Translation Lookaside Buffer (TLB)
    Last edited: 2026-02-05

    Translation Lookaside Buffer (TLB)

    The Translation Lookaside Buffer (TLB) is a small, high-speed cache inside the MMU that stores recently used virtual-to-physical address mappings. Since accessing these mappings in memory is slow, the TLB helps speed up address translation by reducing the need to fetch mapping table entries from RAM . A TLB hit means the translation is found instantly, while a TLB miss requires fetching the mapping from the table in main memory.

  • Trap instruction
    Last edited: 2026-02-05

    Trap instruction

    A trap is raised when an application in user mode tries to access hardware without using a system call . Control passes to the OS , which judges whether the access was illegitimate or harmful. While this is happening, the process is stopped from doing anything else.

  • Virtual machine monitor (VMM)
    Last edited: 2025-04-12

    Virtual machine monitor (VMM)

    A virtual machine monitor enables virtualization by providing an environment:

  • Virtual page number (VPN)
    Last edited: 2026-02-05

    Virtual page number (VPN)

    When using paging the virtual memory is broken down into pages , each of which is indexed from 0 to the maximum memory that the process has. The virtual page number is the index of a particular page in this address space. This, along with an offset into the page, determines the exact location of a byte of data.

  • Virtualization
    Last edited: 2025-04-12

    Virtualization

    A virtual machine is an efficient, isolated duplicate of the real machine. This is enabled by a Virtual machine monitor (VMM) .

  • Weak consistency
    Last edited: 2025-04-13

    Weak consistency

    Weak consistency is a consistency model that introduces a new operation: synchronise. The model only guarantees that all operations performed before a synchronisation point will be visible to processes that call synchronise afterwards. There are variations of this: