Allocation of Page Frames - PowerPoint PPT Presentation
Provided by: quynh6
Allocation of Page Frames
  • How many page frames should be allocated to a
    process?
  • Should allocate according to the size of the
    working set of the process
  • Remember that the working set is the set of pages
    that the process has referenced in the last N
    seconds or last N references (not easily known at
    startup)
  • Allocating a process fewer pages than its working
    set can quickly lead to page faults
  • One way to prevent many page faults is to not
    schedule a process unless it can be allocated
    enough page frames for its working set
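The last-N-references definition above can be sketched in a few lines; the function name and the sample reference string are illustrative, not from the slides:

```python
def working_set(refs, n):
    """Working set: the set of pages referenced in the last n references."""
    return set(refs[-n:])

# Sample page-reference string (assumed for illustration).
refs = [1, 2, 3, 2, 1, 4, 2, 5]
ws = working_set(refs, 4)          # pages touched in the last 4 references
# A process needs at least len(ws) frames to hold its working set.
```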

Allocation of Page Frames
  • Local vs. global allocation
  • Local: when a process needs more pages, evict
    its own pages (not another process's)
  • Can lead to wasted memory if a process's working
    set decreases in size
  • When a process thrashes, it will not cause other
    processes to thrash
  • Global: on a page fault, evict a page from any
    process's page set
  • Better for dynamic working sets, which may grow
    or shrink
  • Domino-style thrashing may occur: one process
    page faults, evicting another process's pages;
    when that process runs, it evicts yet another
    process's pages, etc.
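The local/global distinction boils down to which frames are candidates for eviction. A minimal sketch (the tuple layout and LRU tie-break are assumptions for illustration):

```python
def choose_victim(frames, faulting_pid, policy):
    """Pick an eviction victim under local vs. global allocation.
    frames: list of (pid, page, last_used) tuples."""
    if policy == "local":
        # Local: only the faulting process's own pages may be evicted.
        candidates = [f for f in frames if f[0] == faulting_pid]
    else:
        # Global: any process's page is fair game.
        candidates = frames
    return min(candidates, key=lambda f: f[2])   # LRU among candidates
```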

Allocation Policies
  • Periodically determine the number of running
    processes and allocate each an equal share
    (hybrid of local and global)
  • Alternatively, allocate a number of pages
    proportional to the size of the process
  • Waste and thrashing may still occur under either
    policy
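The two policies can be sketched as follows (integer division here simply discards remainder frames; a real allocator would distribute them):

```python
def equal_share(frames, pids):
    """Equal-share policy: every running process gets the same allocation."""
    return {pid: frames // len(pids) for pid in pids}

def proportional_share(frames, sizes):
    """Proportional policy: frames in proportion to each process's size."""
    total = sum(sizes.values())
    return {pid: frames * s // total for pid, s in sizes.items()}
```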

Allocation of Page Frames
  • Page fault frequency: a measure of the page
    fault rate of a process
  • Too low: take pages from that process
  • Too high: allocate additional page frames to
    the process
  • If too many processes have a high page fault
    frequency, swap out one or more of them (which
    process to swap out is determined by the
    scheduling policy)
  • May also use process priority to affect page
    allocation (thereby affecting page replacement)
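The page-fault-frequency control loop above can be sketched like this; the threshold values are assumptions, not from the slides (real systems tune them):

```python
LOW, HIGH = 0.01, 0.10   # assumed fault-rate thresholds (faults per reference)

def adjust_frames(fault_rate, frames):
    """PFF control: take a frame from a process faulting too rarely,
    grant a frame to one faulting too often, else leave it alone."""
    if fault_rate < LOW and frames > 1:
        return frames - 1
    if fault_rate > HIGH:
        return frames + 1
    return frames
```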

Relationship of Allocation to Replacement Policy
  • FIFO and LRU can be run either globally or
    locally on a process's pages
  • Working set makes sense only as a local policy,
    or as a way to initialize the number of pages
    given to a process
Policies to Prevent Thrashing
  • Adopt a local allocation policy
  • Do not schedule a process unless its working set
    of pages is in memory (implies prepaging)
  • Use Page Fault Frequency approach

Page Size
  • Want to set the page size to reduce internal
    fragmentation
  • Smaller page size leads to more pages, which
    leads to a larger page table
  • What is the optimal page size?
  • p = page size
  • s = average process size
  • e = size of each page table entry
  • s/p = average number of pages per process
  • (s/p)e = space taken up by the page table of the
    average process
  • p/2 = average wasted memory in the last page of
    a process due to internal fragmentation
  • overhead = (s/p)e + p/2
  • Find the optimal page size by taking the
    derivative with respect to p and setting it to 0
  • p = sqrt(2se)
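The derivation can be checked numerically: d(overhead)/dp = -se/p² + 1/2 = 0 gives p = √(2se). The process size and PTE size below are assumed example values:

```python
import math

def overhead(p, s, e):
    """Per-process overhead: page-table space (s/p)*e plus internal
    fragmentation p/2 in the last page."""
    return (s / p) * e + p / 2

def optimal_page_size(s, e):
    """Minimizer of overhead(): p = sqrt(2*s*e)."""
    return math.sqrt(2 * s * e)

s, e = 1 << 20, 8                  # assumed: 1 MiB average process, 8-byte PTEs
p_opt = optimal_page_size(s, e)    # 4096.0 bytes for these values
```

For these numbers the optimum is 4 KiB, and the overhead curve is indeed higher at both neighboring power-of-two sizes.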

Page Fault Handling
  • If the present/absent bit is 0, the page is
    either not addressable by the process (protected
    addresses) or not in memory
  • Consult backing store map to distinguish between
    the cases
  • Absent from map means illegal address
  • Otherwise, backing store map indicates where to
    find the page on disk
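The fault-classification step above can be sketched as a lookup; the page size and the vpn-to-disk-block map are assumptions for illustration:

```python
PAGE_SIZE = 4096

def classify_fault(vaddr, backing_map):
    """Distinguish an illegal access from a legal page that is on disk.
    backing_map: virtual page number -> disk block (assumed layout)."""
    vpn = vaddr // PAGE_SIZE
    if vpn not in backing_map:
        return ("illegal address", None)       # absent from the map
    return ("fetch from disk", backing_map[vpn])

backing_map = {0: 1000, 1: 1001, 7: 2042}
```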

Backing Store
  • Swap area on disk where a page that is evicted
    is stored
  • How should space in the swap area be allocated
    to a process?
  • Option 1: when the process is started, reserve a
    chunk in the swap area equal to the process size
  • Either initialize the chunk with a copy of the
    entire process, or load the entire process into
    memory and let it be paged out
  • Disadvantage: the size of the process may grow.
    Better to have separate swap areas for text,
    data, and stack
  • Option 2: do not reserve space in the swap area
    in advance
  • Allocate disk space for each page when it is
    swapped out (de-allocate when swapped in)
  • Disadvantage: a disk address is needed for each
    page (instead of per process)

Backing Store Map: 2 Implementations
  • Store disk address in PTE
  • Problems
  • Can be wasteful since the page table is stored
    in memory
  • PTE structure is influenced by hardware, so need
    hardware cooperation to store disk addresses
  • Disk addresses are big (device + block number)
    and OS-specific
  • Would need a disk address for every page not in
    memory, so it is more difficult to compress the
    page table
  • Use separate data structure
  • Associates addressable regions (text, stack,
    data) of virtual address space with starting
    block number on disk
  • Each region of virtual address space is stored
    contiguously on disk
  • Backing store map data structure stored in memory
    has only one entry per region
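Because each region is contiguous on disk, one entry per region suffices and the disk block of any page is computed by offset. A sketch (region names, sizes, and block numbers are invented for illustration):

```python
# Hypothetical region map: (name, first virtual page, page count, first disk block).
REGIONS = [
    ("text",  0,   64, 1000),
    ("data",  64,  32, 2000),
    ("stack", 992, 32, 3000),
]

def disk_block_for(vpage):
    """Translate a virtual page number to its disk block via the region map."""
    for name, start, count, block0 in REGIONS:
        if start <= vpage < start + count:
            return block0 + (vpage - start)   # contiguous-on-disk offset
    raise ValueError("illegal address: page belongs to no region")
```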

OS Involvement with Paging
  • Process creation
  • OS determines program and data size
  • Allocates and initializes page table
  • Allocates swap area on disk to store page table
    when process is swapped out
  • Record page table and swap area in process table
  • Possibly preload pages
  • Process scheduled for execution
  • Reset MMU for process
  • Flush TLB
  • Copy page table or start/end addresses to
    hardware registers
  • Process termination
  • Release page table, page frames, and disk swap
    area

OS Involvement with Paging
  • Page fault
  • Determine which virtual address caused the page
    fault
  • Compute which page is needed and locate it on
    disk
  • Possibly evict a page to make room in main memory
  • Back-up program counter to point to faulting
    instruction so that it can be executed again (but
    this time, the needed page is in memory)

Unix Paging Policy
  • Demand paging
  • Page replacement algorithm
  • Maintain a certain number of free page frames
    (within a min/max range)
  • Swap out pages when the number of free pages
    falls below the minimum
  • Some Unix variants use a two-handed clock
    algorithm
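In the two-handed clock, a front hand clears reference bits and a back hand, a fixed gap behind, frees any page whose bit is still clear (i.e., not re-referenced since the front hand passed). A simplified single-sweep sketch, with re-references simulated via a set:

```python
def two_handed_clock(frames, gap, rereferenced):
    """frames: list of {"page": id, "ref": bit}. The front hand clears
    reference bits; the back hand, `gap` positions behind, frees pages
    still unreferenced. `rereferenced` simulates pages touched mid-sweep."""
    freed = []
    n = len(frames)
    for front in range(n + gap):
        if front < n:
            frames[front]["ref"] = 0            # front hand clears
        back = front - gap
        if 0 <= back < n:
            if frames[back]["page"] in rereferenced:
                frames[back]["ref"] = 1         # touched after clearing
            if frames[back]["ref"] == 0:
                freed.append(frames[back]["page"])  # back hand frees
    return freed
```

The gap between the hands sets how long a page has to prove it is still in use; a real kernel also wraps around the frame list continuously rather than doing one sweep.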

Linux Paging Policy
  • Demand paging
  • Maintain a certain range of free page frames
  • Each process on a 32-bit machine is given 3 GB of
    virtual address space and 1 GB reserved for page
    tables and other kernel data
  • 3-level page table
  • Kernel is never paged out
  • Buddy system for memory partitioning

Windows Paging Policy
  • Demand paging without pre-paging
  • Maintain a certain number of free page frames
  • For 32-bit machine, each process has 4 GB of
    virtual address space
  • Backing store disk space is not assigned to a
    page until the page is paged out
  • Uses working sets (per process)
  • A working set consists of pages mapped into
    memory that can be accessed without a page fault
  • Has a min/max size range that changes over time
  • If a page fault occurs and the working set < min,
    add the faulting page
  • If a page fault occurs and the working set > max,
    evict a page from the working set and add the
    new page
  • If too many page faults occur, increase the size
    of the working set
  • When evicting pages,
  • Evict from large processes that have been idle
    for a long time before small active processes
  • Consider foreground process last
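The grow-or-replace behavior on a fault can be sketched as below; this ignores the min bound and the dynamic resizing of the range, and uses FIFO as a stand-in victim choice (both are simplifying assumptions):

```python
def on_page_fault(ws, page, ws_max):
    """Windows-style per-process working set (sketch): below max, a fault
    grows the set; at max, replace an existing page with the new one."""
    if len(ws) >= ws_max:
        ws.pop(0)        # evict some working-set page (FIFO stand-in)
    ws.append(page)
    return ws
```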