Virtual Memory - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Virtual Memory


1
Virtual Memory
  • Invented to solve the problem that a program or
    collection of ready programs is larger than
    physical memory
  • Before virtual memory, overlays were used
  • Programmer splits program into swap units called
    overlays
  • An overlay must not refer to memory addresses
    outside itself, except through a special call
    asking the OS to swap two overlays
  • Unfriendly programming model

2
Virtual Memory
  • Virtual memory: range of addressable locations
    independent of size of physical memory
  • Each process given a separate virtual address
    space
  • Similar to having separate memory partitions for
    each process, but virtual address space is not
    physically bounded
  • Instead, virtual address space is limited by
    memory address size
  • Dependent on architecture
  • 32-bit address was typical (2^32 ≈ 4 billion)
  • 64-bit is becoming more prevalent (2^64 ≈ 18
    quintillion)

3
Virtual Memory
  • Virtual address space often divided into a few
    large regions that are used differently
  • Entire virtual address space is not addressable
    by a process
  • Off-limit regions used by OS to store per-process
    data structures
  • Virtual memory is a form of indirect addressing

4
Virtual Memory Mapping
  • Memory management unit (MMU)
  • Hardware to map virtual address to physical
    address
  • Hardware essential for performance
  • MMU resides in the CPU package
  • MMU sends translated physical memory address onto
    memory bus
  • Steps in virtual address translation
  • CPU issues a program instruction
  • MMU translates virtual address to physical (main
    memory) address (not physical location on disk!)
  • Physical memory page frame is accessed
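
A minimal C sketch of the three steps above, assuming 4 KB pages; lookup_frame is a hypothetical stand-in for the page-table walk the MMU performs in hardware, not a real MMU interface.

    #include <stdint.h>

    #define PAGE_SHIFT 12                       /* 4 KB pages: low 12 bits are the offset */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

    /* Hypothetical helper standing in for the MMU's page-table lookup. */
    extern uint32_t lookup_frame(uint32_t virtual_page);

    uint32_t translate(uint32_t vaddr)
    {
        uint32_t page   = vaddr >> PAGE_SHIFT;  /* which virtual page              */
        uint32_t offset = vaddr & PAGE_MASK;    /* position within that page       */
        uint32_t frame  = lookup_frame(page);   /* MMU consults the page table     */
        return (frame << PAGE_SHIFT) | offset;  /* physical address for memory bus */
    }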

5
Paging
  • Physical memory is a sequence of page frames
  • Process virtual address space is sequence of
    pages
  • Pages and page frames are the same size
  • Size is a power of 2
  • Typically 4-64 KB (typical size increases as main
    memory size increases)
  • Will large or small page sizes be more wasteful?
    (see the sketch below)
  • Any page can be mapped to any page frame
  • OS maintains mapping automatically (with
    assistance from MMU hardware)
  • Mapping is transparent to user/program
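
A small worked example for the page-size question above; the 12,345-byte process image is an arbitrary illustration. The last page is only partly used, so larger pages waste more per process (internal fragmentation).

    #include <stdio.h>

    /* Pages needed for an image of the given size, and bytes wasted in the last page. */
    static void waste(unsigned page_size, unsigned image_bytes)
    {
        unsigned pages = (image_bytes + page_size - 1) / page_size;  /* round up */
        printf("%6u-byte pages: %u pages, %u bytes wasted\n",
               page_size, pages, pages * page_size - image_bytes);
    }

    int main(void)
    {
        waste(4096,  12345);   /*  4 KB pages: 4 pages,  4039 bytes wasted */
        waste(65536, 12345);   /* 64 KB pages: 1 page,  53191 bytes wasted */
        return 0;
    }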

6
Page Tables
  • MMU translates virtual address to physical
    address via page table
  • Size of page table = number of virtual pages
  • One page table per process
  • Located in the kernel's physical memory or in the
    process's virtual memory
  • Table start and length in 2 hidden registers that
    are part of process state information
  • Example of using a page table (see the sketch
    below)
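
A minimal sketch of the lookup just described, assuming a one-level page table; the two hidden registers are modeled as ordinary variables, and all names are illustrative.

    #include <stdint.h>
    #include <stddef.h>

    typedef struct { unsigned present : 1; unsigned frame : 20; } pte_t;

    static pte_t  *pt_base;    /* stands in for the page-table start register  */
    static size_t  pt_length;  /* stands in for the page-table length register */

    /* Returns the page frame for a virtual page, or -1 if the page is out of
       range or not present (in which case the OS would take a fault). */
    int page_table_lookup(uint32_t virtual_page)
    {
        if (virtual_page >= pt_length)      return -1;  /* outside the table */
        if (!pt_base[virtual_page].present) return -1;  /* page fault        */
        return (int)pt_base[virtual_page].frame;
    }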

7
Page Table Entry
  • Present/Absent bit: is page loaded in main
    memory?
  • Page Frame: physical page frame where virtual
    page is loaded
  • Protection: read, write, executable
  • If access is not allowed, produce segmentation
    fault
  • Dirty bit: has page been modified?
  • If yes, then need to write back to disk when page
    is swapped out of physical memory
  • Referenced: set when page is accessed
  • Helps OS decide whether or not to swap page out
  • Disable caching: if set, do not use cached copy
  • Cached copy may be invalid if page is mapped to
    I/O device and will change often (memory-mapped
    I/O)
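
One possible way to pack the fields above into a 32-bit PTE, sketched as a C bit-field struct; the exact layout and field widths are architecture-specific, so this arrangement is only illustrative.

    struct pte {
        unsigned present    : 1;   /* 1 = page is loaded in main memory             */
        unsigned readable   : 1;   /* protection: read allowed                      */
        unsigned writable   : 1;   /* protection: write allowed                     */
        unsigned executable : 1;   /* protection: execute allowed                   */
        unsigned dirty      : 1;   /* modified; write back to disk when swapped out */
        unsigned referenced : 1;   /* set on access; used by the replacement policy */
        unsigned no_cache   : 1;   /* do not use cached copy (memory-mapped I/O)    */
        unsigned page_frame : 20;  /* physical page frame holding this virtual page */
        /* remaining bits unused in this sketch */
    };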

8
Page Tables Design Issues
  • Average waste due to internal fragmentation is ½
    page per process, so want small page size
  • But, small page size means many pages and a large
    page table; page tables (for each process) will
    take up a lot of memory space
  • Page sizes have grown as overall memory size has
    grown; 4 to 64 KB is common
  • Often need to make 2 or more page table
    references per instruction, so fast access is
    necessary!
  • Using registers is faster than using main memory

9
Page Tables Design Issues
  • 2 options
  • Store entire page table for process in registers
  • No memory reference to access page table
  • But, context switch between processes would be
    expensive. Why?
  • Store page table in memory and one register
    points to start of page table for running process
  • On context switch, load only 1 register value
  • But, need one or more memory references per
    instruction. Why?
  • Neither method is used by itself in practice.
    Instead, hybrid methods are used.

10
Managing Page Tables
  • There is one page table entry (PTE) per possible
    page in the address space, so huge number of PTEs
  • For example, a 32-bit address space with 4 KB
    pages and 32-bit PTEs needs 2^20 PTEs, i.e. 4 MB
    of page table per process (see the arithmetic
    below)
  • Solutions to the page table size and fast access
    problems
  • Make all except part of the OS's page tables
    pageable (helps a little)
  • Multi-level page tables
  • Hashed page tables
  • Inverted page tables
  • TLB: translation lookaside buffer
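
The arithmetic behind the 32-bit example above, written out as a small program:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long address_space = 1ULL << 32;  /* 4 GB       */
        unsigned long long page_size     = 1ULL << 12;  /* 4 KB       */
        unsigned long long pte_size      = 4;           /* 32-bit PTE */

        unsigned long long num_ptes   = address_space / page_size;  /* 1,048,576 */
        unsigned long long table_size = num_ptes * pte_size;        /* 4 MB      */

        printf("PTEs per process: %llu\n", num_ptes);
        printf("Page table size : %llu bytes (%llu MB)\n",
               table_size, table_size >> 20);
        return 0;
    }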

11
Multi-level Page Tables
  • Divide virtual address into 3 portions: high
    page bits, low page bits, offset bits
  • High page bits
  • Index into a 1st-level page table
  • Each entry points to 2nd-level page table which
    contains conventional page table entries
  • Need a complete 1st-level page table for entire
    virtual address space
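
A sketch of a two-level lookup, assuming the 32-bit address is split into 10 high page bits, 10 low page bits, and 12 offset bits (a common split, chosen here for illustration).

    #include <stdint.h>
    #include <stddef.h>

    #define L1_BITS     10
    #define L2_BITS     10
    #define OFFSET_BITS 12

    typedef struct { unsigned present : 1; unsigned frame : 20; } l2_pte_t;

    static l2_pte_t *l1_table[1 << L1_BITS];  /* 1st level: pointers to 2nd-level tables */

    /* Returns 0 and fills *paddr on success, -1 on a fault. */
    int two_level_lookup(uint32_t vaddr, uint32_t *paddr)
    {
        uint32_t hi     = vaddr >> (L2_BITS + OFFSET_BITS);               /* high page bits */
        uint32_t lo     = (vaddr >> OFFSET_BITS) & ((1u << L2_BITS) - 1); /* low page bits  */
        uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);

        l2_pte_t *l2 = l1_table[hi];
        if (l2 == NULL)      return -1;  /* no 2nd-level table allocated here */
        if (!l2[lo].present) return -1;  /* page not in memory: page fault    */

        *paddr = ((uint32_t)l2[lo].frame << OFFSET_BITS) | offset;
        return 0;
    }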

12
Multi-level Page Tables
  • Avoid keeping all page tables in memory in 2 ways
  • If 1st-level page entry indicates an invalid
    address, then do not even allocate corresponding
    2nd-level page tables
  • Very common for a process to have widely separated
    but dense text, stack, and data areas; end up
    allocating just three 2nd-level page tables, one
    for each segment. The 3 areas do not have to be
    contiguous.
  • Multi-level page tables can be extended to N
    levels as address spaces get larger
  • VAX and Pentium support 2-level page tables
  • SPARC supports 3 levels
  • Motorola 680X0 supports up to 4 levels

13
Translation Lookaside Buffer (TLB)
  • Each virtual address reference requires 2
    physical memory references
  • To page table to get mapping
  • To page frame itself
  • Eliminate 1 by caching recent mappings in
    translation lookaside buffer (TLB)
  • Take advantage of fact that most programs
    reference small number of pages often
  • TLB is part of MMU
  • Each TLB entry is a page table entry (has same
    elements as PTE)
  • TLB is special hardware (called associative
    memory) that searches all PTEs in the TLB at once,
    in parallel
  • Approximately 64-256 PTEs in TLB
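
A software model of the TLB lookup; a real TLB compares all entries simultaneously in associative hardware, so the loop below only models that behavior, and the entry count and field widths are illustrative.

    #include <stdint.h>

    #define TLB_ENTRIES 64

    struct tlb_entry {
        unsigned valid        : 1;
        unsigned virtual_page : 20;
        unsigned page_frame   : 20;
        /* protection, dirty, and referenced bits omitted for brevity */
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Returns the page frame on a hit, or -1 on a miss (the MMU, or the OS on
       a software-managed TLB, would then consult the page table). */
    int tlb_lookup(uint32_t virtual_page)
    {
        for (int i = 0; i < TLB_ENTRIES; i++)      /* done in parallel in hardware */
            if (tlb[i].valid && tlb[i].virtual_page == virtual_page)
                return (int)tlb[i].page_frame;
        return -1;
    }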

14
Translation Lookaside Buffer (TLB)
  • Translation and page fault handling happen in the
    middle of an instruction
  • Given virtual address, MMU compares it to all TLB
    entries in parallel
  • If present, no need to access memory to get page
    table entry
  • Otherwise, load entry from page table
  • Hardware must know how to restart any instruction
    that may cause a page fault
  • RISC machines (SPARC, MIPS, Alpha) do page
    management in software. If virtual address is
    not in the TLB, TLB fault occurs and is passed to
    OS
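
A sketch of the software-managed-TLB path just described; walk_page_table, tlb_write_entry, and handle_page_fault are hypothetical stand-ins for OS routines, not a real kernel API.

    #include <stdint.h>

    extern int  walk_page_table(uint32_t virtual_page, uint32_t *frame); /* 0 = mapping present */
    extern void tlb_write_entry(uint32_t virtual_page, uint32_t frame);  /* refill one TLB slot */
    extern void handle_page_fault(uint32_t virtual_page);                /* bring the page in   */

    void tlb_miss_handler(uint32_t faulting_vaddr)
    {
        uint32_t page = faulting_vaddr >> 12;   /* assuming 4 KB pages */
        uint32_t frame;

        if (walk_page_table(page, &frame) == 0)
            tlb_write_entry(page, frame);       /* mapping exists: just refill the TLB */
        else
            handle_page_fault(page);            /* page not in memory: full page fault */
        /* on return, the hardware restarts the faulting instruction */
    }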

15
Translation Lookaside Buffer (TLB)
  • 2 types of TLBs
  • 1) Those whose entries include an address space ID
    field
  • Address space ID associates TLB entry with
    process
  • No need to invalidate entries on context switch
  • 2) Those whose entries do not include address
    space ID
  • TLB must be flushed on context switch
  • When process is loaded, TLB is loaded from page
    table in memory

16
Inverted Page Table
  • Inverted page table has as many entries as
    physical page frames. Will its maximum size be
    smaller than that of a standard page table?
  • One single global inverted page table, rather
    than one page table per process
  • Each entry contains process ID and virtual page
    number
  • Given a virtual address, must search the inverted
    page table; cannot use the virtual address as an
    index as with a standard page table
  • Solutions
  • Maintain inverted page table in associative
    memory hardware whose entries are searched in
    parallel
  • Use a hash table to hash the virtual page address.
    How? (see the sketch below)
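
One answer to the "how?" above, sketched with a hash anchor table that has one bucket per physical frame; the hash function, sizes, and names are illustrative.

    #include <stdint.h>

    #define NUM_FRAMES 4096               /* one inverted-table entry per page frame */

    struct ipt_entry {
        int      valid;
        uint32_t pid;                     /* owning process                       */
        uint32_t virtual_page;
        int      next;                    /* next index in this hash chain, or -1 */
    };

    static struct ipt_entry ipt[NUM_FRAMES];
    static int hash_anchor[NUM_FRAMES];   /* chain heads; assume initialized to -1 */

    static unsigned hash_vp(uint32_t pid, uint32_t virtual_page)
    {
        return (pid * 31u + virtual_page) % NUM_FRAMES;
    }

    /* Returns the page frame number, or -1 if (pid, virtual_page) is not resident. */
    int inverted_lookup(uint32_t pid, uint32_t virtual_page)
    {
        for (int i = hash_anchor[hash_vp(pid, virtual_page)]; i != -1; i = ipt[i].next)
            if (ipt[i].valid && ipt[i].pid == pid && ipt[i].virtual_page == virtual_page)
                return i;                 /* index into the table == frame number */
        return -1;
    }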

17
Two Different Architectures
  • Typical page table architecture
  • Information is kept in a combination of page
    tables and a map of allocated virtual address
    space regions to backing store; the backing store
    is swap space on disk where pages are swapped in
    and out
  • TLB is a cache
  • PTE present/absent bit must be reset when page is
    replaced (page frame field is considered garbage)
  • TLB-only architecture (RISC machines)
  • TLB is not a cache; it is the definitive record of
    where pages are in memory
  • Page tables indicate disk location
  • Page table remains unchanged when page is
    replaced
  • Page table should be called an
    allocated-region-to-backing-store data structure

18
Summary of Options
  • Page table: the standard
  • Solutions to minimizing size of page tables
  • Multi-level page table
  • Inverted page table
  • Hashed page table
  • Solution to faster access
  • TLB reduces number of memory references to get
    page table entry
  • Inverted page table: HP Spectrum, UltraSPARC,
    PowerPC
  • Only TLB in hardware, leave all else to OS:
    MIPS, Alpha, HP Precision