Virtual Memory - PowerPoint PPT Presentation

About This Presentation
Title:

Virtual Memory

Description:

goals of admitting new processes (long term), memory allocation (medium term) ... operations such as fetching and flushing page to backing store. Fred Kuhns ... – PowerPoint PPT presentation

Number of Views:41
Avg rating:3.0/5.0
Slides: 51
Provided by: Fre58
Learn more at: http://www.arl.wustl.edu
Transcript and Presenter's Notes

Title: Virtual Memory


1
Virtual Memory
  • Fred Kuhns

2
Memory Management
  • A central component of any operating system
  • Hierarchical layering
  • main memory
  • backing store (secondary storage)
  • file servers (networked)
  • Policies related to the memory requirements of
    processes (i.e. whether all or part is resident)
  • The goals of admitting new processes (long term),
    memory allocation (medium term) and processor
    scheduling (short term) must be considered
    together.
  • A common goal is to optimize the number of runnable
    processes resident in memory

3
UNIX Memory Management
  • UNIX uses a demand-paged virtual memory
    architecture: pages are loaded on first reference
    rather than anticipatorily
  • Memory is managed at the page level
  • page frame = physical page
  • virtual page = page of a virtual address space
  • Basic memory allocation is the responsibility of
    the page-level allocator
  • it has two principal clients
  • the paging system and
  • the kernel memory allocator

4
The Virtual Address Space
(Diagram: the process virtual address space, including the kernel memory region.)
5
Process Address Space (one approach)
(Diagram: a 32-bit process address space with the kernel address space occupying the top half, from 0x7fffffff up to 0xffffffff.)
6
Virtual Memory Goals
  • Run applications larger than physical memory.
  • Run partially loaded programs.
  • Allow more than one program to reside in memory
    simultaneously.
  • Allow programs to be relocated anywhere, at any time
  • Write machine-independent code: programs should
    not depend on the memory architecture.
  • Applications should not have to manage memory
    resources
  • Permit sharing of memory segments or regions.
    For example, read-only code segments should be
    shared between program instances.

7
Virtual Memory Costs
  • Space: translation tables and other data used by
    the VM system reduce the memory available to programs
  • Time: address translation time is added to the
    cost (execution time) of each instruction.
  • Async: page fault handling may result in
    page I/O operations, increasing latency and
    possibly affecting unrelated processes.
  • Overhead: memory management operations have been
    measured to consume up to 10% of the CPU time on
    a busy system.
  • Efficiency: allocating memory in pages may result
    in fragmentation

8
Processes and Memory
  • A process runs on a virtual machine as defined
    by the underlying hardware.
  • Focus is on hardware support for a virtual
    address space
  • virtual addresses are independent of physical memory
  • Key hardware component is the Memory Management
    Unit (MMU)
  • address translation: virtual to physical memory
  • simplifies context switching
  • ensures virtual address space protection

9
Memory allocation
(Diagram: the page-level allocator hands physical pages to its two clients: the paging system, which backs process pages and the buffer cache, and the kernel memory allocator, which supplies network buffers, kernel data structures and temporary storage.)
10
Page-level Allocation
  • Kernel maintains a list of free physical pages.
  • Since kernel and user space programs use virtual
    memory addresses, the physical location of a page
    is not important
  • Pages are allocated from the free list
  • Two principal clients
  • the paging system
  • the kernel memory allocator

11
The Paging System
  • Responsible for allocating and managing the
    address space of processes
  • Primary goal is to allow processes to run in a
    virtual address space and to perform address
    translations transparently
  • In a demand-paged system the page is the basic unit
    of memory allocation, protection and address
    translation. A virtual address is converted to a
    physical page (frame) number and an offset.

12
The Paging System
  • Requirements
  • Address space management
  • Address translation: translation maps used by the
    MMU; a translation may result in a page fault exception
  • Physical memory management: physical memory is used
    as a cache of useful data.
  • Memory protection: hardware support is exploited
  • Memory sharing
  • Monitoring system load
  • Other facilities: for example, memory-mapped
    files or shared libraries

13
Paged Virtual Memory
(Diagram: pages 0..n of the P1 and P2 virtual address spaces map through address translation to page frames in the physical address space; the resident pages form each process's working set, the remaining pages are non-resident.)
14
The Virtual Address Space
  • Address space along with the process's register
    context reflects the current state
  • exec causes the kernel to build a new process image
  • memory regions: text, initialized data,
    uninitialized data, modified data, stack, heap,
    shared memory and shared libraries.
  • These regions may differ in protection,
    initialization and sharing. Protections are usually
    set at page level when allocated.
  • A process may start running before any of its pages
    are resident in memory.

15
Initial Access to Pages
  • Text and initialized data are read in from the
    executable file.
  • Uninitialized data pages are zero-filled
  • Shared libraries are read from the library file
  • The u area and stacks are set up during process
    creation (copied from the parent).

16
Swap Area
  • Swap area: pages are copied to the swap device to
    free up space for running programs.
  • Swapping plus paging gives a two-tiered scheme
  • Requires a swap map to locate swapped-out pages
  • The MMU sets the dirty bit for a page if it has been
    modified
  • Text pages need not be backed by swap

17
Translation Maps
  • Hardware translation tables: each access to
    memory must have its virtual address translated
    to a physical memory location
  • Page tables provide the MMU with this mapping
  • MMU uses the TLB to cache recent translations
  • Other maps used by the OS
  • Address space map: describes a virtual address
    space for a process or the kernel
  • Physical memory map: used by the kernel to perform
    reverse maps and to describe a page's ownership,
    references and protections.
  • Backing store map: used to locate non-resident
    pages

18
Replacement algorithms
  • Deciding when to reclaim a page: defined in terms
    of the criteria used for selecting pages to reclaim
  • Reference string: pages referenced over time
  • fault rate: page faults over a reference string of
    some length
  • Algorithms are evaluated based on effectiveness on
    collected (i.e. real) reference strings
  • Implementations usually require sample reference
    strings
  • Most UNIX implementations use a global replacement
    policy

19
Working set Model
  • Assumes a slowly changing locality of reference
    (i.e. processes tend to localize references to a
    small set of pages)
  • This set changes periodically
  • Implement using an approximate set
  • number of pages held versus fault rate.
  • Set high and low water marks
  • The model leads to the following assumption, which
    leads to an LRU policy
  • if a page has been accessed recently then assume
    it will be needed again, else assume it will not be
    needed
  • Most kernels implement a scheme whereby pages are
    periodically freed and placed in a free pool.

20
Typical Paging Systems
(Diagram: CPU and DRAM, with uninitialized data zero-filled, stack and heap allocated on demand, text and initialized data copied from the a.out file via UFS, and allocated virtual pages backed by the swap disk.)
21
Hardware Requirements
  • Protection: prevent a process from changing its own
    memory maps
  • Residency: CPU distinguishes between resident and
    non-resident pages
  • Loading: load pages and restart interrupted
    program instructions
  • Dirty: determine if pages have been modified

22
Memory Management Unit
  • Translates virtual addresses
  • page tables
  • Translation Lookaside Buffer (TLB)
  • Page tables
  • one for kernel addresses
  • one or more for user space processes
  • Page Table Entry (PTE): one per virtual page
  • 32 bits: page frame, protection, valid,
    modified, referenced

23
Translation
  • Virtual address = virtual page number + offset
  • MMU finds the PTE for the virtual page
  • extracts the physical page number and adds the offset
  • On failure the MMU raises an exception (page fault)
  • bounds error - outside address range
  • validation error - non-resident page
  • protection error - access not permitted
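The split of a virtual address into page number and offset is simple bit arithmetic. A minimal sketch, assuming 32-bit addresses and 4 KB pages (function names are illustrative, not from any real kernel):

```c
#include <stdint.h>

#define PAGE_SHIFT 12                     /* assume 4 KB pages */
#define PAGE_SIZE  (1u << PAGE_SHIFT)

/* Split a virtual address into virtual page number and offset. */
static inline uint32_t vpn_of(uint32_t vaddr)    { return vaddr >> PAGE_SHIFT; }
static inline uint32_t offset_of(uint32_t vaddr) { return vaddr & (PAGE_SIZE - 1); }

/* Given the frame number from the PTE, rebuild the physical address. */
static inline uint32_t phys_addr(uint32_t frame, uint32_t offset)
{
    return (frame << PAGE_SHIFT) | offset;
}
```

The MMU performs exactly this arithmetic in hardware: the high bits index the page tables, the low bits pass through unchanged.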

24
Some details
  • Limiting page table size
  • segments
  • page the page table itself (multi-level page table)
  • MMU has registers which point to the current page
    table(s)
  • kernel and MMU can modify the page tables and
    registers

25
TLB
  • Problem: page table walks may require multiple
    memory accesses per instruction
  • Solutions
  • rely on HW caching (virtual address cache)
  • cache the translations themselves - the TLB

26
TLB Details
  • An associative cache of address translations
  • Entries may contain a tag identifying the process
    as well as the virtual address. Why is this
    important?
  • The MMU typically manages the TLB
  • The kernel may need to invalidate entries. Why?
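The process tag answers the slide's question: without it, every context switch would require flushing the whole TLB, since two processes can map the same virtual page to different frames. A hypothetical direct-mapped TLB sketch (the struct layout and sizes are illustrative assumptions, not a real MMU's):

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 64

struct tlb_entry {
    bool     valid;
    uint16_t asid;    /* address-space identifier: the per-process tag */
    uint32_t vpn;
    uint32_t frame;
};

static struct tlb_entry tlb[TLB_ENTRIES];

/* Hit only if the entry matches both the virtual page AND the process tag. */
static bool tlb_lookup(uint16_t asid, uint32_t vpn, uint32_t *frame)
{
    struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];
    if (e->valid && e->asid == asid && e->vpn == vpn) {
        *frame = e->frame;
        return true;              /* TLB hit */
    }
    return false;                 /* miss: walk the page tables */
}

static void tlb_insert(uint16_t asid, uint32_t vpn, uint32_t frame)
{
    struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];
    e->valid = true; e->asid = asid; e->vpn = vpn; e->frame = frame;
}

/* Demo: process 1's translation does not leak to process 2. */
static int tlb_demo(void)
{
    uint32_t f = 0;
    tlb_insert(1, 5, 100);
    if (!tlb_lookup(1, 5, &f) || f != 100) return -1;  /* same process: hit */
    if (tlb_lookup(2, 5, &f)) return -2;               /* other process: miss */
    return 0;
}
```

The kernel must still invalidate entries when it changes a page table mapping; otherwise the TLB would keep serving the stale translation.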

27
Address Translation - General
(Diagram: the CPU issues a virtual address to the MMU; the resulting physical address goes to the cache, which returns data from global memory.)
28
Address Translation Overview
(Diagram: the MMU translates a virtual address to a physical address using the TLB, consulting the page tables on a miss; the physical address then accesses a physical cache.)
29
Cache/Main-Memory Structure
(Diagram: (a) main memory of 2^n addressable words divided into blocks of k words; (b) a cache of C slots, numbered 0 to C-1, each holding a tag plus one block of k words.)
30
SPARC Reference MMU
(Diagram: SPARC Reference MMU translation. The context table pointer register locates the context table; the context register (12 bits, up to 4096 contexts) selects the current entry, a PTD pointing at the level-1 table. The virtual address is split into index 1 (8 bits, 256-entry level-1 table), index 2 (6 bits), index 3 (6 bits) and a 12-bit offset; levels 1 and 2 yield PTDs, level 3 yields the PTE, whose physical page number is combined with the offset to form the physical address. A level-1 entry maps 16 MB (24 bits); a page is 4 KB (12 bits).)
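Assuming the standard SPARC Reference MMU split of 8/6/6 index bits plus a 12-bit page offset, the three table indices can be extracted as below (function names are illustrative):

```c
#include <stdint.h>

/* SPARC Reference MMU: va = index1(8) | index2(6) | index3(6) | offset(12) */
static inline unsigned srmmu_idx1(uint32_t va) { return (va >> 24) & 0xFF; } /* 256-entry level 1 */
static inline unsigned srmmu_idx2(uint32_t va) { return (va >> 18) & 0x3F; } /* 64-entry level 2  */
static inline unsigned srmmu_idx3(uint32_t va) { return (va >> 12) & 0x3F; } /* 64-entry level 3  */
static inline unsigned srmmu_off(uint32_t va)  { return va & 0xFFF; }        /* 4 KB page offset  */
```

Each index selects a PTD (levels 1 and 2) or the final PTE (level 3); a single level-1 entry therefore covers 2^24 bytes = 16 MB of the address space.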
31
Page Table Descriptor/Entry
Page Table Descriptor (PTD)
  • page table pointer (high bits) + type (bits 1-0)
Page Table Entry (PTE)
  • physical page number (high bits), C (bit 7),
    M (bit 6), R (bit 5), ACC (bits 4-2),
    type (bits 1-0)
Type: PTD, PTE or Invalid. C - Cacheable, M -
Modified, R - Referenced, ACC - Access permissions.
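The PTE fields above can be decoded with shifts and masks. A sketch assuming the SPARC Reference MMU layout shown on the slide (bits 1-0 type, 4-2 ACC, 5 R, 6 M, 7 C, remaining high bits the physical page number):

```c
#include <stdint.h>

/* Decode a SPARC Reference MMU page table entry. */
#define PTE_TYPE(pte)  ((pte) & 0x3u)          /* PTD / PTE / Invalid */
#define PTE_ACC(pte)   (((pte) >> 2) & 0x7u)   /* access permissions  */
#define PTE_REF(pte)   (((pte) >> 5) & 0x1u)   /* R: referenced       */
#define PTE_MOD(pte)   (((pte) >> 6) & 0x1u)   /* M: modified (dirty) */
#define PTE_CACHE(pte) (((pte) >> 7) & 0x1u)   /* C: cacheable        */
#define PTE_PPN(pte)   ((pte) >> 8)            /* physical page number */
```

The M and R bits are what the paging system copies back into the page struct (via hat_pagesync, discussed later) when making replacement decisions.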
32
SVR4 VM Architecture
  • File mapping: two interpretations
  • Used as a fundamental organizational scheme: the
    entire address space is viewed as a collection of
    mappings to different objects (such as files)
  • Applications map a file into their address space
  • Types of mappings: shared and private
  • Memory object: represents a mapping from a region
    of memory to backing store (swap, local/remote
    file, frame buffer)
  • VM provides the common framework; memory objects
    provide the specific implementation
  • operations such as fetching and flushing pages to
    backing store
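Mapping a file into the address space can be sketched with the standard POSIX mmap() interface (the file path here is a hypothetical temp file created just for the demo):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a small file MAP_PRIVATE and access it as memory.
   Returns 0 on success, -1 on any failure. */
static int demo_file_map(void)
{
    char path[] = "/tmp/vm_demo_XXXXXX";      /* throwaway temp file */
    int fd = mkstemp(path);
    if (fd < 0) return -1;
    if (write(fd, "hello", 5) != 5) { close(fd); unlink(path); return -1; }

    char *p = mmap(NULL, 5, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { close(fd); unlink(path); return -1; }

    int ok = (memcmp(p, "hello", 5) == 0);    /* file contents seen as memory */
    p[0] = 'H';                               /* copy-on-write: not written back */
    ok = ok && (p[0] == 'H');

    munmap(p, 5);
    close(fd);
    unlink(path);
    return ok ? 0 : -1;
}
```

MAP_PRIVATE gives copy-on-write semantics: the store to p[0] dirties a private anonymous copy of the page, while MAP_SHARED would propagate stores back to the file.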

33
VM
  • Address space is a set of mappings to data
    objects.
  • An address is only valid if it is mapped to an
    existing object
  • File system provides the name space and
    mechanisms to access data.
  • Uses the vnode layer to interact with the file
    system.
  • Each named memory object is associated with a
    vnode (but a vnode may map to many objects)
  • Unnamed objects represented by anonymous objects
  • Physical memory is treated as a cache for the
    data objects
  • Page is the smallest unit of allocation,
    protection, address translation and mapping.
  • Address space can be thought of as an array of
    pages

34
File Mapping Versus read/write
(Diagram: read/write copies data from the file through the buffer cache into P1's pages; mmap() instead maps the file directly into the address space via the virtual memory system, avoiding the extra copy.)
35
Fundamental Abstractions (data structs)
  • Page (struct page)
  • Address Space (struct as)
  • segment (struct seg)
  • Hardware Address Translation (struct hat)
  • Anonymous Page (struct anon)

36
VM Architecture
(Diagram: process A's faults and fork/exec requests enter the AS layer, which calls the HAT to maintain the page tables used for address translation (virtual address to physical page and offset); beneath the AS layer, the vnode, anon, swap and page layers connect memory objects to persistent storage.)
37
Physical Memory
  • Divided into paged and non-paged regions
  • Paged region is described by an array of page
    structures, each describing one logical page
    (a cluster of hardware pages)
  • Each physical page (page frame) is
  • described by a struct page
  • mapped to some memory object, with the memory
    object represented by a vnode
  • a page's identity or name is <vnode, offset>

38
Page Struct
  • The page struct stores the offset and a pointer to
    the corresponding vnode
  • may sit on several linked lists; has 3 sets of
    pointers
  • hash table keyed by vnode and offset
  • the vnode's list of all object pages currently
    in memory
  • free page list, or list of pages waiting to be
    written to backing store
  • Reference count
  • synchronization flags (lock, wanted, in-transit)
  • Copies of modified and referenced bits
  • HAT field used to locate all translations for
    this page

39
AS Layer
  • High-level abstraction describing the virtual
    address space.
  • References a linked list of seg (segment) structs
    that represent non-overlapping page-aligned
    address regions
  • Contains the hat structure and a hint to the last
    segment that had a page fault
  • Supports two sets of operations: those operating
    on the entire address space and those that affect
    ranges within the space.

40
Segment Drivers
  • Segments represent mappings between backing store
    and address regions
  • Segment represents an abstract base class with
    specific drivers being derived classes.
  • The seg struct contains pointers to
  • a seg_ops vector; these represent the virtual
    functions, i.e. the type-dependent interface to
    the class
  • Methods: dup, fault, faulta, setprot,
    checkprot, unmap, swapout, sync
  • a type-dependent data structure which holds private
    data
  • Each segment type defines a create routine
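The "abstract base class" pattern the slide describes is plain C with function pointers: a seg struct carries an ops vector plus opaque private data. A minimal sketch; the method names follow the slide, but the signatures and the dummy driver are illustrative assumptions:

```c
#include <stddef.h>

struct seg;

/* The "virtual function table": type-dependent interface of a segment. */
struct seg_ops {
    int  (*dup)(struct seg *s, struct seg *newseg);
    int  (*fault)(struct seg *s, unsigned long addr);
    int  (*setprot)(struct seg *s, unsigned long addr, size_t len, int prot);
    void (*unmap)(struct seg *s, unsigned long addr, size_t len);
};

struct seg {
    unsigned long         base;   /* start of the region */
    size_t                size;
    const struct seg_ops *ops;    /* type-dependent interface */
    void                 *data;   /* type-dependent private data */
};

/* The AS layer dispatches through the vector without knowing the driver. */
static int seg_fault(struct seg *s, unsigned long addr)
{
    return s->ops->fault(s, addr);
}

/* Dummy driver used only to demonstrate the dispatch. */
static int dummy_fault(struct seg *s, unsigned long addr)
{
    (void)s;
    return (int)(addr & 0xFF);
}
static const struct seg_ops dummy_ops = { .fault = dummy_fault };

static int demo_dispatch(void)
{
    struct seg s = { 0, 0, &dummy_ops, NULL };
    return seg_fault(&s, 0x1234);   /* resolved via the ops vector */
}
```

A real driver such as seg_vn would supply its own ops vector and keep a segvn_data structure behind the data pointer.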

41
Segment Drivers
  • Different types: seg_vn, seg_map, seg_dev,
    seg_kmem
  • seg_vn: vnode segment; maps to regular files and
    anonymous objects.
  • seg_map: one per system; used by the kernel for
    transient file mappings when implementing
    read/write.

42
Process Address Space
(Diagram: the proc struct points to a struct as containing the segment list, a hint and the struct hat. Each seg struct (base, size) for the text, data and stack regions points to a segvn_data struct and the seg_vn ops vector; the u area's seg struct points to a segu_data struct and the seg_u ops vector.)
43
Anonymous Pages
  • A page with no permanent storage, created when a
    process writes to a MAP_PRIVATE object
  • Pages can be discarded when the process terminates
    or unmaps the page.
  • The swap device is used as backing store
  • Example: initialized data pages become anonymous
    pages when modified
  • A related but distinct concept is the anonymous
    object.
  • there is one anonymous object in the system,
    represented by the NULL vnode pointer (/dev/zero);
    it is the source of all zero-filled pages.
  • uninitialized data and stack regions are
    MAP_PRIVATE to it
  • Shared memory regions are MAP_SHARED to it
  • anonymous-object pages are anonymous pages
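The /dev/zero role can be seen from user space: a MAP_PRIVATE mapping of it yields zero-filled pages that become anonymous (swap-backed) on first write. A sketch assuming a POSIX system that exposes /dev/zero:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map one zero-filled page privately from /dev/zero.
   Returns 0 on success, -1 on failure. */
static int demo_anon(void)
{
    const size_t len = 4096;
    int fd = open("/dev/zero", O_RDWR);
    if (fd < 0) return -1;

    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    close(fd);                               /* mapping survives the close */
    if (p == MAP_FAILED) return -1;

    int ok = (p[0] == 0 && p[len - 1] == 0); /* zero-filled on first touch */
    p[0] = 1;                                /* write: page becomes anonymous */
    ok = ok && (p[0] == 1);

    munmap(p, len);
    return ok ? 0 : -1;
}
```

This is exactly how uninitialized data and stack pages start out: mapped private to the anonymous object, then individually given swap-backed identities as the process dirties them.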

44
Anonymous
(Diagram: a segvn_data's anon_map points into an array of anon references, one per page of the segment, with per-page protections kept alongside. Each swap device has a swap info structure holding the device vnode, an anon array with one entry for each page in swap, and a free list; the vnode layer reads and writes pages to the swap device.)
45
Hardware Address Translation Layer
  • Isolates all hardware-dependent code from the rest
    of the VM system
  • Responsible for all address translations
  • sets up and maintains the mappings used by the MMU
    (page tables, directories etc.)
  • each process has its own set of translations
  • uses a struct hat which is part of the as struct
  • Operations
  • hat layer: hat_alloc, hat_free, hat_dup,
    hat_swapin, hat_swapout (build/rebuild tables
    when swapping)
  • range of pages: hat_chgprot, hat_unload,
    hat_memload, hat_devload
  • all translations of a page: hat_pageunload,
    hat_pagesync (update modified and referenced bits
    using values in the page struct)

46
Misc Topics
  • Pagedaemon implements the page reclamation
    (replacement) policy, using a two-handed clock
    algorithm
  • Swapping: the swapper daemon will swap out
    processes when free space falls below a threshold
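The two-handed clock can be sketched in a few lines: the front hand clears each page's reference bit, and the back hand, a fixed distance behind, reclaims pages whose bit is still clear (i.e. not referenced since the front hand passed). A toy model with illustrative names and sizes:

```c
#include <stdbool.h>

#define NPAGES 8

static bool referenced[NPAGES];
static bool resident[NPAGES];

/* One step of the two-handed clock. The back hand trails the front hand
   by `spread` pages; returns the next front-hand position. */
static int clock_scan(int front, int spread, int *reclaimed)
{
    int back = (front - spread + NPAGES) % NPAGES;
    if (resident[back] && !referenced[back]) {  /* untouched since front pass */
        resident[back] = false;                 /* reclaim the page */
        (*reclaimed)++;
    }
    referenced[front] = false;                  /* front hand clears the bit */
    return (front + 1) % NPAGES;
}

/* Demo: with no references at all, a full sweep reclaims every page. */
static int demo_sweep(void)
{
    int reclaimed = 0, front = 0;
    for (int i = 0; i < NPAGES; i++) { resident[i] = true; referenced[i] = false; }
    for (int i = 0; i < NPAGES; i++)
        front = clock_scan(front, 2, &reclaimed);
    return reclaimed;
}
```

The spread between the hands sets how long a page has to prove it is still in use: a page referenced after the front hand clears its bit survives the back hand's visit.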

47
Misc Topics Cont
  • Non-page-aligned backing store
  • Virtual swap space in Solaris (swapfs)
  • includes physical memory
  • dynamic reallocation

48
Assessment - Advantages
  • Modular design: OO-style interface encapsulating
    functionality
  • Portability: the HAT layer simplifies porting
  • Sharing: copy-on-write sharing, MAP_SHARED,
    shared file mappings
  • Lightweight file access: mmap
  • Enables shared library support
  • Leverages existing interfaces (vnode)
  • Integrates the VM system and buffer cache
  • Facilitates breakpoint insertion with MAP_PRIVATE

49
Assessment - Disadvantages
  • Increased table size and maintenance requirements
  • Increased time to read program text and
    initialized data from a file rather than from swap
  • Performance problems due to longer code paths,
    complexity and indirection
  • disk addresses are computed dynamically
  • Invariant interfaces to abstractions may lead to
    inefficient implementations (no control over
    implementation)
  • copy-on-write may not be faster than anticipatory
    copying
  • Swap space is allocated on a per-page basis,
    preventing optimizations like clustering and
    prepaging

50
Improvements
  • Reduce the high fault rate caused by lazy
    evaluation