Memory management - PowerPoint PPT Presentation

1
Memory management
  • Ref: Stallings
  • G.Anuradha

2
What is memory management?
  • The task of subdividing the user portion of
    memory to accommodate multiple processes is
    carried out dynamically by the operating system
    and is known as memory management
  • Memory management terms

3
Memory management requirements
4
Memory management requirements Contd
  • Relocation
  • Users generally don't know where they will be
    placed in main memory
  • May want to swap in at a different place
  • Must deal with user pointers
  • Generally handled by hardware
  • Protection
  • Prevent processes from interfering with the O.S.
    or other processes
  • Often integrated with relocation

5
Memory management requirements Contd
  • Sharing
  • Allow processes to share data/programs
  • Logical Organization
  • Main memory is organized as a linear/1D address
    space of words
  • Secondary memory is similarly organized
  • Most programs are modularized
  • Advantages of modular approach
  • Written and compiled independently
  • Can have different degrees of protection
  • Module level of sharing
  • Physical Organization
  • Transferring data between main memory and
    secondary memory can't be left to the programmer
  • Managing memory-disk transfers is a system
    responsibility

Segmentation
6
Memory Partitioning
  • Fixed partitioning
  • Dynamic partitioning
  • Simple Paging
  • Simple segmentation
  • Virtual memory paging
  • Virtual memory segmentation

7
Fixed partitioning
8
9
Difficulties in equal size fixed partitions
  • A program may be too big to fit into a partition;
    in such cases overlays can be used
  • Main memory utilization is extremely inefficient
  • Leads to internal fragmentation: the block of
    data loaded is smaller than the partition

10
Placement algorithm
  • With equal size partition a process can be loaded
    into a partition as long as there is an available
    partition
  • With unequal size partitions there are two
    possible ways to assign processes to partitions:
  • One process queue per partition
  • A single queue for all partitions

11
Memory assignment for fixed partitioning
12
Advantages and disadvantages of each of the
approaches
  • One process queue per partition
  • Advantages
  • Internal fragmentation is reduced
  • Disadvantages
  • Not optimal from the system point of view
  • Single queue
  • Advantages
  • Degree of flexibility; simple; requires minimal
    OS software and processing overhead
  • Disadvantages
  • Limits the number of active processes in the
    system
  • Small jobs will not utilize partition space
    efficiently

Used in IBM mainframe operating systems, e.g. OS/MFT
13
Dynamic partitioning
Partitions are created as programs are loaded.
Avoids internal fragmentation, but must deal with
external fragmentation.
14
Effect of dynamic partitioning
15
Dynamic partitioning
  • External fragmentation-As time goes on more and
    more fragments are added and the effective
    utilization declines
  • External fragmentation can be overcome using
    compaction
  • Compaction is time consuming

16
Placement algorithm
  • Best-fit: chooses the block that is closest in
    size to the request
  • First-fit: scans memory from the beginning and
    chooses the first available block that is large
    enough
  • Next-fit: scans memory from the location of the
    last placement and chooses the next available
    block that is large enough
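The three placement algorithms can be sketched over a simple free-block list (a minimal sketch in Python; the list layout and the `first_fit`/`best_fit`/`next_fit` names are illustrative, not part of the original slides):

```python
# Each free block is a (start, size) pair; values are illustrative.

def first_fit(free_blocks, request):
    """Scan from the beginning; take the first block large enough."""
    for i, (start, size) in enumerate(free_blocks):
        if size >= request:
            return i
    return None

def best_fit(free_blocks, request):
    """Take the block whose size is closest to the request."""
    candidates = [(size, i) for i, (start, size) in enumerate(free_blocks)
                  if size >= request]
    return min(candidates)[1] if candidates else None

def next_fit(free_blocks, request, last):
    """Scan from the last placement, wrapping around."""
    n = len(free_blocks)
    for k in range(n):
        i = (last + k) % n
        if free_blocks[i][1] >= request:
            return i
    return None

free = [(0, 100), (200, 500), (900, 200)]
print(first_fit(free, 150))    # 1 (first block with size >= 150)
print(best_fit(free, 150))     # 2 (size 200 is closest to 150)
print(next_fit(free, 150, 2))  # 2 (scan starts at the last placement)
```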

17
18
Which amongst them is the best?
  • First fit: simplest, and usually the best and
    fastest
  • Next fit: slightly worse than first fit; it tends
    to allocate from the free block at the end of
    memory, so compaction is required more often
  • Best fit: worst performer; it leaves behind the
    smallest possible fragments, so memory compaction
    must be done more frequently

19
Buddy system
  • Overcomes the drawbacks of both fixed and dynamic
    partitioning schemes

20
Algorithm of buddy system
  • The entire space available for allocation is
    treated as a single block of size 2^U
  • If a request of size s with 2^(U-1) < s <= 2^U
    is made, then the entire block is allocated
  • Otherwise the block is split into two equal
    buddies of size 2^(U-1)
  • This process continues until the smallest block
    greater than or equal to s is generated and
    allocated to the request
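The splitting step above can be sketched with free lists indexed by power-of-2 block size (a minimal sketch; the `alloc` name, the total size, and the request sizes are illustrative):

```python
U = 4                                   # total space is 2^4 = 16 units
free_lists = {k: [] for k in range(U + 1)}
free_lists[U] = [0]                     # one free block of size 16 at address 0

def alloc(s):
    """Allocate the smallest block of size 2^k >= s, splitting as needed."""
    k = 0
    while (1 << k) < s:                 # smallest k with 2^k >= s
        k += 1
    j = k                               # find the smallest free block >= 2^k
    while j <= U and not free_lists[j]:
        j += 1
    if j > U:
        return None                     # no block large enough
    addr = free_lists[j].pop()
    while j > k:                        # split down to size 2^k
        j -= 1
        free_lists[j].append(addr + (1 << j))  # right buddy stays free
    return addr

a = alloc(3)   # needs 2^2 = 4 units -> address 0
b = alloc(3)   # -> address 4, the buddy of the first block
```

Freeing would walk the other way: when a block and its buddy are both free, they coalesce back into one block of twice the size.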

21
Example of a buddy system
22
Tree representation of the buddy system
Modified version used in UNIX kernel memory
allocation
23
Relocation
Relocation is not a major problem with fixed-size
partitions: it is easy to load a process back into
the same partition. Otherwise we must deal with a
process loaded into a new location; memory addresses
may change when it is loaded into a new partition or
if compaction is used. To solve this problem a
distinction is made among several types of
addresses.
24
Relocation
  • Different types of address
  • Logical address- reference to a memory location
    independent of the current assignment of data to
    memory
  • Relative address- example of logical address in
    which the address is expressed as a location
    relative to some known point
  • Physical address- actual location in main memory
  • A hardware mechanism is needed for translating
    relative addresses to physical main memory
    addresses at the time of execution of the
    instructions that contain the references

25
Hardware support for relocation
26
Base/Bounds Relocation
  • Base Register
  • Holds beginning physical address
  • Add to all program addresses
  • Bounds Register
  • Used to detect accesses beyond the end of the
    allocated memory
  • Provides protection to system
  • Easy to move programs in memory
  • Change base/bounds registers
  • Largely replaced by paging
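Base/bounds translation amounts to one compare and one add per access. A minimal sketch, with illustrative register values:

```python
BASE = 0x4000    # physical start of the partition
BOUNDS = 0x1000  # size of the allocated memory

def translate(logical):
    """Add the base to a program address; trap if beyond the bounds."""
    if logical >= BOUNDS:
        raise MemoryError("access beyond end of allocated memory")
    return BASE + logical

print(hex(translate(0x0FFF)))  # 0x4fff
# translate(0x1000) would raise MemoryError (protection trap)
```

Moving the program is then just a matter of copying it and updating the two registers.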

27
Paging
  • Problems with unequal fixed-size and
    variable-size partitions are internal and
    external fragmentation respectively.
  • The process is divided into chunks of the same
    size, called pages
  • Memory is divided into equal-size chunks called
    frames
  • A page can then be loaded into any page frame
  • There will be internal fragmentation only in the
    last page of the process

28
Assignment of Process pages to free frames
29
Page tables
  • When a new process D is brought in, it can still
    be loaded even though there is no contiguous
    memory location to store the process.
  • For this the OS maintains a page table for each
    process
  • The page table shows the frame location for each
    page of the process
  • Within the program, each logical address consists
    of a page number and an offset within the page

30
Page table contd
  • With simple partitioning, a logical address is
    the location of a word relative to the beginning
    of the program
  • The processor translates it into a physical
    address
  • With paging the logical-physical address
    translation is done by hardware

Logical address (page number, offset) -> processor
-> Physical address (frame number, offset)
31
Data structures when process D is stored in main
memory
Paging is similar to fixed-size partitioning.
Differences: 1. partitions are small 2. a program
may occupy more than one partition 3. partitions
need not be contiguous
32
Computation of logical and physical addresses
  • Page size is typically a power of 2 to simplify
    the paging hardware
  • Example (16-bit address, 1K pages)
  • Relative address 1502 is 0000010111011110 in
    binary
  • Top 6 bits (000001): page number is 1
  • Bottom 10 bits (0111011110): offset within the
    page, in this case 478
  • Thus a program can consist of a maximum of 2^6 =
    64 pages of 1K bytes each.
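With a power-of-2 page size the split is pure bit manipulation. A minimal sketch of the 16-bit example above (`split` is an illustrative name):

```python
OFFSET_BITS = 10              # 1K pages -> 10 offset bits
PAGE_SIZE = 1 << OFFSET_BITS  # 1024

def split(addr):
    """Split a logical address into (page number, offset)."""
    page = addr >> OFFSET_BITS       # top 6 bits of a 16-bit address
    offset = addr & (PAGE_SIZE - 1)  # bottom 10 bits
    return page, offset

print(split(1502))  # (1, 478)
```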

33
34
Why is page size a power of 2?
  • The logical addressing scheme is transparent to
    the programmer, assembler, and linker.
  • It is easy to implement a function in hardware to
    perform dynamic address translation at run time.
  • Steps in address translation

35
Logical to physical address translation using
paging
36
Hardware support
  • OS has its own method for storing page tables.
    Pointer to page table is stored with the other
    register values in the PCB, which is reloaded
    whenever the process is loaded.
  • Page table may be stored in special registers if
    the number of pages is small.
  • Page table may be stored in physical memory, with
    a special register, the page-table base register,
    pointing to the page table (the problem is the
    extra time taken for each access)

37
Implementation of Page Table
  • Page table is kept in main memory
  • Page-table base register (PTBR) points to the
    page table
  • Page-table length register (PTLR) indicates size
    of the page table
  • In this scheme every data/instruction access
    requires two memory accesses. One for the page
    table and one for the data/instruction.
  • The two memory access problem can be solved by
    the use of a special fast-lookup hardware cache
    called associative memory or translation
    look-aside buffers (TLBs)
  • Some TLBs store an address-space identifier
    (ASID) in each TLB entry; the ASID uniquely
    identifies each process and provides
    address-space protection for that process

38
Hardware support contd
  • Use a translation look-aside buffer (TLB). The
    TLB stores recently used (page number, frame
    number) pairs.
  • It compares the input page number against the
    stored ones. If a match is found, the
    corresponding frame number is the output; no
    memory access to the page table is required.
  • The comparison is carried out in parallel and is
    fast.
  • A TLB normally has 64 to 1,024 entries.
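The lookup order (TLB first, page table only on a miss) can be sketched with toy tables; the dictionary values here are illustrative, not a hardware model:

```python
page_table = {0: 5, 1: 9, 2: 3}  # page number -> frame number
tlb = {}                         # recently used (page, frame) pairs

def translate(page):
    """Return the frame for a page, consulting the TLB first."""
    if page in tlb:              # TLB hit: no page-table access needed
        return tlb[page]
    frame = page_table[page]     # TLB miss: one extra memory access
    tlb[page] = frame            # cache the pair for next time
    return frame

translate(1)   # miss: reads the page table, caches (1, 9)
translate(1)   # hit: answered from the TLB
```

A real TLB does all comparisons in parallel in hardware; the dictionary lookup only mirrors the hit/miss logic.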

39
Paging Hardware With TLB
40
Memory Protection
  • Memory protection implemented by associating
    protection bit with each frame
  • Valid-invalid bit attached to each entry in the
    page table
  • valid indicates that the associated page is in
    the process's logical address space, and is thus
    a legal page
  • invalid indicates that the page is not in the
    process's logical address space

41
Valid (v) or Invalid (i) Bit In A Page Table
42
Shared pages
  • Possibility of sharing common code
  • This happens if the code is re-entrant (pure)
    code.
  • Re-entrant code is non-self-modifying code: it
    never changes during execution, so two or more
    processes can simultaneously execute the same
    code.

43
Shared Pages Example
44
Structure of page table
  • Hierarchical paging
  • Hashed page tables
  • Inverted page tables

45
Hierarchical paging
46
Summary
  1. Main memory is divided into equal sized frames
  2. Each process is divided into frame-sized pages
  3. When a process is brought in all of its pages are
    loaded into available frames and a page table is
    set up
  4. This approach solves the problems in partitioning

47
Segmentation
  • Segmentation is a memory management scheme that
    supports user view of memory
  • Logical address space is a collection of segments
  • Each segment has a name and length
  • Each address has a segment number and an offset
    within the segment

48
User's View of a Program
49
Logical View of Segmentation
(segments 1-4 mapped from user space into physical
memory space)
50
Segmentation Hardware
51
Example of Segmentation
Segment 2, reference to byte 53: 4300 + 53 = 4353
Segment 3, reference to byte 852: 3200 + 852 = 4052
Segment 0, reference to byte 1222: 6300 + 1222 = 7522
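Segment translation is base + offset with a limit check. A minimal sketch in which the base addresses come from this example, while the segment limits are assumed values:

```python
# segment number -> (base, limit); limits are assumptions for illustration
segment_table = {0: (6300, 2000), 2: (4300, 400), 3: (3200, 1100)}

def translate(segment, offset):
    """Map (segment, offset) to a physical address, trapping on overflow."""
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("offset beyond segment limit")  # protection trap
    return base + offset

print(translate(2, 53))    # 4300 + 53 = 4353
print(translate(3, 852))   # 3200 + 852 = 4052
print(translate(0, 1222))  # 6300 + 1222 = 7522
```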
52
Implementation of Segment Tables
  • Just like page tables, segment tables can be kept
    in registers and accessed directly
  • When the program contains a large number of
    segments, the segment table is kept in memory; a
    segment-table base register and a segment-table
    length register locate it, and limit checks are
    performed on each access

53
Protection and sharing
  • Segments lend themselves to protection because
    each segment is a semantically defined unit of
    the program
  • Code and data can be shared in segmentation:
    segments are shared when entries in the segment
    tables of two different processes point to the
    same physical location

54