REAL-TIME PROGRAMMING - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: REAL-TIME PROGRAMMING


1
REAL-TIME PROGRAMMING
  • Real-time Operating Systems

Prof. Dr Sanja Vraneš, Institute Mihailo Pupin
Phone: 2771398, E-mail: Sanja.Vranes@institutepupin.com
2
Real-Time Operating Systems
  • An Operating System is a software program or set
    of programs that mediate access between physical
    devices (such as processor, memory, keyboard,
    mouse, monitor, disk drive, network connection,
    etc.) and application programs (such as a word
    processor, Web browser or RT application)
  • A Real Time Operating System or RTOS is an
    operating system that has been developed for
    real-time applications.

3
Operating Systems (definitions)
  • An OS is an abstract machine which can run a
    number of programs.
  • The programs can talk to the OS using special
    methods called system calls.
  • The OS is also a government/police which
    controls the hardware so that:
  • the processes do not destroy each other
    (protection),
  • the hardware is used efficiently
    (multiprogramming),
  • it is convenient for programmers (e.g. by
    providing virtual memory).

4
Examples of system calls
  • loading a file from disk and starting execution
  • creating a file
  • reading and writing a file
  • removing a file
  • asking what time it is
  • creating a new process
  • terminating execution
  • killing another process

5
Operating Systems categories
  • Whether multiple programs can run on it
    simultaneously
  • Whether it can take advantage of multiple
    processors
  • Whether multiple users can run programs on it
    simultaneously
  • Whether it can reliably prevent application
    programs from directly accessing hardware devices
  • Whether it has built-in support for graphics
  • Whether it has built-in support for networks
  • Whether it has built-in support for
    fault-tolerance

6
Some definitions
  • Definition of a process
  • A process is an executable entity, a piece of
    code embodying one or more RTS functions
  • the smallest execution unit that can be scheduled
  • Each process has, among other things
  • instruction pointer (PC), integer and
    floating-point registers, memory,
  • process state: one of RUNNING, READY, WAITING
  • credentials: privileges associated with the
    owner
  • The scheduler decides which process should run.

7
Process, Thread and Task
  • A process is a program in execution ...
  • Starting a new process is a heavy job for the OS:
    memory has to be allocated, and lots of data
    structures and code must be copied: memory pages
    (in virtual memory and in physical RAM) for code,
    data, stack, and for file and other descriptors;
    registers in the CPU; queues for scheduling; IPC
    constructs; etc.
  • A thread is a lightweight process, in the sense
    that different threads share the same address
    space.
  • They share global and static variables, file
    descriptors, code area, and heap, but they have
    their own thread status, program counter,
    registers, and stack.
  • Shorter creation and context switch times, and
    faster IPC. A context switch has to save the state
    of the currently running task (registers, stack
    pointer, PC, etc.) and restore that of the new
    task.
  • Tasks are mostly threads (see the sketch below)
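
As a concrete illustration of the shared address space (a minimal sketch,
not part of the original slides; assumes POSIX threads): a global variable
written by one thread is directly visible to the other, because both
threads live in the same process image.

    #include <pthread.h>
    #include <stdio.h>

    static int shared_flag = 0;             /* lives in the common data area */

    static void *producer(void *arg)
    {
        shared_flag = 42;                   /* write through the shared address space */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, producer, NULL);  /* far cheaper than fork(): nothing is copied */
        pthread_join(t, NULL);
        printf("shared_flag = %d\n", shared_flag); /* prints 42: same memory, no IPC needed */
        return 0;
    }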

8
Some definitions
  • Multiprocessing: while one process is waiting for
    some event, such as a disk access being completed,
    instead of just doing nothing, the OS runs
    another process in the meantime. This makes
    more efficient use of the hardware.
  • Virtual memory: a technique to make the RAM look
    much larger than it actually is, so that
    programmers don't have to worry (so much) about
    how much memory is used.

9
Onion model of OS (layers, from the hardware core outward):
Hardware, Device management, Task scheduling,
Memory management, Task construction, I/O management,
File management, User interface
10
RTOS Characteristics
  • multiprocessing
  • fast context switching
  • small size
  • fast processing of external interrupts
  • task scheduling can be enabled by external
    interrupts
  • efficient interprocess communication and
    synchronization
  • no memory protection: the kernel and user code
    execute in the same space
  • limited memory usage

11
Comparison of RTOS with conventional OS
12
RTOS Functions
  • Task/process management
  • create/delete, suspend/resume task
  • manage scheduling information
  • priority, round-robin, other scheduling policy,
    etc.
  • intertask communication, synchronization
  • semaphore
  • mailbox
  • event flag
  • memory management
  • dynamic memory allocation
  • memory locking
  • device management
  • standard character I/O service
  • device drivers
  • fault tolerance
  • hardware fault handler
  • exception handling

13
Task management
  • Challenges for a RTOS
  • When creating a RT task, the task has to get its
    memory without delay; this is difficult because
    memory has to be allocated and a lot of data
    structures and the code segment must be
    copied/initialized
  • The memory blocks for RT tasks must be locked in
    main memory to avoid access latencies due to
    swapping (see the sketch below)
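
A minimal sketch of memory locking (not from the slides; assumes a POSIX
system), pinning all of a process's pages into RAM with mlockall() so that
swapping can never add latency to a real-time code path:

    #include <sys/mman.h>   /* mlockall(), munlockall() */
    #include <stdio.h>

    int main(void)
    {
        /* Lock all current and future pages of this process into RAM. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
            perror("mlockall");     /* typically needs root / CAP_IPC_LOCK */
            return 1;
        }

        /* ... real-time work runs here with no page faults due to swapping ... */

        munlockall();               /* undo the locking before exit */
        return 0;
    }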

14
Interrupt Handling
  • Types of interrupts
  • Hardware interrupt (timer, network card, ...): the
    interrupt handler runs as a separate task in a
    different context.
  • Software interrupt (a divide by zero, a memory
    segmentation fault, etc.): the interrupt handler
    runs in the context of the interrupting task

15
Handling an Interrupt
1. Normal program execution
2. Interrupt occurs
3. Processor state saved
4. Interrupt service routine (handler) runs
5. Interrupt service routine terminates
6. Processor state restored
7. Normal program execution resumes
16
Challenges in RTOS
  • Interrupt latency
  • the time between the arrival of an interrupt and
    the start of the corresponding ISR.
  • modern processors with multiple levels of caches
    and instruction pipelines that need to be reset
    before the ISR can start might have longer
    latency.
  • Interrupt enable/disable
  • the capability to enable or disable (mask)
    interrupts individually.
  • Interrupt priority
  • a new interrupt is blocked if an ISR of a
    higher-priority interrupt is still running.
  • the ISR of a lower-priority interrupt is
    preempted by a higher-priority interrupt.
  • the priority problems in task scheduling also
    show up in interrupt handling.
  • Interrupt nesting
  • an ISR servicing one interrupt can itself be
    pre-empted by another interrupt coming from the
    same peripheral device.
  • Interrupt sharing
  • allows different devices to be linked to the
    same hardware interrupt.

17
Interrupt Service Routines
  • Acknowledge the interrupt (tell the hardware)
  • Copy peripheral data into a buffer
  • Indicate to other code that data has arrived
  • Longer reactions to an interrupt are performed
    outside the interrupt routine
  • E.g., the ISR causes a process to start or resume
    running (see the sketch below)
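
A minimal sketch of this pattern (illustrative only; the device registers
and their addresses are hypothetical): the ISR acknowledges the device,
copies the data into a buffer, and sets a flag, while all longer processing
is deferred to task level.

    #include <stdint.h>
    #include <stdbool.h>

    /* Hypothetical memory-mapped device registers (names and addresses are made up). */
    #define UART_STATUS  (*(volatile uint32_t *)0x40001000u)
    #define UART_DATA    (*(volatile uint32_t *)0x40001004u)
    #define UART_IRQ_ACK (*(volatile uint32_t *)0x40001008u)

    static volatile uint8_t  rx_buf[64];
    static volatile unsigned rx_len   = 0;
    static volatile bool     rx_ready = false;  /* flag checked by task-level code */

    void uart_isr(void)                         /* kept as short as possible */
    {
        UART_IRQ_ACK = 1;                       /* 1. acknowledge the interrupt */
        while ((UART_STATUS & 1u) && rx_len < sizeof rx_buf)
            rx_buf[rx_len++] = (uint8_t)UART_DATA;  /* 2. copy data into a buffer */
        rx_ready = true;                        /* 3. tell other code that data has arrived */
        /* Parsing, protocol handling, etc. happen later in a normal task. */
    }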

18
Multitasking Techniques
  • big loop (cyclic executive)
  • applications are implemented in a loop
  • interrupt-driven scheduling
  • tasks are activated by interrupts
  • foreground/background (see the sketch below)
  • foreground: interrupt service (interrupt-driven
    scheduling)
  • background: non-real-time tasks (big loop)
  • multitasking executive, RTOS
  • multiple tasks are scheduled using
    multiprogramming and advanced scheduling
    mechanisms
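
A minimal foreground/background sketch (illustrative only, reusing the
hypothetical rx_ready flag from the ISR sketch above): the interrupt
handler does the urgent foreground work, and the endless background loop
does the non-real-time processing.

    #include <stdbool.h>

    extern volatile bool rx_ready;    /* set by the foreground ISR (see above) */

    static void process_received_data(void) { /* non-time-critical work */ }
    static void do_housekeeping(void)        { /* logging, diagnostics, ... */ }

    int main(void)
    {
        for (;;) {                    /* the "big loop": background processing */
            if (rx_ready) {           /* event flagged by the foreground ISR */
                rx_ready = false;
                process_received_data();
            }
            do_housekeeping();
        }
    }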

19
TCB (Task Control Block)
  • Id
  • Task state (e.g. idling)
  • Task type (hard, soft, background ...)
  • Priority
  • Other task parameters
  • period
  • relative deadline
  • absolute deadline
  • Context pointer
  • Pointer to program code, data area, stack
  • Pointer to IPC resources (semaphores,
    mailboxes etc)
  • Pointer to other TCBs (preceding, next, queues
    etc)
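
A sketch of how such a TCB might look as a C structure (field names are
illustrative and not taken from any particular RTOS):

    #include <stdint.h>

    typedef enum { TASK_READY, TASK_RUNNING, TASK_WAITING, TASK_IDLING } task_state_t;
    typedef enum { TASK_HARD, TASK_SOFT, TASK_BACKGROUND } task_type_t;

    typedef struct tcb {
        uint32_t      id;
        task_state_t  state;
        task_type_t   type;
        uint8_t       priority;

        uint32_t      period;               /* other task parameters */
        uint32_t      relative_deadline;
        uint32_t      absolute_deadline;

        void         *context;              /* saved registers for the context switch */
        void         *code;                 /* pointers to code, data area, stack */
        void         *data;
        void         *stack;
        void         *ipc_resources;        /* semaphores, mailboxes, ... */

        struct tcb   *prev, *next;          /* links to other TCBs / scheduler queues */
    } tcb_t;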

20
The Kernel
  • Kernel
  • performs basic functions for task management and
    intertask communication
  • usually resident in main memory
  • Non-preemptive kernel
  • once executed on CPU, a task continues executing
    until it releases control
  • in case of interrupt, it resumes execution upon
    completion of ISR
  • advantages
  • completion time of task execution is predictable
  • no dynamic interference of data between tasks

21
Disadvantages
  • high priority tasks may be blocked unnecessarily
  • real-time properties cannot be guaranteed

[Timing diagram: a low priority task (TL) is running when an interrupt
occurs; the ISR makes the high priority task (TH) ready, but TH only
starts after TL terminates and relinquishes the CPU.]
22
Preemptive scheduling
  • upon transition to the ready state, a high priority
    task will be dispatched by suspending the lower
    priority task in execution
  • in case of an interrupt, a higher priority task will
    be scheduled upon completion of the ISR
  • advantage
  • higher priority tasks have predictable completion
    times / real-time behaviour
  • disadvantage
  • conflicts in using shared data and resources

23
Preemptive scheduling (cont.)
[Timing diagram: a low priority task (TL) is running when an interrupt
occurs; the ISR makes the high priority task (TH) ready, and TH preempts
TL as soon as the ISR completes.]
24
Non-preemptive priority based scheduling
25
Preemptive priority based scheduling
26
Preemptive Round-robin Scheduling
27
Task scheduling
  • scheduling policies, scheduling disciplines
  • non-real-time
  • FCFS, round robin, priority-based, shortest job
    first, etc.
  • real-time
  • priority-based, earliest deadline first, etc.
  • priority assignment
  • criteria
  • importance (criticality), period, deadline
  • execution time, elapsed time after becoming ready
  • method
  • dynamic (at runtime)
  • static (at initialization)

28
Example exam problem
Assume that at a given moment the ready list
consists of 3 processes A, B, C with the following
characteristics
  • After 5 unit time intervals an interrupt occurs,
    to which an interrupt process D of priority 10 and
    duration 4 unit intervals is attached. Draw the
    timing diagrams of the processes
  • a) if pre-emptive priority scheduling is used
    (Pre-Emptive Priority Scheduling)
  • b) if pre-emptive round-robin scheduling is used
    (Pre-Emptive Round Robin Scheduling), where the
    list is initially ordered by decreasing priority

29
[Timing diagram for case b)]
30
Task state transitions
31
Synchronisation and Communication
  • The correct behaviour of a concurrent program
    depends on synchronisation and communication
    between its processes
  • Synchronisation: the satisfaction of constraints
    on the interleaving of the actions of processes
    (e.g. an action by one process only occurring
    after an action by another)
  • Communication: the passing of information from
    one process to another
  • Concepts are linked since communication requires
    synchronisation, and synchronisation can be
    considered as contentless communication.
  • Data communication is usually based upon either
    shared variables or message passing.

32
Shared Variable Communication
  • Examples: busy waiting, semaphores and monitors
  • Unrestricted use of shared variables is
    unreliable and unsafe due to multiple update
    problems
  • Consider two processes updating a shared
    variable, X, with the assignment X := X + 1
  • load the value of X into some register
  • increment the value in the register by 1 and
  • store the value in the register back to X
  • As the three operations are not indivisible, two
    processes simultaneously updating the variable
    could follow an interleaving that would produce
    an incorrect result (see the sketch below)
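
A runnable sketch of this lost-update problem (illustrative only; POSIX
threads assumed): two threads increment the same variable a million times
each without protection, so some increments are usually lost.

    #include <pthread.h>
    #include <stdio.h>

    static long X = 0;                     /* the shared variable */

    static void *updater(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            X = X + 1;                     /* load, increment, store: not indivisible */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, updater, NULL);
        pthread_create(&t2, NULL, updater, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Usually prints less than 2000000 because of interleaved updates. */
        printf("X = %ld (expected 2000000)\n", X);
        return 0;
    }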

33
Avoiding Interference
  • The parts of a process that access shared
    variables must be executed indivisibly with
    respect to each other
  • These parts are called critical sections
  • The required protection is called mutual
    exclusion

34
Condition Synchronisation
  • Condition synchronisation is needed when a
    process wishes to perform an operation that can
    only sensibly, or safely, be performed if another
    process has itself taken some action or is in
    some defined state
  • E.g. a bounded buffer needs 2 condition
    synchronisations
  • the producer processes must not attempt to
    deposit data into the buffer if the buffer is
    full
  • the consumer processes cannot be allowed to
    extract objects from the buffer if the buffer is
    empty

[Diagram: a circular bounded buffer with head and tail pointers]
35
Busy Waiting
  • One way to implement synchronisation is to have
    processes set and check shared variables that are
    acting as flags
  • Busy waiting algorithms are in general
    inefficient: they involve processes using up
    processing cycles when they cannot perform useful
    work
  • Even on a multiprocessor system they can give
    rise to excessive traffic on the memory bus or
    network (if distributed)

36
Semaphores
  • A semaphore is a non-negative integer variable
    that, apart from initialization, can only be acted
    upon by two procedures P (or WAIT) and V (or
    SIGNAL)
  • WAIT(S): If the value of S > 0 then decrement
    its value by one; otherwise delay the process
    until S > 0 (and then decrement its value).
  • SIGNAL(S): Increment the value of S by one.
  • WAIT and SIGNAL are atomic (indivisible). Two
    processes both executing WAIT operations on the
    same semaphore cannot interfere with each other
    and cannot fail during the execution of a
    semaphore operation (see the sketch below)
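
As a concrete counterpart (a minimal sketch, not from the slides), POSIX
exposes exactly these two operations as sem_wait() and sem_post():

    #include <semaphore.h>
    #include <stdio.h>

    int main(void)
    {
        sem_t S;
        sem_init(&S, 0, 1);   /* initialise S to 1 (0 = not shared between processes) */

        sem_wait(&S);         /* WAIT(S): S > 0, so decrement and continue */
        /* ... guarded action or critical section ... */
        sem_post(&S);         /* SIGNAL(S): increment S, possibly waking a waiter */

        int value;
        sem_getvalue(&S, &value);
        printf("S = %d\n", value);    /* back to 1 */

        sem_destroy(&S);
        return 0;
    }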

37
Binary and quantity semaphores
  • A general semaphore is a non-negative integer;
    its value can rise to any supported positive
    number
  • A binary semaphore only takes the values 0 and 1;
    the signalling of a semaphore which has the value
    1 has no effect, the semaphore retains the value
    1
  • With an integer (quantity) semaphore the amount
    to be decremented by WAIT (and incremented by
    SIGNAL) is given as a parameter, e.g. WAIT(S, i)

38
Definition of boolean semaphores
  • A boolean semaphore is a boolean variable
    upon which only the following three operations
    are allowed and no others (no direct
    assignment to, or use of, the value is
    allowed). All operations are mutually exclusive.
  • init(s, b)
  • This initialises the semaphore s to the boolean
    value b.
  • P(s)
  • This procedure has the following effect:
  • WHILE s = 0 DO run other processes OD
  • s := 0
  • Mutual exclusion is suspended while in the body
    of the wait loop (i.e. when other processes are
    being run).
  • V(s)
  • This procedure sets s to 1.
39
Definition of integer semaphores
  • An integer semaphore is an integer variable
    upon which only the following three operations
    are allowed.
  • init(s, i)
  • This initialises the semaphore s to the
    non-negative integer value i.
  • P(s)
  • This procedure has the following effect:
  • WHILE s = 0 DO run other processes OD
  • s := s - 1
  • Mutual exclusion is suspended while in the body
    of the wait loop (i.e. when other processes are
    being run).
  • V(s)
  • This procedure adds one to the value of s.

40
Condition synchronisation
var consyn : semaphore  (* init 0 *)

process P1;  (* waiting process *)
  statement X;
  wait (consyn);
  statement Y;
end P1;

process P2;  (* signalling process *)
  statement A;
  signal (consyn);
  statement B;
end P2;
41
Mutual Exclusion
(* mutual exclusion *)
var mutex : semaphore  (* initially 1 *)
42
The Bounded Buffer
procedure Append(I : Integer) is
begin
  Wait(Space_Available);
  Wait(Mutex);
  Buf(Top) := I;
  Top := Top + 1;
  Signal(Mutex);
  Signal(Item_Available);
end Append;

procedure Take(I : out Integer) is
begin
  Wait(Item_Available);
  Wait(Mutex);
  I := Buf(Base);
  Base := Base + 1;
  Signal(Mutex);
  Signal(Space_Available);
end Take;
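
A runnable C sketch of the same idea (illustrative only), using POSIX
counting semaphores for Space_Available/Item_Available and a binary
semaphore as the mutex:

    #include <semaphore.h>

    #define SIZE 8

    static int buf[SIZE];
    static int top = 0, base = 0;

    static sem_t space_available;   /* counts free slots, initialised to SIZE */
    static sem_t item_available;    /* counts filled slots, initialised to 0 */
    static sem_t mutex;             /* binary semaphore protecting the buffer */

    void buffer_init(void)
    {
        sem_init(&space_available, 0, SIZE);
        sem_init(&item_available, 0, 0);
        sem_init(&mutex, 0, 1);
    }

    void append(int i)
    {
        sem_wait(&space_available);     /* block if the buffer is full */
        sem_wait(&mutex);
        buf[top] = i;
        top = (top + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&item_available);      /* one more item for consumers */
    }

    int take(void)
    {
        sem_wait(&item_available);      /* block if the buffer is empty */
        sem_wait(&mutex);
        int i = buf[base];
        base = (base + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&space_available);     /* one more free slot */
        return i;
    }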
43
Producer-consumer (single buffering)
  • Producer:
    WHILE TRUE DO
      compute(data)
      Wait(write)
      buffer := data
      Signal(read)
    OD
  • Consumer:
    WHILE TRUE DO
      Wait(read)
      data := buffer
      Signal(write)
      use(data)
    OD
  • Shared: write : boolean semaphore (initially 1)
            read : boolean semaphore (initially 0)
            buffer : anytype
  • Local: data : anytype

44
Producer-consumer (double buffering)
  • Producer:
    b := 0
    WHILE TRUE DO
      compute(data)
      Wait(write[b])
      buffer[b] := data
      Signal(read[b])
      b := (b + 1) MOD 2
    OD
  • Consumer:
    b := 0
    WHILE TRUE DO
      P(read[b])
      data := buffer[b]
      V(write[b])
      use(data)
      b := (b + 1) MOD 2
    OD
  • Shared: write : ARRAY 0..1 OF semaphore (initially both 1)
            read : ARRAY 0..1 OF semaphore (initially both 0)
            buffer : ARRAY 0..1 OF anytype
  • Local: b : 0..1
           data : anytype

45
Deadlock
  • Two processes are deadlocked if each is holding a
    resource while waiting for a resource held by the
    other

type Sem is ...;
X : Sem := 1;
Y : Sem := 1;

task B;
task body B is
begin
  ...
  Wait(Y);
  Wait(X);
  ...
end B;

task A;
task body A is
begin
  ...
  Wait(X);
  Wait(Y);
  ...
end A;
46
Criticisms of semaphores
  • Semaphores are an elegant low-level
    synchronisation primitive; however, their use is
    error-prone
  • If a semaphore operation is omitted or misplaced,
    the entire program collapses. Mutual exclusion may
    not be assured and deadlock may appear just when
    the software is dealing with a rare but critical
    event
  • A more structured synchronisation primitive is
    required
  • No high-level concurrent programming language
    relies entirely on semaphores; they are important
    historically

47
Monitors
  • Monitors provide encapsulation and efficient
    condition synchronisation over shared resources
  • The critical regions are written as procedures
    and are encapsulated together into a single
    module
  • All variables that must be accessed under mutual
    exclusion are hidden; all procedure calls into
    the module are guaranteed to be mutually
    exclusive
  • Only the operations are visible outside the
    monitor

48
The Bounded Buffer
monitor buffer;
  export append, take;

  var BUF : array [ . . . ] of integer;
      top, base : 0..size-1;
      NumberInBuffer : integer;
      spaceavailable, itemavailable : condition;

  procedure append (I : integer);
  begin
    if NumberInBuffer = size then
      wait(spaceavailable);
    end if;
    BUF[top] := I;
    NumberInBuffer := NumberInBuffer + 1;
    top := (top + 1) mod size;
    signal(itemavailable);
  end append;
49
The Bounded Buffer
  • If a process calls take when there is nothing in
    the buffer then it will become suspended on
    itemavailable.

  procedure take (var I : integer);
  begin
    if NumberInBuffer = 0 then
      wait(itemavailable);
    end if;
    I := BUF[base];
    base := (base + 1) mod size;
    NumberInBuffer := NumberInBuffer - 1;
    signal(spaceavailable);
  end take;

begin (* initialisation *)
  NumberInBuffer := 0;
  top := 0;
  base := 0;
end;
  • A process appending an item will, however, signal
    this suspended process when an item does become
    available.
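
A C sketch of the same monitor idea (illustrative only), using a pthread
mutex as the implicit monitor lock and condition variables in place of
itemavailable/spaceavailable (the while loops guard against spurious
wake-ups):

    #include <pthread.h>

    #define SIZE 8

    static int buf[SIZE];
    static int top = 0, base = 0, number_in_buffer = 0;

    static pthread_mutex_t monitor = PTHREAD_MUTEX_INITIALIZER;  /* monitor lock */
    static pthread_cond_t  space_available = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  item_available  = PTHREAD_COND_INITIALIZER;

    void append(int i)
    {
        pthread_mutex_lock(&monitor);               /* enter the monitor */
        while (number_in_buffer == SIZE)            /* wait(spaceavailable) */
            pthread_cond_wait(&space_available, &monitor);
        buf[top] = i;
        top = (top + 1) % SIZE;
        number_in_buffer++;
        pthread_cond_signal(&item_available);       /* signal(itemavailable) */
        pthread_mutex_unlock(&monitor);             /* leave the monitor */
    }

    int take(void)
    {
        pthread_mutex_lock(&monitor);
        while (number_in_buffer == 0)               /* wait(itemavailable) */
            pthread_cond_wait(&item_available, &monitor);
        int i = buf[base];
        base = (base + 1) % SIZE;
        number_in_buffer--;
        pthread_cond_signal(&space_available);      /* signal(spaceavailable) */
        pthread_mutex_unlock(&monitor);
        return i;
    }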

50
Criticisms of Monitors
  • The monitor gives a structured and elegant
    solution to mutual exclusion problems such as the
    bounded buffer
  • It does not, however, deal well with condition
    synchronisation, requiring low-level condition
    variables

51
Mutual exclusion
  • shared data
  • resides in the same address space
  • global variables, pointers, buffers, lists
  • access through critical sections
  • intertask communication using shared data
  • concurrent accesses must be controlled to avoid
    corrupting the shared data -> mutual exclusion
  • mutual exclusion techniques (see the sketch below)
  • disabling interrupts
  • disabling the scheduler
  • test-and-set operation
  • semaphores, monitors
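
A sketch of the test-and-set technique (illustrative only), using C11's
atomic_flag as the hardware-supported test-and-set primitive to build a
simple spinlock:

    #include <stdatomic.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

    void acquire(void)
    {
        /* Test-and-set: atomically set the flag and return its old value.
           Spin (busy-wait) while another task already holds the lock. */
        while (atomic_flag_test_and_set(&lock))
            ;                                     /* busy waiting */
    }

    void release(void)
    {
        atomic_flag_clear(&lock);                 /* open the lock again */
    }

    /* Usage:
         acquire();
         ... critical section on the shared data ...
         release();                                                        */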

52
Memory management
  • Keeps track of memory
  • Identifies programs loaded into memory
  • Amount of space each program uses
  • Available remaining space
  • Prevents programs from reading and writing memory
    outside of their allocated space
  • Maintains queues of waiting programs
  • Allocates memory to programs that are next to be
    loaded
  • De-allocates a program's memory space upon
    program completion

53
Storage Hierarchy
  • Main Memory
  • Cache on chip (internal cache)
  • External cache memory
  • RAM
  • Secondary Storage
  • Hard disk
  • Tertiary Storage
  • CD-ROM
  • Floppy
  • Tape
  • Going up the hierarchy (toward the on-chip cache),
    access time decreases, capacity decreases, and cost
    increases.
54
Important Memory Terms
55
MEMORY MANAGEMENT
Definitions
  • The concept of a logical address space that is
    bound to a separate physical address space is
    central to proper memory management.
  • Logical address: generated by the program; also
    referred to as a virtual address
  • Physical address: the address seen by the memory
    unit
  • Logical and physical addresses are the same in
    compile-time and load-time address-binding
    schemes; logical (virtual) and physical addresses
    differ in the execution-time address-binding scheme

56
Memory Management terms (cont.)
  • Relocatable
  • Means that the program image can reside anywhere
    in physical memory.
  •  
  • Binding
  • Programs need real memory in which to reside.
    When is the location of that real memory
    determined?
  • This is called mapping logical to physical
    addresses.
  • This binding can be done at compile/link time.
    Converts symbolic to relocatable. Data used
    within compiled source is offset within object
    module.
  • Compiler
  • If it's known where the program will reside, then
    absolute code is generated. Otherwise the compiler
    produces relocatable code.
  • Load
  • Binds relocatable to physical. Can find best
    physical location.
  • Execution
  • The code can be moved around during execution.
    Means flexible virtual mapping.

57
Binding Logical To Physical
MEMORY MANAGEMENT
Binding can be done at compile/link time (converts
symbolic to relocatable addresses), at load time
(binds relocatable to physical addresses), or at run
time (code can be moved around during execution).
[Diagram: Source -> Compiler -> Object (+ other
objects) -> Linker -> Executable (+ libraries) ->
Loader -> In-memory Image]
58
Memory Management - history
59
Memory management History (cont.)
60
Memory Management history (cont. )
61
MEMORY MANAGEMENT
SINGLE PARTITION ALLOCATION
  • BARE MACHINE
  • No protection, no overhead.
  • This is the simplest form of memory management.
  • Used by hardware diagnostics, by system boot
    code, and by real-time/dedicated systems.
  • logical = physical
  • The user can have complete control. Commensurately,
    the operating system has none.
  • DEFINITION OF PARTITIONS
  • Division of physical memory into fixed sized
    regions. (Allows address spaces to be distinct:
    one user can't muck with another user, or the
    system.)
  • The number of partitions determines the level of
    multiprogramming. A partition is given to a process
    when it's created or scheduled.

62
Multiprogramming with Fixed Partitions
  • Partition numbers and sizes (equal or unequal)
    are set by operator at system start up based on
    workload statistics
  • Used by IBM's MFT (Multiprogramming with a Fixed
    Number of Tasks) OS for IBM 360 systems a long
    time ago

63
Multiprogramming with Dynamic Partitions
  • Partitions are created at load times

64
Problems with Dynamic Partitions
  • Fragmentation occurs due to creation and deletion
    of segments at load time
  • Fragmentation is eliminated by relocation
  • Relocation has to be fast to reduce overhead
  • Allocation of free memory is done by a placement
    algorithm

65
How to Keep Track of Unused Memory
  • Bit Maps
  • Memory is divided into allocation units of fixed
    size. A bit map is kept, with 0 bits for free
    units and 1 bits for allocated units

66
How to Keep Track of Unused Memory (Cont.)
  • Linked Lists

67
MEMORY MANAGEMENT
SINGLE PARTITION ALLOCATION
[Diagram: the CPU generates a logical address; it is
compared with the limit register (if not less, an
addressing error is raised), then the relocation
register is added to form the physical address sent
to memory.]
68
CONTIGUOUS ALLOCATION
MEMORY MANAGEMENT
  • DYNAMIC STORAGE
  •  
  • (Variable sized holes in memory allocated on
    need.)
  • Operating System keeps table of this memory -
    space allocated based on table.
  • Adjacent freed space merged to get largest holes
    - buddy system.
  • ALLOCATION PRODUCES HOLES

[Diagram: successive snapshots of memory holding the
OS and processes 1-3; process 2 terminates and leaves
a hole, then process 4 starts in part of that hole,
showing how allocation produces holes.]
69
MEMORY MANAGEMENT
CONTIGUOUS ALLOCATION
  • HOW DO WE ALLOCATE MEMORY TO NEW PROCESSES?
  • First fit - allocate the first hole that's big
    enough.
  • Best fit - allocate the smallest hole that's big
    enough.
  • Worst fit - allocate the largest hole.
  • Avoid small holes (external fragmentation). This
    occurs when there are many small pieces of free
    memory.
  • What should be the minimum size allocated, and in
    what chunk size should memory be handed out?
  • We also want to avoid internal fragmentation. This
    is when memory is handed out in some fixed way
    (a power of 2 for instance) and the requesting
    program doesn't use it all.

70
MEMORY MANAGEMENT
COMPACTION
Trying to move free memory into one large block.
Only possible if programs are linked with dynamic
relocation. Swapping: if using static relocation,
code/data must return to the same place; but if
dynamic, a program can re-enter at a more
advantageous memory location.
[Diagram: memory snapshots of the OS and processes
P1, P2, P3 before and after compaction, with the free
space gathered into one block.]
71
MEMORY MANAGEMENT
PAGING
New Concept!!
  • The logical address space of a process can be
    noncontiguous; a process is allocated physical
    memory whenever that memory is available and the
    program needs it.
  • Divide physical memory into fixed-sized blocks
    called frames (size is power of 2, between 512
    bytes and 8192 bytes).
  • Divide logical memory into blocks of same size
    called pages.
  • Keep track of all free frames.
  • To run a program of size n pages, need to find n
    free frames and load program.
  • Set up a page table to translate logical to
    physical addresses.

72
MEMORY MANAGEMENT
PAGING
Address Translation Scheme
  • The address generated by the CPU is divided into:
  • Page number (p): used as an index into a page
    table which contains the base address of each page
    in physical memory.
  • Page offset (d): combined with the base address to
    define the physical memory address sent to the
    memory unit (see the sketch below).

[Diagram: a logical address split into page number p
and offset d]
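
A small sketch of this split (illustrative values: 4 KB pages and a
made-up page table lookup), showing how a virtual address is decomposed
into p and d and recombined with a frame base address:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE   4096u                  /* 2^12 bytes per page */
    #define OFFSET_BITS 12u

    int main(void)
    {
        uint32_t vaddr = 0x00012ABCu;          /* example virtual address */

        uint32_t p = vaddr >> OFFSET_BITS;     /* page number: index into the page table */
        uint32_t d = vaddr & (PAGE_SIZE - 1);  /* offset within the page */

        /* Hypothetical page table lookup: suppose page p maps to frame 7. */
        uint32_t frame = 7;
        uint32_t paddr = (frame << OFFSET_BITS) | d;

        printf("p = %u, d = 0x%03X, physical = 0x%08X\n", p, d, paddr);
        return 0;
    }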
73
MEMORY MANAGEMENT
PAGING
Permits a program's memory to be physically
noncontiguous so it can be allocated from
wherever memory is available. This avoids
fragmentation and compaction.
Frames = physical blocks; Pages = logical blocks
The size of frames/pages is defined by hardware
(a power of 2, to ease calculations)
HARDWARE: An address is determined by
  page number (index into table) + offset
  ---> mapping into --->
  base address (from table) + offset.
74
MEMORY MANAGEMENT
PAGING
MULTILEVEL PAGE TABLE: a means of using page
tables for large address spaces.
75
Virtual Memory (VM)
[Diagram: the virtual memory image of a process on
disk; virtual addresses are mapped (translated) onto
the real memory of the system.]
76
Virtual Memory (VM)
  • VM is conceptual
  • It is constructed on disk
  • Size of VM is not limited (usually larger than
    real memory)
  • All process addresses refer to the VM image
  • When the process executes, all VM addresses are
    mapped onto real memory

77
(Pure) Paging
  • Virtual and real memory are divided into fixed
    sized pages
  • Programs are divided into pages
  • A process address (both in virtual and real memory)
    has two components: a page number and an offset
    within the page

78
Paging (Cont.)
  • When process pages are transferred from VM to
    real memory, page numbers must be mapped from
    virtual to real memory addresses
  • This mapping is done by software and hardware
  • When the process is started only the first page
    (main) is loaded. Other pages are loaded on demand

79
The relation between virtual addresses and
physical memory addresses
  • 64K Virtual Memory
  • 32K Real Memory

80
Page Tables
  • The index into the page table is the virtual page number

81
Page Table Entry Fields
  • Validity bit: set when the page is in memory
  • Reference bit: set by the hardware whenever the
    page is referenced
  • Modified bit: set whenever the page is modified
  • Page-protection bits: set the access rights (e.g.,
    read, write restrictions)

82
Address Mapping in Paging
[Diagram: a virtual address is translated through the
page map table into a real memory address]
83
Address Mapping in Paging
  • During the execution every page reference is
    checked against the page map table
  • If the validity bit is set (i.e., the page is in
    memory) execution continues
  • If the page is not in memory a page fault
    (interrupt - trap) occurs and the page is fetched
    into memory
  • If the memory is full, pages are written back
    using a page replacement algorithm

84
Memory Management Problems Revisited (due to
paging)
  • Unused (wasted) memory due to fragmentation
  • Memory may contain parts of program which are not
    used during a run
  • Virtual memory contents are loaded into memory on
    demand
  • Process size is limited with the size of physical
    memory
  • Process size can be larger than real memory
  • Program does not occupy contiguous locations in
    memory (virtual pages are scattered in memory)
  • HIGH OVERHEAD FOR HARD REAL TIME SYSTEMS

85
Segmentation
  • Pages are fixed in size, segments are variable
    sized
  • A segment can be a logical entity such as
  • Main program
  • Some routines
  • Data of program
  • File
  • Stack

86
Segmentation (Cont.)
  • Process addresses are now of the form (segment
    number, offset within the segment)
  • The Segment Map Table has one entry for each segment
    and each entry consists of
  • Segment number
  • Physical segment starting address
  • Segment length

87
Segmentation with Paging
  • Segmentation in virtual memory, paging in real
    memory
  • A segment is composed of pages
  • An address has three components
  • The real memory contains only the demanded pages
    of a segment, not the full segment

88
Addressing in Segmentation with Paging
89
How Big is a Page Table?
  • Consider a full 2^32 byte (4 GB) address space
  • Assume 4096-byte (2^12 byte) pages
  • 4 bytes per page table entry
  • The page table has 2^32 / 2^12 (= 2^20) entries
    (one for each page)
  • The page table size would be 2^22 bytes (or 4
    megabytes)

90
Problems with Direct Mapping?
  • Although a page table is of variable length,
    depending on the size of the process, we cannot
    keep it in registers
  • The page table must be in memory for fast access
  • Since a page table can be very large (4 MB), page
    tables are stored in virtual memory and are
    subject to paging like process pages

91
How to Solve?
  • Two-level Lookup (Intel)
  • Inverted Page Tables
  • Translation Lookaside Buffers

92
Two-Level Lookup
93
Two-Level Lookup (Cont.)
94
File System
  • The collection of algorithms and data structures
    which perform the translation from logical file
    operations (system calls) to actual physical
    storage of information

95
Objectives of a File System
  • Provide storage and manipulation of data
  • Guarantee consistency of data and minimise errors
  • Optimise performance (system and user)
  • Eliminate data loss (data destruction)
  • Support variety of I/O devices
  • Provide a standard user interface
  • Support multiple users

96
User Requirements
  • Access files using a symbolic name
  • Capability to create, delete and change files
  • Controlled access to system files and other users'
    files
  • Capability of restructuring files
  • Capability to move data between files
  • Backup and recovery of files

97
Files
  • Naming
  • Name formation
  • Extensions (Some typical extensions are shown
    below)

98
Files (Cont.)
  • Structuring
  • (a) Byte sequence (as in DOS, Windows, UNIX)
  • (b) Record sequence (as in old systems)
  • (c) Tree structure

99
Files (Cont.)
  • File types
  • Regular (ASCII, binary)
  • Directories
  • Character special files
  • Block special files
  • File access
  • Sequential access
  • Random access

100
  • File attributes
  • Read, write, execute, archive, hidden, system
    etc.
  • Creation, last access, last modification

101
File operations
  • Create
  • Delete
  • Open
  • Close
  • Read
  • Write
  • Append
  • Seek
  • Get attributes
  • Set Attributes
  • Rename

102
Directories
  • Where to store attributes
  • In directory entry (DOS, Windows)
  • In a separate data structure (UNIX)
  • Path names
  • Absolute path name
  • Relative path name
  • Working (current) directory
  • Operations
  • Create, delete, rename, open directory, close
    directory, read directory, link (mount), unlink

103
Directories Files (UNIX)
  • Working directory: d2
  • Absolute path to file f2: /d1/d2/f2
  • Relative path to file f2: f2

104
Physical Disk Space Management
Sector
Track
  • Each platter is composed of sectors, or physical
    blocks, which are laid out along concentric tracks
  • Sectors are at least 512 bytes in size
  • The sectors under the heads that can be accessed
    without a head movement form a cylinder

105
File System Implementation
  • Contiguous allocation
  • Linked list allocation
  • Linked list allocation using an index (DOS file
    allocation table - FAT)
  • i-nodes (UNIX)

106
Contiguous Allocation
  • The file is stored as a contiguous block of
    data allocated at file creation

(a) Contiguous allocation of disk space for 7
files (b) State of the disk after files D and E
have been removed
107
Contiguous Allocation (Cont.)
  • FAT (file allocation table) contains file name,
    start block, length
  • Advantages
  • Simple to implement (start block + length is
    enough to define a file)
  • Fast access as blocks follow each other
  • Disadvantages
  • Fragmentation
  • Re-allocation (compaction)

108
Linked List Allocation
  • The file is stored as a linked list of blocks

109
Linked List Allocation (Cont.)
  • Each block contains a pointer to the next block
  • FAT (file allocation table) contains file name,
    first block address
  • Advantages
  • Fragmentation is eliminated
  • Disadvantages
  • Random access is very slow as links have to be
    followed

110
Linked list allocation using an index (DOS FAT)
First block address is in directory entry
111
Linked list allocation using an index (Cont.)
  • The DOS (Windows) FAT is arranged this way
  • All block pointers are in the FAT, so they don't
    take up space in the actual blocks
  • Random access is faster since the FAT is always in
    memory
  • The 16-bit DOS FAT length is (65536 + 2) * 2 = 131076
    bytes

112
Problem
  • A 16-bit DOS FAT can only accommodate 65536
    pointers (i.e., a maximum of a 64 MB disk)
  • How can we handle large disks such as a 4 GB
    disk?

Clustering
113
i (index)-nodes (UNIX)
114
i-nodes (Cont.)
  • Assume each block is 1 KB in size and 32 bits (4
    bytes) are used as block numbers
  • Each indirect block holds 256 block numbers
  • First 10 blocks: file size < 10 KB
  • Single indirect: file size < 256 + 10 = 266 KB
  • Double indirect: file size < 256*256 + 266 =
    65802 KB (about 64.26 MB)
  • Triple indirect: file size < 256*256*256 +
    65802 = 16843018 KB (about 16 GB)

115
Input Output System
  • I/O hardware (classification, device drivers)
  • I/O techniques (programmed, interrupt driven,
    DMA)
  • Structuring I/O software
  • Disks (performance, arm scheduling, common disk
    errors)
  • RAID configurations

116
Classification of I/O Devices
  • Block devices
  • Information is stored in fixed size blocks
  • Block sizes range from 512-32768 bytes
  • I/O is done by reading/writing blocks
  • Hard disks, floppies, CD ROMS, tapes are in this
    category
  • Character devices
  • I/O is done as characters (i.e., no blocking)
  • Terminals, printers, mouse, joysticks are in this
    category

117
  • Some typical devices and data rates

118
Device Controllers
  • A controller is an electronic card (PCs) or a
    unit (mainframes) which performs blocking
    (assembling a serial bit stream into blocks),
    analog signal generation (to move the disk arm,
    to drive CRT tubes in screens), and execution of
    I/O commands

119
I/O Techniques
  • Programmed I/O
  • Interrupt-driven I/O
  • Direct memory access (DMA)

120
Programmed I/O
  • The processor issues an I/O command on behalf of
    a process to an I/O module
  • The process busy-waits for the operation to be
    completed before proceeding

121
Interrupt-driven I/O
  • Processor issues an I/O command on behalf of a
    process
  • Process is suspended and the I/O starts
  • Processor may execute another process
  • The controller reads a block from the drive serially,
    bit by bit, into the controller's internal buffer. A
    checksum is computed to verify that there were no
    reading errors
  • When the I/O is finished, the processor is
    interrupted to notify it that the I/O is over
  • The OS reads the controller's buffer a byte (or word)
    at a time into a memory buffer.

122
Direct Memory Access (DMA)
  • A DMA module controls the exchange of data
    between main memory and an I/O device
  • The processor sends a request for the transfer of
    a block of data to the DMA module (block address,
    memory address and number of bytes to transfer)
    and continues with other work
  • DMA module interrupts the processor when the
    entire block has been transferred
  • When the OS takes over, it does not have to copy the
    disk block to memory; it is already there.

123
DMA (Cont.)
  • DMA unit is capable of transferring data straight
    from memory to the I/O device
  • Cycle Stealing DMA unit makes the CPU unable to
    use the bus until the DMA unit has finished
  • Instruction execution cycle is suspended, NOT
    interrupted
  • DMA needs a smaller number of interrupts (usually
    one, when the I/O is finished, to acknowledge).
    Interrupt-driven I/O may need one interrupt for each
    character if the device has no internal buffers.

124
Direct Memory Access (DMA)
  • Operation of a DMA transfer

125
Structuring I/O Software
126
User-Space I/O Software
  • Library of I/O procedures (i.e., system calls)
    such as
  • bytes_read = read(file_descriptor, buffer,
    bytes_to_be_read)
  • Spooling provides virtual I/O devices

127
Device-Independent I/O Software
  • Uniform interface for device drivers (ie.,
    different devices)
  • Device naming
  • Mapping of symbolic device names to proper device
    drivers
  • Device protection
  • In a multi-user system you cannot let all users
    access all I/O devices

128
Device-Independent I/O Software (Cont.)
  • Provide device independent block size
  • Physical block sizes for different devices may
    differ, so we have to provide the same logical
    block sizes
  • Buffering
  • Storage allocation on block devices such as disks
  • Allocating and releasing dedicated devices such
    as tapes

129
Device-Independent I/O Software (Cont.)
  • Error reporting
  • When a bad block is encountered, the driver
    repeats the I/O request several times and issues
    an error message if the data cannot be recovered

130
Device Drivers
  • One driver per device or device class
  • Device driver
  • Issues I/O commands
  • Checks the status of the I/O device (e.g. the floppy
    drive motor)
  • Queues I/O requests

131
Device Drivers (Cont.)
  • (a) Without a standard driver interface
  • (b) With a standard driver interface

132
Features of commercial RTOS
  • conformance to standards
  • Real-Time POSIX API standard
  • modularity and scalability
  • speed and efficiency
  • context switch time, interrupt latency, semaphore
    get/release latency, etc
  • system calls
  • non-preemptable portions are highly optimized
  • prioritized and schedulable interrupt handling
  • support for real-time scheduling
  • fine clock and timer resolution
  • simple memory management

133
Commercial RTOS
  • LynxOS
  • RTLinux
  • pSOSystem
  • QNX/Neutrino
  • VRTX
  • VxWorks
  • Micrium µC/OS-II
  • AvrX

134
Real-time extensions of Linux
  • major shortcomings of Linux
  • the disabling of interrupts while a task is in a
    critical section
  • the disk driver may disable interrupts for a few
    hundred microseconds at a time
  • scheduling
  • scheduling policies
  • SCHED_FIFO, SCHED_RR: fixed-priority, applicable
    to real-time tasks
  • SCHED_OTHER: runs tasks on a time-sharing basis
  • 100 priority levels
  • a task can determine the maximum and minimum
    priorities associated with a scheduling policy
  • for the round-robin policy, the size of the time
    slices given to a task can be set (see the sketch
    below)
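
A minimal sketch (illustrative only; needs appropriate privileges) of
placing the calling process under the SCHED_FIFO policy through the
Linux/POSIX scheduling interface:

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        struct sched_param param;

        /* Pick a priority between the policy's documented limits. */
        int max = sched_get_priority_max(SCHED_FIFO);
        int min = sched_get_priority_min(SCHED_FIFO);
        param.sched_priority = (max + min) / 2;

        /* 0 = the calling process; on Linux this needs root or CAP_SYS_NICE. */
        if (sched_setscheduler(0, SCHED_FIFO, &param) != 0) {
            perror("sched_setscheduler");
            return 1;
        }

        printf("now running with SCHED_FIFO priority %d\n", param.sched_priority);
        return 0;
    }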

135
Real-time extensions of Linux(cont.)
  • clock and timer resolution
  • the actual resolution of Linux timers: 10 milliseconds
  • UTIME: a high-resolution time service that provides
    microsecond clock and timer granularity
  • threads
  • clone(): creates a process that shares the
    address space of its parent process and specified
    parts of the parent's context
  • LinuxThreads provides most of the POSIX thread
    extension API functions
  • examples
  • KURT: Kansas University Real-Time system
  • RT Linux

136
Real-Time POSIX
  • Overview
  • POSIX: Portable Operating System Interface
  • an API standard
  • POSIX 1003.1 defines the basic functions of a
    Unix OS
  • POSIX thread and real-time extensions
  • POSIX 1003.1b: real-time extension
  • prioritized scheduling, enhanced signals, IPC
    primitives, high-resolution timers, memory
    locking, (a)synchronized I/O, contiguous files,
    etc.
  • POSIX 1003.1c: thread extension
  • creation of threads and management of their
    execution

137
Real-Time POSIX (cont.)
  • Threads
  • basic units of concurrency
  • functions
  • create/initialize/destroy threads
  • manage thread resources
  • schedule execution of threads
  • read/set attributes of a thread
  • priority, scheduling policy, stack size and
    address, etc.
  • Clocks and timers (see the sketch below)
  • time is made visible to the application threads
  • the system may have more than one clock
  • functions
  • get/set the time of a specified clock
  • create/set/cancel/destroy timers (up to 32
    timers)
  • timer resolution: nanoseconds
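
A minimal sketch of the POSIX clock interface (illustrative only), reading
CLOCK_MONOTONIC, whose struct timespec timestamps have nanosecond
resolution:

    #include <time.h>
    #include <stdio.h>

    int main(void)
    {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);   /* get the time of a specified clock */

        /* ... work to be measured ... */

        clock_gettime(CLOCK_MONOTONIC, &t1);

        long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                + (t1.tv_nsec - t0.tv_nsec);
        printf("elapsed: %ld ns\n", ns);       /* timespec fields count seconds and nanoseconds */
        return 0;
    }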

138
Real-Time POSIX (cont.)
  • Scheduling interface
  • support fixed priority scheduling with at least
    32 priority levels
  • a thread may
  • (1) set and get its own priority and priorities
    of other threads
  • (2) choose among FIFO, round-robin and
    implementation-specific policies
  • in principle, it is possible to support the EDF
    or other dynamic priority algorithms, but with
    high implementation overhead
  • different threads within the same process may be
    scheduled according to different scheduling
    policies

139
Real-Time POSIX (cont.)
  • Synchronization
  • semaphores
  • simple, very low overhead
  • unable to control priority inversion
  • mutexes (see the sketch below)
  • support both the priority inheritance and the
    priority ceiling protocols
  • condition variables
  • allow a thread to lock a mutex depending on one
    or more conditions being true
  • a mutex is associated with a condition variable
    which defines the waited-for condition
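
A minimal sketch (illustrative only) of creating a POSIX mutex that uses
the priority inheritance protocol:

    #include <pthread.h>

    pthread_mutex_t rt_mutex;

    int make_pi_mutex(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* PTHREAD_PRIO_INHERIT: a low-priority holder temporarily inherits the
           priority of the highest-priority thread blocked on the mutex.
           (PTHREAD_PRIO_PROTECT would select the priority ceiling protocol.) */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);

        int rc = pthread_mutex_init(&rt_mutex, &attr);
        pthread_mutexattr_destroy(&attr);
        return rc;
    }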

140
Real-Time POSIX (cont.)
  • Interprocess communication
  • messages: prioritized
  • send/receive: can be nonblocking
  • receive notification: no check necessary for
    message arrivals
  • signals
  • primarily for event notification and software
    interrupts
  • at least eight application-defined signals
  • delivered in priority order
  • can carry data
  • blocked signals are queued

141
Real-Time POSIX (cont.)
  • Shared memory and memory locking
  • a process can create a shared memory object
  • in the case of virtual memory, applications can
    control the memory residency of their code and data
    by locking the entire memory or a specified range
    of the address space
  • File I/O
  • synchronized I/O
  • two levels of synchronization: data integrity and
    file integrity
  • asynchronous I/O
  • I/O performed concurrently with CPU processing