FSA, Hierarchical Memory Systems - PowerPoint PPT Presentation

Transcript and Presenter's Notes
1
FSA, Hierarchical Memory Systems
  • Prof. Sin-Min Lee
  • Department of Computer Science

CS147 Lecture 12
2
Implementing FSM with No Inputs Using D, T, and
JK Flip Flops
  • Convert the diagram into a chart

3
Implementing FSM with No Inputs Using D, T, and
JK Flip Flops Cont.
  • For D and T Flip Flops

4
Implementing FSM with No Inputs Using D, T, and
JK Flip Flops Cont.
  • For JK Flip Flop

5
Implementing FSM with No Inputs Using D, T, and
JK Flip Flops Cont.
  • Final Implementation

10
How can we create a flip-flop using another
flip-flop?
  • Say we have a flip-flop BG with the following
    properties
  • Let's try to implement this flip-flop using a T
    flip-flop

B G | Q(next)
0 0 | Q' (toggle)
0 1 | Q  (hold)
1 0 | 1  (set)
1 1 | 0  (reset)
11
Step 1: Create Table

B G | Q | Q(next) | T
0 0 | 0 |    1    | 1
0 0 | 1 |    0    | 1
0 1 | 0 |    0    | 0
0 1 | 1 |    1    | 0
1 0 | 0 |    1    | 1
1 0 | 1 |    1    | 0
1 1 | 0 |    0    | 0
1 1 | 1 |    0    | 1
  • The first step is to draw a table with the created
    flip-flop's inputs first (in this case B and G), the
    present state Q, the next state Q(next), and the
    flip-flop used to build it (in this case T)
  • Look at Q and Q(next) to determine the value of T
    (T = 1 exactly when the state must toggle)

12
Step 2: Karnaugh Map

B G | Q | Q(next) | T
0 0 | 0 |    1    | 1
0 0 | 1 |    0    | 1
0 1 | 0 |    0    | 0
0 1 | 1 |    1    | 0
1 0 | 0 |    1    | 1
1 0 | 1 |    1    | 0
1 1 | 0 |    0    | 0
1 1 | 1 |    0    | 1
  • Draw a Karnaugh map, based on when T is a 1

         Q=0   Q=1
BG=00     1     1
BG=01     0     0
BG=11     0     1
BG=10     1     0

T = B'G' + BGQ + G'Q'
13
Step 3: Draw Diagram

T = B'G' + BGQ + G'Q'

(Circuit diagram: gates on inputs B and G, with feedback from Q and Q',
form the T input of the flip-flop.)
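The derivation above can be sanity-checked by simulation. The sketch below (Python; written for this transcript, not taken from the slides) drives a T flip-flop with the excitation T = B'G' + BGQ + G'Q' and verifies that the next state matches the BG specification (00: toggle, 01: hold, 10: set, 11: reset).

```python
# Sanity check (not from the slides): drive a T flip-flop with the
# excitation derived from the Karnaugh map and compare against the
# BG flip-flop's specification.

def t_input(b, g, q):
    """Excitation equation from Step 2: T = B'G' + BGQ + G'Q'."""
    return ((not b) and (not g)) or (b and g and q == 1) or ((not g) and q == 0)

def bg_next(b, g, q):
    """Next state: a T flip-flop toggles Q exactly when T = 1."""
    return 1 - q if t_input(b, g, q) else q

# BG specification: 00 -> Q' (toggle), 01 -> Q (hold), 10 -> 1 (set), 11 -> 0 (reset)
for q in (0, 1):
    assert bg_next(0, 0, q) == 1 - q  # toggle
    assert bg_next(0, 1, q) == q      # hold
    assert bg_next(1, 0, q) == 1      # set
    assert bg_next(1, 1, q) == 0      # reset
print("T-based implementation matches the BG specification")
```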
15
The Root of the Problem: Economics
  • Fast memory is possible, but to run at full
    speed, it needs to be located on the same chip as
    the CPU
  • Very expensive
  • Limits the size of the memory
  • Do we choose:
  • A small amount of fast memory?
  • A large amount of slow memory?

16
Memory Hierarchy Design (2)
  • It is a tradeoff between size, speed, and cost, and
    it exploits the principle of locality.
  • Register
  • Fastest memory element, but small storage and very
    expensive
  • Cache
  • Fast and small compared to main memory; acts as a
    buffer between the CPU and main memory; it
    contains the most recently used memory locations
    (address and contents are recorded here)
  • Main memory is the RAM of the system
  • Disk storage - HDD

17
Memory Hierarchy Design (3)
  • Comparison between different types of memory

Type       Size          Speed   Cost
Register   32 - 256 B    2 ns    -
Cache      32 KB - 4 MB  4 ns    $100/MB
Memory     128 MB        60 ns   $1.50/MB
HDD        20 GB         8 ms    $0.05/MB

larger, slower, cheaper (moving down the table)
24
Memory Hierarchy
  • Can only do useful work at the top
  • 90-10 rule: 90% of time is spent in 10% of the
    program
  • Take advantage of locality
  • temporal locality
  • keep recently accessed memory locations in cache
  • spatial locality
  • keep memory locations near recently accessed
    memory locations in cache

30
The connection between the CPU and the cache is very
fast; the connection between the CPU and main memory
is slower.
38
The Cache Hit Ratio
  • How often is a word found in the cache?
  • Suppose a word is accessed k times in a short
    interval
  • 1 reference to main memory
  • (k - 1) references to the cache
  • The cache hit ratio h is then h = (k - 1) / k
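A small sketch (Python; illustrative, not from the slides) evaluates this ratio, h = (k - 1)/k, for a few access counts; h approaches 1 as k grows, which is why repeatedly accessed words are cheap on average.

```python
# Illustration: one compulsory miss plus (k - 1) cache hits out of k accesses.

def hit_ratio(k):
    """Cache hit ratio for a word accessed k times: h = (k - 1) / k."""
    return (k - 1) / k

for k in (1, 2, 10, 100):
    print(k, hit_ratio(k))
# h(1) = 0.0, h(2) = 0.5, h(10) = 0.9, h(100) = 0.99
```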

39
Reasons why we use cache
  • Cache memory is made of STATIC RAM, a
    transistor-based RAM that has very low access
    times (fast)
  • STATIC RAM is, however, very bulky and very
    expensive
  • Main memory is made of DYNAMIC RAM, a
    capacitor-based RAM that has very high access
    times because it has to be constantly refreshed
    (slow)
  • DYNAMIC RAM is much smaller and cheaper

40
Performance (Speed)
  • Access time
  • Time between presenting the address and getting
    the valid data (memory or other storage)
  • Memory cycle time
  • Some time may be required for the memory to
    recover before the next access
  • cycle time = access time + recovery time
  • Transfer rate
  • rate at which data can be moved
  • for random-access memory, transfer rate =
    1 / cycle time
42
Memory Hierarchy
smallest, fastest, most expensive, most
frequently accessed
  • going down the hierarchy: size increases, speed
    decreases, cost per byte decreases
  • registers
  • in CPU
  • internal
  • may include one or more levels of cache
  • external memory
  • backing store

medium, quick, price varies
largest, slowest, cheapest, least frequently
accessed
48
Replacing Data
  • Initially all valid bits are set to 0
  • As instructions and data are fetched from memory,
    the cache fills and some data need to be
    replaced.
  • Which ones?
  • Direct mapping: the choice is obvious, since each
    address maps to exactly one cache line

49
Replacement Policies for Associative Cache
  1. FIFO - fills from top to bottom and goes back to
    the top. (May store data in physical memory before
    replacing it)
  2. LRU - replaces the least recently used data.
    Requires a counter.
  3. Random

50
Replacement in Set-Associative Cache
  • Which of the n ways within the set should be
    replaced?
  • FIFO
  • Random
  • LRU

Accessed locations are D, E, A
51
Writing Data
  • If the location is in the cache, the cached value,
    and possibly the value in physical memory, must
    be updated.
  • If the location is not in the cache, it may be
    loaded into the cache or not (write-allocate and
    write-no-allocate)
  • Two methodologies
  • Write-through
  • Physical memory always contains the correct value
  • Write-back
  • The value is written to physical memory only when
    it is removed from the cache
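The write-through/write-back distinction can be sketched in a toy model (Python; the class and method names are hypothetical, not from the slides). Both variants use write-allocate; they differ only in when physical memory is updated.

```python
# Illustrative sketch (hypothetical names): the two write policies differ
# in *when* the backing store is made consistent with the cache.

class TinyCache:
    def __init__(self, memory, write_back=False):
        self.memory = memory          # backing store: dict addr -> value
        self.lines = {}               # cached addr -> value
        self.dirty = set()            # addrs modified but not yet written back
        self.write_back = write_back

    def write(self, addr, value):
        self.lines[addr] = value      # write-allocate: load/update the line
        if self.write_back:
            self.dirty.add(addr)      # memory is updated later, on eviction
        else:
            self.memory[addr] = value # write-through: memory always current

    def evict(self, addr):
        if addr in self.dirty:        # write-back: flush dirty data now
            self.memory[addr] = self.lines[addr]
            self.dirty.discard(addr)
        self.lines.pop(addr, None)

mem = {0x10: 0}
wt = TinyCache(dict(mem))             # write-through
wt.write(0x10, 42)
assert wt.memory[0x10] == 42          # memory already correct

wb = TinyCache(dict(mem), write_back=True)
wb.write(0x10, 42)
assert wb.memory[0x10] == 0           # memory stale until eviction
wb.evict(0x10)
assert wb.memory[0x10] == 42
```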

52
Cache Performance
  • Cache hits and cache misses.
  • Hit ratio h is the percentage of memory accesses
    that are served from the cache
  • Average memory access time
  • TM = h * TC + (1 - h) * TP

TC = 10 ns, TP = 60 ns
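A quick check of the formula (Python sketch, using the slide's TC = 10 ns and TP = 60 ns; the hit ratios are the ones worked out on the following slides):

```python
# Average memory access time: TM = h*TC + (1 - h)*TP  (times in ns).

def avg_access_time(h, tc=10.0, tp=60.0):
    return h * tc + (1 - h) * tp

print(round(avg_access_time(7 / 18), 2))   # associative, h = 0.389 -> 40.56 ns
print(round(avg_access_time(3 / 18), 2))   # direct-mapped, h = 0.167 -> 51.67 ns
```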
54
Associative Cache
FIFO: h = 0.389, TM = 40.56 ns
  • Access order
  • A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2
    H7 I6 A0 B0

TC = 10 ns, TP = 60 ns
55
Direct-Mapped Cache
  • Access order
  • A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2
    H7 I6 A0 B0

h = 0.167, TM = 51.67 ns
TC = 10 ns, TP = 60 ns
56
2-Way Set Associative Cache
  • Access order
  • A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2
    H7 I6 A0 B0

LRU: h = 0.389, TM = 40.56 ns
TC = 10 ns, TP = 60 ns
57
Associative Cache (FIFO Replacement Policy)

Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0

Data   A B C A D B E F A C D B G C H I A B
CACHE  A A A A A A A A A A A A A A A I I I
CACHE  - B B B B B B B B B B B B B B B A A
CACHE  - - C C C C C C C C C C C C C C C B
CACHE  - - - - D D D D D D D D D D D D D D
CACHE  - - - - - - E E E E E E E E E E E E
CACHE  - - - - - - - F F F F F F F F F F F
CACHE  - - - - - - - - - - - - G G G G G G
CACHE  - - - - - - - - - - - - - - H H H H
Hit?   - - - * - * - - * * * * - * - - - -

Hit ratio: 7/18
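The table above can be reproduced by a short simulation. This sketch (Python, not from the slides) replays the access sequence through an 8-line fully associative cache with FIFO replacement and confirms the 7/18 hit ratio:

```python
# Replay the slide's access sequence through an 8-line fully associative
# cache with FIFO replacement and count the hits.

from collections import deque

def fifo_hits(accesses, capacity=8):
    cache, order, hits = set(), deque(), 0
    for block in accesses:
        if block in cache:
            hits += 1
        else:
            if len(cache) == capacity:
                cache.discard(order.popleft())  # evict the oldest resident
            cache.add(block)
            order.append(block)
    return hits

# A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
seq = list("ABCADBEFACDBGCHIAB")
assert fifo_hits(seq) == 7   # hit ratio 7/18
```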
58
Two-way Set Associative Cache (LRU Replacement Policy)

Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
(Entries are block-age pairs; age 0 = most recently used within the set.)

Data     A    B    C    A    D    B    E    F    A    C    D    B    G    C    H    I    A    B
CACHE 0  A-0  A-1  A-1  A-0  A-0  A-1  E-0  E-0  E-1  E-1  E-1  B-0  B-0  B-0  B-0  B-0  B-1  B-0
CACHE 0  -    B-0  B-0  B-1  B-1  B-0  B-1  B-1  A-0  A-0  A-0  A-1  A-1  A-1  A-1  A-1  A-0  A-1
CACHE 1  -    -    -    -    D-0  D-0  D-0  D-1  D-1  D-1  D-0  D-0  D-0  D-0  D-0  D-0  D-0  D-0
CACHE 1  -    -    -    -    -    -    -    F-0  F-0  F-0  F-1  F-1  F-1  F-1  F-1  F-1  F-1  F-1
CACHE 2  -    -    C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-1  C-1  C-1
CACHE 2  -    -    -    -    -    -    -    -    -    -    -    -    -    -    -    I-0  I-0  I-0
CACHE 3  -    -    -    -    -    -    -    -    -    -    -    -    G-0  G-0  G-1  G-1  G-1  G-1
CACHE 3  -    -    -    -    -    -    -    -    -    -    -    -    -    -    H-0  H-0  H-0  H-0
Hit?     -    -    -    *    -    *    -    -    -    *    *    -    -    *    -    -    *    *

Hit ratio: 7/18
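The same sequence can be replayed for the two-way set-associative case. This sketch (Python, illustrative; it assumes the digit in each access, e.g. the 2 in C2, gives the index bits, with set = index mod 4) again yields 7 hits out of 18 under LRU:

```python
# 2-way set-associative cache with 4 sets and LRU replacement.
# Assumption (not stated on the slide): set = index digit mod 4.

def two_way_lru_hits(accesses, n_sets=4):
    sets = [[] for _ in range(n_sets)]    # each set ordered LRU -> MRU
    hits = 0
    for block, index in accesses:
        ways = sets[index % n_sets]
        if block in ways:
            hits += 1
            ways.remove(block)            # re-inserted below as MRU
        elif len(ways) == 2:
            ways.pop(0)                   # evict the least recently used way
        ways.append(block)
    return hits

seq = [("A", 0), ("B", 0), ("C", 2), ("A", 0), ("D", 1), ("B", 0),
       ("E", 4), ("F", 5), ("A", 0), ("C", 2), ("D", 1), ("B", 0),
       ("G", 3), ("C", 2), ("H", 7), ("I", 6), ("A", 0), ("B", 0)]
assert two_way_lru_hits(seq) == 7         # hit ratio 7/18
```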
59
Associative Cache with 2-byte line size (FIFO Replacement Policy)

Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
Line pairs: A and J, B and D, C and G, E and F, I and H

Data   A B C A D B E F A C D B G C H I A B
CACHE  A A A A A A A A A A A A A A I I I I
CACHE  J J J J J J J J J J J J J J H H H H
CACHE  - B B B B B B B B B B B B B B B A A
CACHE  - D D D D D D D D D D D D D D D J J
CACHE  - - C C C C C C C C C C C C C C C B
CACHE  - - G G G G G G G G G G G G G G G D
CACHE  - - - - - - E E E E E E E E E E E E
CACHE  - - - - - - - F F F F F F F F F F F
Hit?   - - - * * * - * * * * * * * - * - -

Hit ratio: 11/18
60
Direct-mapped Cache with line size of 2 bytes

Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
Line pairs: A and J, B and D, C and G, E and F, I and H

Data     A B C A D B E F A C D B G C H I A B
CACHE 0  A B B A B B B B A A B B B B B B A B
CACHE 1  J D D J D D D D J J D D D D D D J D
CACHE 2  - - C C C C C C C C C C C C C C C C
CACHE 3  - - G G G G G G G G G G G G G G G G
CACHE 4  - - - - - - E E E E E E E E E E E E
CACHE 5  - - - - - - F F F F F F F F F F F F
CACHE 6  - - - - - - - - - - - - - - I I I I
CACHE 7  - - - - - - - - - - - - - - H H H H
Hit?     - - - - - * - * - * - * * * - * - -

Hit ratio: 7/18
61
Two-way Set Associative Cache with line size of 2 bytes

Access order: A0 B0 C2 A0 D1 B0 E4 F5 A0 C2 D1 B0 G3 C2 H7 I6 A0 B0
Line pairs: A and J, B and D, C and G, E and F, I and H
(Entries are block-age pairs; age 0 = most recently used within the set.)

Data     A    B    C    A    D    B    E    F    A    C    D    B    G    C    H    I    A    B
CACHE 0  A-0  A-1  A-1  A-0  A-1  A-1  E-0  E-0  E-1  B-0  B-0  B-0  B-0  B-0  B-0  B-0  B-1  B-0
CACHE 1  J-0  J-1  J-1  J-0  J-1  J-1  F-0  F-0  F-1  D-0  D-0  D-0  D-0  D-0  D-0  D-0  D-1  D-0
CACHE 0  -    B-0  B-0  B-1  B-0  B-0  B-1  B-1  A-0  A-0  A-1  A-1  A-1  A-1  A-1  A-1  A-0  A-1
CACHE 1  -    D-0  D-0  D-1  D-0  D-0  D-1  D-1  J-0  J-0  J-1  J-1  J-1  J-1  J-1  J-1  J-0  J-1
CACHE 2  -    -    C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-0  C-1  C-1  C-1  C-1
CACHE 3  -    -    G-0  G-0  G-0  G-0  G-0  G-0  G-0  G-0  G-0  G-0  G-0  G-0  G-1  G-1  G-1  G-1
CACHE 2  -    -    -    -    -    -    -    -    -    -    -    -    -    -    I-0  I-0  I-0  I-0
CACHE 3  -    -    -    -    -    -    -    -    -    -    -    -    -    -    H-0  H-0  H-0  H-0

Hit ratio: 12/18
62
Page Replacement - FIFO
  • FIFO is simple to implement
  • When a page comes in, place its id at the end of
    the list
  • Evict the page at the head of the list
  • Might be good? The page to be evicted has been in
    memory the longest time
  • But?
  • Maybe it is being used
  • We just don't know
  • FIFO suffers from Belady's Anomaly: the fault rate
    may increase when there is more physical memory!

63
FIFO vs. Optimal
  • Reference string: the ordered list of pages
    accessed as a process executes
  • Ex. Reference string is A B C A B D A D B C B
  • System has 3 page frames

  • OPTIMAL: A B C A B D A D B C B
  • 5 faults (on A, B, C, D, C); at the final fault,
    toss A or D, since neither is used again

  • FIFO: A B C A B D A D B C B
  • Faulting pages: A B C D A B C
  • 7 faults
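These two counts can be verified with a short simulation (Python sketch, not from the slides): FIFO evicts the oldest resident page, while OPT evicts the page whose next use lies farthest in the future.

```python
# Fault counts for FIFO and OPT on the reference string A B C A B D A D B C B
# with 3 page frames.

from collections import deque

def fifo_faults(refs, frames=3):
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())   # evict the oldest page
            mem.add(p)
            order.append(p)
    return faults

def opt_faults(refs, frames=3):
    mem, faults = set(), 0
    for i, p in enumerate(refs):
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                def next_use(q):
                    try:
                        return refs.index(q, i + 1)
                    except ValueError:
                        return len(refs)       # never used again
                # evict the page whose next use is farthest in the future
                mem.discard(max(mem, key=next_use))
            mem.add(p)
    return faults

refs = list("ABCABDADBCB")
assert fifo_faults(refs) == 7
assert opt_faults(refs) == 5
```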
64
Least Recently Used (LRU)
  • Replace the page that has not been used for the
    longest time

3 page frames; reference string: A B C A B D A D B C

LRU: 5 faults (on A, B, C, D, and the final C)
65
LRU
  • Past experience may indicate future behavior
  • Perfect LRU requires some form of timestamp to be
    associated with a PTE on every memory reference!
  • Counter implementation
  • Every page entry has a counter; every time the
    page is referenced through this entry, copy the
    clock into the counter.
  • When a page needs to be replaced, look at the
    counters to determine which page to evict
  • Stack implementation: keep a stack of page
    numbers in doubly linked form
  • Page referenced: move it to the top
  • No search for replacement
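The stack idea above can be sketched with an ordered dictionary standing in for the doubly linked stack (Python, illustrative): a referenced page moves to the MRU end, so the victim is always at the LRU end and no search is needed.

```python
# "Stack implementation" of LRU: pages kept in recency order, LRU at the
# front, MRU at the back; replacement needs no search.

from collections import OrderedDict

def lru_faults(refs, frames=3):
    stack = OrderedDict()                  # front = LRU, back = MRU
    faults = 0
    for p in refs:
        if p in stack:
            stack.move_to_end(p)           # referenced: move to the top
        else:
            faults += 1
            if len(stack) == frames:
                stack.popitem(last=False)  # evict LRU without searching
            stack[p] = True
    return faults

assert lru_faults(list("ABCABDADBC")) == 5   # matches the slide's 5 faults
```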