Transcript and Presenter's Notes

Title: Hierarchical Memory Systems


1
Hierarchical Memory Systems
  • Prof. Sin-Min Lee
  • Department of Computer Science

CS147 Lecture 12
2-7
(No Transcript)
8
Implementing JK Flip-Flop using only a T Flip-Flop
Note how the areas marked off with a blue box
behave like a T flip-flop, while the area within
the purple box behaves like a D flip-flop.
From this last chart, we can derive the following
chart.
9
Implementing JK Flip-Flop using only a T Flip-Flop
To derive the next chart, we work in reverse,
asking: what is the input into the T (toggle)
function that will result in the output shown in
the previous chart? In this case, the first
column of Q is 0 and our circled value is a 0; a
0 will give this result.
The input that will give us a 1, when Q is 1, is
also 0. Refer back to the T flip-flop chart to
see that on 0 there is no change; 1 toggles.
10
Implementing JK Flip-Flop using only a T Flip-Flop
This is the final Karnaugh map and the associated
equation for T.
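The final K-map and equation live in the slide image; the standard result for this construction is T = JQ' + KQ (toggle when J is asserted while Q = 0, or when K is asserted while Q = 1). Below is a minimal Python sketch of that assumption; the class and function names are illustrative, not from the slides.

```python
class TFlipFlop:
    """T flip-flop: on each clock, toggle Q when T = 1, hold Q when T = 0."""
    def __init__(self, q=0):
        self.q = q

    def clock(self, t):
        self.q ^= t          # Q+ = T xor Q
        return self.q


def jk_from_t(ff, j, k):
    """JK behaviour built on a T flip-flop using T = JQ' + KQ."""
    t = (j & (1 - ff.q)) | (k & ff.q)
    return ff.clock(t)


if __name__ == "__main__":
    ff = TFlipFlop()
    # Expected JK behaviour: 10 sets, 00 holds, 11 toggles, 01 resets.
    for j, k in [(1, 0), (0, 0), (1, 1), (0, 1)]:
        print(f"J={j} K={k} -> Q={jk_from_t(ff, j, k)}")
```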
11
Implementing T Flip-Flop using only a JK Flip-Flop
This time, we are doing the reverse again, asking
what values of J and K will give us the
corresponding values in the T chart above. 00 or
01 will give 0, so we enter 0X. X is our
don't-care value; it can be 0 or 1.
12
Implementing T Flip-Flop using only a JK Flip-Flop
Once we derive all the values, we have to split
this into two, in order to get an equation that
defines J and another that defines K.
13
Implementing T Flip-Flop using only a JK Flip-Flop
Here is the final implementation.
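Working the split K-maps with the don't-cares gives the standard answer J = T and K = T, so the circuit in the slide image should amount to tying the T signal to both the J and K inputs. A hedged sketch of that result:

```python
class JKFlipFlop:
    """JK flip-flop: 00 holds, 01 resets, 10 sets, 11 toggles."""
    def __init__(self, q=0):
        self.q = q

    def clock(self, j, k):
        self.q = (j & (1 - self.q)) | ((1 - k) & self.q)   # Q+ = JQ' + K'Q
        return self.q


def t_from_jk(ff, t):
    """T flip-flop behaviour built on a JK flip-flop by tying J = K = T."""
    return ff.clock(t, t)


if __name__ == "__main__":
    ff = JKFlipFlop()
    for t in (1, 0, 1, 1):
        print(f"T={t} -> Q={t_from_jk(ff, t)}")   # toggles on 1, holds on 0
```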
14
Implementing this FSM using a T Flip-Flop
Using the values from the first chart, we can get
this second chart. Then, we apply the same
reverse method to determine what input values we
would need to arrive at the ones listed in this
second chart.
15
Implementing this FSM using a T Flip-Flop
T = XQ' + X'Q
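T = XQ' + X'Q is simply X XOR Q, and a T flip-flop computes Q+ = T XOR Q, so the next state of this FSM works out to Q+ = X. A minimal sketch of one clock tick, assuming a single state bit Q and input X as in the slides (function names are illustrative):

```python
def t_excitation(x, q):
    """Excitation from the slide's equation: T = XQ' + X'Q (i.e. X xor Q)."""
    return (x & (1 - q)) | ((1 - x) & q)


def next_state(x, q):
    """One clock tick of the FSM realised with a T flip-flop: Q+ = T xor Q."""
    return t_excitation(x, q) ^ q


if __name__ == "__main__":
    for x in (0, 1):
        for q in (0, 1):
            print(f"X={x} Q={q} -> T={t_excitation(x, q)} Q+={next_state(x, q)}")
```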
16
Implementing this FSM using a D Flip-Flop
This time we use the same FSM and same initial
chart, but now derive an equation for D.
17
Implementing this FSM using a D Flip-Flop
Since a D flip-flop is simply a delay (Q+ = D),
the corresponding chart is the same as the
next-state chart.
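Put differently, a D flip-flop's excitation is just the next-state column itself, so no reverse step is needed; for this FSM the same reasoning as above would give D = X. A one-line sketch of that assumption:

```python
def d_excitation(x, q):
    """D flip-flop excitation: D equals the desired next state (here Q+ = X)."""
    return x
```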
18
Implementing this FSM using a D Flip-Flop
Finally, here is our graph.
19
Implementing Flip-Flop
20
How can we create a flip-flop using another
flip-flop?
  • Say we have a flip-flop BG with the following
    properties:
  • Let's try to implement this flip-flop using a T
    flip-flop

BG  Q+
00  Q'
01  Q
10  1
11  0
21
Step 1: Create Table
BG  Q  Q+  T
00  0  1   1
00  1  0   1
01  0  0   0
01  1  1   0
10  0  1   1
10  1  1   0
11  0  0   0
11  1  0   1
  • The first step is to draw a table with the
    flip-flop being created first (in this case BG),
    Q, Q+ (the next state), and the flip-flop we are
    building from (in this case T)
  • Look at Q and Q+ to determine the value of T
    (see the sketch below)
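Step 1 can be checked mechanically: for every (B, G, Q) combination, compute the BG flip-flop's next state Q+ from its definition, then take T = Q XOR Q+, since a T flip-flop must toggle exactly when the state has to change. A small Python sketch (function names are illustrative):

```python
def bg_next(b, g, q):
    """Next state of the BG flip-flop from its definition:
    00 -> Q', 01 -> Q, 10 -> 1, 11 -> 0."""
    return {(0, 0): 1 - q, (0, 1): q, (1, 0): 1, (1, 1): 0}[(b, g)]


def excitation_table():
    """Step 1: rows of (B, G, Q, Q+, T) with T = Q xor Q+."""
    rows = []
    for b in (0, 1):
        for g in (0, 1):
            for q in (0, 1):
                q_next = bg_next(b, g, q)
                rows.append((b, g, q, q_next, q ^ q_next))
    return rows


if __name__ == "__main__":
    print("B G Q Q+ T")
    for row in excitation_table():
        print(*row)
```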

22
Step 2: Karnaugh Map
BG  Q  Q+  T
00  0  1   1
00  1  0   1
01  0  0   0
01  1  1   0
10  0  1   1
10  1  1   0
11  0  0   0
11  1  0   1
  • Draw a Karnaugh Map, based on when T is a 1

        Q
BG      0   1
00      1   1
01      0   0
11      0   1
10      1   0

T = B'G' + BGQ + G'Q'
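The minimised equation can be sanity-checked against the Step 1 table by evaluating it for every input combination; a small verification sketch (helper names are illustrative):

```python
def bg_next(b, g, q):
    """BG flip-flop definition: 00 -> Q', 01 -> Q, 10 -> 1, 11 -> 0."""
    return {(0, 0): 1 - q, (0, 1): q, (1, 0): 1, (1, 1): 0}[(b, g)]


def t_equation(b, g, q):
    """K-map result: T = B'G' + BGQ + G'Q'."""
    return ((1 - b) & (1 - g)) | (b & g & q) | ((1 - g) & (1 - q))


if __name__ == "__main__":
    for b in (0, 1):
        for g in (0, 1):
            for q in (0, 1):
                assert t_equation(b, g, q) == (q ^ bg_next(b, g, q))
    print("T = B'G' + BGQ + G'Q' matches the Step 1 excitation table")
```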
23
Step 3: Draw Diagram
T = B'G' + BGQ + G'Q'
(Circuit diagram: a T flip-flop whose T input is
driven by this equation, with inputs B, G and
outputs Q, Q')
24-29
(No Transcript)
30
The Root of the Problem: Economics
  • Fast memory is possible, but to run at full
    speed, it needs to be located on the same chip as
    the CPU
  • Very expensive
  • Limits the size of the memory
  • Do we choose
  • A small amount of fast memory?
  • A large amount of slow memory?

31
Memory Hierarchy Design (2)
  • It is a tradeoff between size, speed and cost,
    and it exploits the principle of locality.
  • Register
  • Fastest memory element, but small storage and
    very expensive
  • Cache
  • Fast and small compared to main memory; acts as a
    buffer between the CPU and main memory; it
    contains the most recently used memory locations
    (address and contents are recorded here)
  • Main memory is the RAM of the system
  • Disk storage - HDD

32
Memory Hierarchy Design (3)
  • Comparison between different types of memory:

            size           speed    $/MByte
  Register  32 - 256 B     2 ns
  Cache     32 KB - 4 MB   4 ns     $100/MB
  Memory    128 MB         60 ns    $1.50/MB
  HDD       20 GB          8 ms     $0.05/MB

  (larger, slower, cheaper moving down the table)
33-38
(No Transcript)
39
Memory Hierarchy
  • Can only do useful work at the top
  • 90-10 rule: 90% of time is spent on 10% of the
    program
  • Take advantage of locality
  • temporal locality
  • keep recently accessed memory locations in cache
  • spatial locality
  • keep memory locations near recently accessed
    memory locations in cache (see the sketch below)
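As a rough illustration of spatial locality (an example, not from the slides): summing a 2-D array row by row touches consecutive addresses, so each fetched cache line is fully used, while column-by-column traversal jumps across memory. Python's list-of-lists layout blunts the effect compared with C arrays, but the access-pattern idea is the same:

```python
import time


def row_major_sum(matrix):
    """Traverse in memory order: good spatial locality."""
    total = 0
    for row in matrix:
        for value in row:
            total += value
    return total


def column_major_sum(matrix):
    """Traverse column by column: poor spatial locality for row-major storage."""
    total = 0
    for col in range(len(matrix[0])):
        for row in range(len(matrix)):
            total += matrix[row][col]
    return total


if __name__ == "__main__":
    matrix = [[1] * 2000 for _ in range(2000)]
    for fn in (row_major_sum, column_major_sum):
        start = time.perf_counter()
        fn(matrix)
        print(f"{fn.__name__}: {time.perf_counter() - start:.3f} s")
```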

40-44
(No Transcript)
45
The connection between the CPU and cache is very
fast; the connection between the CPU and main
memory is slower.
46-52
(No Transcript)
53
The Cache Hit Ratio
  • How often is a word found in the cache?
  • Suppose a word is accessed k times in a short
    interval:
  • 1 reference to main memory
  • (k-1) references to the cache
  • The cache hit ratio h is then h = (k - 1) / k
    (see the numeric check below)
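A quick numeric check of that ratio (the values of k below are just examples): as k grows, h approaches 1, which is why repeatedly used words are served almost entirely from the cache.

```python
def hit_ratio(k):
    """Cache hit ratio for a word accessed k times:
    1 miss (the first reference goes to main memory) and k - 1 hits."""
    return (k - 1) / k


if __name__ == "__main__":
    for k in (2, 10, 100, 1000):
        print(f"k={k:5d}  h={hit_ratio(k):.3f}")
```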

54
Reasons why we use cache
  • Cache memory is made of STATIC RAM: a
    transistor-based RAM that has very low access
    times (fast)
  • STATIC RAM is, however, very bulky and very
    expensive
  • Main memory is made of DYNAMIC RAM: a
    capacitor-based RAM that has very high access
    times because it has to be constantly refreshed
    (slow)
  • DYNAMIC RAM is much smaller and cheaper

55
Performance (Speed)
  • Access time
  • Time between presenting the address and getting
    the valid data (memory or other storage)
  • Memory cycle time
  • Some time may be required for the memory to
    recover before the next access
  • cycle time = access time + recovery time
  • Transfer rate
  • rate at which data can be moved
  • for random access memory: 1 / cycle time, i.e.
    (cycle time)^-1 (see the worked example below)
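A worked example with hypothetical figures (60 ns access, 10 ns recovery, chosen only for illustration):

```python
# Hypothetical figures, for illustration only.
access_time_ns = 60        # time from presenting the address to valid data
recovery_time_ns = 10      # time the memory needs before the next access

cycle_time_ns = access_time_ns + recovery_time_ns      # cycle time = access + recovery
transfer_rate = 1 / (cycle_time_ns * 1e-9)             # (cycle time)^-1, in words per second

print(f"cycle time    = {cycle_time_ns} ns")
print(f"transfer rate = {transfer_rate:.2e} words/s")
```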
56
(No Transcript)
57
Memory Hierarchy
  • Moving down the hierarchy: size increases, speed
    decreases, cost per byte decreases
  • registers
  • in CPU - smallest, fastest, most expensive, most
    frequently accessed
  • internal
  • may include one or more levels of cache - medium
    sized, quick, price varies
  • external memory
  • backing store - largest, slowest, cheapest, least
    frequently accessed
58-62
(No Transcript)