1
Lecture note 8: Memory Hierarchy Misses, 3 Cs and 7 Ways to Reduce Misses
  • Pradondet Nilagupta
  • (Based on notes by Robert F. Hodson, CNU)
  • (Based on notes by Randy H. Katz)

2
Review: Who Cares About the Memory Hierarchy?
  • Processor Only Thus Far in Course
  • CPU cost/performance, ISA, Pipelined Execution
  • CPU-DRAM Gap
  • 1980: no cache in µproc; 1995: 2-level cache, 60% of
    transistors on Alpha 21164 µproc (150 clock cycles
    for a miss!)

3
Review: Four Questions for Memory Hierarchy Designers
  • Q1: Where can a block be placed in the upper level? (Block placement)
  • Fully Associative, Set Associative, Direct Mapped
  • Q2: How is a block found if it is in the upper level? (Block identification)
  • Tag/Block
  • Q3: Which block should be replaced on a miss? (Block replacement)
  • Random, LRU
  • Q4: What happens on a write? (Write strategy)
  • Write Back or Write Through (with Write Buffer)

4
Review: Cache Performance Equations

CPUtime = (CPU execution cycles + Mem stall cycles) × Cycle time
Mem stall cycles = Mem accesses × Miss rate × Miss penalty
CPUtime = IC × (CPI_execution + Mem accesses per instr × Miss rate × Miss penalty) × Cycle time
Misses per instr = Mem accesses per instr × Miss rate
CPUtime = IC × (CPI_execution + Misses per instr × Miss penalty) × Cycle time
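As a quick sanity check on these formulas (the numbers below are illustrative assumptions, not from the slides): with CPI_execution = 1.5, 1.3 memory accesses per instruction, a 5% miss rate, and a 40-cycle miss penalty,

CPUtime = IC × (1.5 + 1.3 × 0.05 × 40) × Cycle time = IC × 4.1 × Cycle time

so the memory-stall component (2.6 cycles per instruction) exceeds the base CPI and dominates execution time.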
5
Improving Memory Performance
[Diagram: CPU <-> Cache <-> Memory]
Latency: single-trip delay. Bandwidth: maximum throughput.
6
Review: Improving Cache Performance
  • 1. Reduce the miss rate,
  • 2. Reduce the miss penalty, or
  • 3. Reduce the time to hit in the cache.

7
Reducing Misses
  • Classifying Misses: the 3 Cs (a measurement sketch follows below)
  • Compulsory: The first access to a block is not in the
    cache, so the block must be brought into the cache.
    These are also called cold start misses or first
    reference misses. (Misses even in an infinite cache)
  • Capacity: If the cache cannot contain all the blocks
    needed during execution of a program, capacity misses
    will occur due to blocks being discarded and later
    retrieved. (Misses in a fully associative cache of the same size)
  • Conflict: If the block-placement strategy is set
    associative or direct mapped, conflict misses (in
    addition to compulsory and capacity misses) will occur
    because a block can be discarded and later retrieved
    if too many blocks map to its set. These are also
    called collision misses or interference misses.
    (Misses in an N-way associative cache)
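The 3-C breakdown can be measured directly from an address trace. Below is a minimal C sketch (our illustration, not part of the original slides; the cache sizes and the trace are assumptions): each reference is replayed against an infinite cache, a fully associative LRU cache of the same capacity, and the real direct-mapped cache; each real miss is then classified by which reference model also missed.

    /* 3-C miss classifier: compulsory = missed even in an infinite cache;
       capacity = also missed in a same-size fully associative LRU cache;
       conflict = missed only in the real direct-mapped cache. */
    #include <stdio.h>
    #include <string.h>

    #define BLOCK   16      /* bytes per block (assumed)          */
    #define NBLOCKS 64      /* cache capacity in blocks (assumed) */

    static unsigned long seen[1024]; static int nseen;        /* infinite cache */
    static unsigned long fa[NBLOCKS]; static int fa_n;        /* fully assoc.   */
    static unsigned long dm[NBLOCKS]; static int dm_v[NBLOCKS]; /* direct map   */

    static int miss_infinite(unsigned long b) {
        for (int i = 0; i < nseen; i++) if (seen[i] == b) return 0;
        seen[nseen++] = b;
        return 1;
    }
    static int miss_fa_lru(unsigned long b) {
        for (int i = 0; i < fa_n; i++)
            if (fa[i] == b) {                     /* hit: move to MRU slot */
                memmove(fa + 1, fa, i * sizeof *fa);
                fa[0] = b;
                return 0;
            }
        if (fa_n < NBLOCKS) fa_n++;               /* miss: insert, drop LRU */
        memmove(fa + 1, fa, (fa_n - 1) * sizeof *fa);
        fa[0] = b;
        return 1;
    }
    static int miss_dm(unsigned long b) {
        int set = (int)(b % NBLOCKS);
        if (dm_v[set] && dm[set] == b) return 0;
        dm[set] = b; dm_v[set] = 1;
        return 1;
    }

    int main(void) {
        unsigned long trace[] = { 0, 64, 1024, 0, 2048, 64 };  /* byte addrs */
        int n = sizeof trace / sizeof trace[0];
        int compulsory = 0, capacity = 0, conflict = 0;
        for (int i = 0; i < n; i++) {
            unsigned long b = trace[i] / BLOCK;
            int inf = miss_infinite(b), cap = miss_fa_lru(b), real = miss_dm(b);
            if (real) {
                if (inf)      compulsory++;
                else if (cap) capacity++;
                else          conflict++;
            }
        }
        printf("compulsory=%d capacity=%d conflict=%d\n",
               compulsory, capacity, conflict);
        return 0;
    }

On this toy trace the block at address 0 is evicted from set 0 by the block at address 1024 and misses again on re-access, even though it would have stayed in the fully associative model: a conflict miss.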

8
3Cs Absolute Miss Rate
[Chart: absolute miss rate vs. cache size, broken into compulsory, capacity, and conflict components]
9
3Cs Relative Miss Rate (scaled to DM miss rate)
[Chart: the same breakdown, normalized to the direct-mapped miss rate]
10
How to Reduce Misses?
  • Increase Block Size
  • Increase Associativity
  • Use a Victim Cache
  • Use a Pseudo Associative Cache
  • Hardware Prefetching
  • Compiler-Controlled Prefetching
  • Compiler Optimizations

11
1. Increase Block Size
  • One way to reduce the miss rate is to increase the block size
  • Reduces compulsory misses - why?
  • Takes advantage of spatial locality
  • However, larger blocks have disadvantages:
  • May increase the miss penalty (need to fetch more data)
  • May increase hit time (need to read more data from the cache, and a larger mux)
  • May increase conflict and capacity misses
  • Increasing the block size can help, but don't overdo it.

12
1. Reduce Misses via Larger Block Size
[Chart: miss rate (%) vs. block size (16, 32, 64, 128, 256 bytes) for cache sizes of 1K, 4K, 16K, 64K, and 256K bytes]
13
2. Reduce Misses via Higher Associativity
  • Increasing associativity helps reduce conflict misses
  • 2:1 Cache Rule:
  • The miss rate of a direct-mapped cache of size N
    is about equal to the miss rate of a 2-way set
    associative cache of size N/2
  • Disadvantages of higher associativity:
  • Need to do a large number of comparisons
  • Need an n-to-1 multiplexer for n-way set associative
  • Could increase hit time

14
Example: Avg. Memory Access Time vs. Associativity
  • Example: assume CCT (clock cycle time) = 1.10 for 2-way, 1.12 for
    4-way, 1.14 for 8-way vs. CCT of direct mapped

  Cache Size (KB)   1-way   2-way   4-way   8-way
    1               7.65    6.60    6.22    5.44
    2               5.90    4.90    4.62    4.09
    4               4.60    3.95    3.57    3.19
    8               3.30    3.00    2.87    2.59
   16               2.45    2.20    2.12    2.04
   32               2.00    1.80    1.77    1.79
   64               1.70    1.60    1.57    1.59
  128               1.50    1.45    1.42    1.44

  • (Red in the original slide marks A.M.A.T. entries not improved by more associativity)
  • Does not take into account effect of slower clock on rest of program
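These table entries are consistent with a one-cycle direct-mapped hit time and a 50-cycle miss penalty (our inference; the slide does not state its parameters). For example, a 13.3% miss rate for the 1 KB direct-mapped cache gives AMAT = 1.00 + 0.133 × 50 = 7.65, matching the first entry, and the 1 KB 2-way entry then implies an 11.0% miss rate: 1.10 + 0.110 × 50 = 6.60.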

15
Avg. Memory Access Time vs Cache Size vs Associativity
[Chart: average memory access time as a function of cache size and associativity]
16
3. Reducing Misses via Victim Cache
  • Add a small fully associative victim cache to hold data discarded from the regular cache
  • When data is not found in the cache, check the victim cache (a lookup sketch follows below)
  • A 4-entry victim cache removed 20% to 95% of conflicts for a 4 KB direct-mapped data cache
  • Get the access time of direct mapped with a reduced miss rate
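As referenced above, a minimal C sketch of the victim-cache lookup path (our illustration; the entry count and swap policy are assumptions, not the exact Jouppi hardware):

    #include <stdbool.h>

    #define SETS     256   /* direct-mapped main cache, in blocks */
    #define VENTRIES 4     /* fully associative victim cache      */

    struct line { unsigned long tag; bool valid; };

    static struct line cache[SETS];       /* one tag per set              */
    static struct line victim[VENTRIES];  /* full block address per entry */

    /* Lookup by block address; returns true on a hit in either structure. */
    bool lookup(unsigned long blk)
    {
        unsigned long set = blk % SETS, tag = blk / SETS;

        if (cache[set].valid && cache[set].tag == tag)
            return true;                   /* ordinary direct-mapped hit */

        for (int i = 0; i < VENTRIES; i++)
            if (victim[i].valid && victim[i].tag == blk) {
                /* Victim hit (a slow hit): swap so the recently used
                   block sits in the main cache. */
                struct line loser = cache[set];
                cache[set].tag   = tag;
                cache[set].valid = true;
                victim[i].tag    = loser.tag * SETS + set; /* full blk addr */
                victim[i].valid  = loser.valid;
                return true;
            }
        return false;   /* true miss: fetch from memory; the evicted main-
                           cache block would move into the victim cache */
    }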

17
4. Reducing Misses via Pseudo-Associativity
  • How to combine the fast hit time of direct mapped with the lower conflict misses of a 2-way SA cache?
  • Divide the cache: on a miss, check the other half of the cache to see if the block is there; if so, it is a pseudo-hit (slow hit)
  • Drawback: CPU pipeline design is hard if a hit takes 1 or 2 cycles
  • Better for caches not tied directly to the processor

[Diagram: timeline showing hit time < pseudo hit time < miss penalty]
18
Example
Compare direct-mapped, 2-way set associative, and pseudo-associative caches using AMAT. It takes two extra cycles to find an entry in the alternative location (one cycle to check and one cycle to swap).

Miss rate_pseudo = Miss rate_2-way
Miss penalty_pseudo = Miss penalty_1-way + 1
Hit time_pseudo = Hit time_1-way + Alternate hit rate_pseudo × 2
Alternate hit rate_pseudo = Hit rate_2-way - Hit rate_1-way = Miss rate_1-way - Miss rate_2-way
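Plugging in illustrative numbers (our assumption, consistent with the earlier associativity table: a 2 KB cache with miss rates of 9.8% direct-mapped and 7.6% 2-way, a one-cycle base hit, and a 50-cycle miss penalty):

Hit time_pseudo = 1 + (0.098 - 0.076) × 2 = 1.044
AMAT_pseudo = 1.044 + 0.076 × (50 + 1) = 4.92

versus 5.90 for the plain direct-mapped cache (1 + 0.098 × 50) and 4.90 for 2-way with its slower clock (1.10 + 0.076 × 50).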
19
5. Reducing Misses by HW Prefetching of Instructions & Data
  • E.g., Instruction Prefetching
  • Alpha 21064 fetches 2 blocks on a miss
  • Extra block placed in a stream buffer
  • On miss, check the stream buffer (see the stream-buffer sketch below)
  • Works with data blocks too:
  • Jouppi [1990]: 1 data stream buffer caught 25% of misses from a 4KB cache; 4 streams caught 43%
  • Palacharla & Kessler [1994]: for scientific programs, 8 streams caught 50% to 70% of misses from two 64KB, 4-way set-associative caches
  • Prefetching relies on extra memory bandwidth that can be used without penalty
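A minimal C sketch of the stream-buffer idea (our illustration, not the Alpha 21064 or Jouppi hardware; the helper functions are hypothetical names):

    #include <stdbool.h>

    static unsigned long stream_buf;   /* block address currently buffered */
    static bool stream_valid;

    /* Hypothetical memory-system hooks, named for illustration only. */
    extern void promote_from_buffer(unsigned long blk); /* buffer->cache, fast */
    extern void fetch_from_memory(unsigned long blk);   /* full miss penalty   */
    extern void prefetch_into_buffer(unsigned long blk);

    /* Called on an instruction-cache miss for block address blk. */
    void icache_miss(unsigned long blk)
    {
        if (stream_valid && stream_buf == blk)
            promote_from_buffer(blk);   /* stream-buffer hit: no memory trip */
        else
            fetch_from_memory(blk);     /* true miss */

        stream_buf = blk + 1;           /* (re)start the sequential stream */
        stream_valid = true;
        prefetch_into_buffer(stream_buf);
    }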

20
6. Reducing Misses by SW Prefetching Data
  • Compiler inserts data prefetch instructions
  • Load data into register (HP PA-RISC loads)
  • Cache Prefetch: load into cache (MIPS IV, PowerPC, SPARC v9)
  • Special prefetching instructions cannot cause faults; a form of speculative execution
  • Issuing prefetch instructions takes time:
  • Is the cost of prefetch issues < savings in reduced misses? (see the prefetch example below)
  • Wider superscalar issue reduces the difficulty of finding issue bandwidth
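As a concrete illustration (our example, not from the slides), GCC and Clang expose a non-faulting cache-prefetch builtin, __builtin_prefetch(addr, rw, locality); the prefetch distance of 16 elements is an assumption to be tuned per machine:

    /* Scale an array, prefetching ahead of the current element.
       The builtin never faults, matching the speculative semantics above. */
    void scale(double *x, int n, double k)
    {
        for (int i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&x[i + 16], 1, 1); /* for write, low reuse */
            x[i] *= k;
        }
    }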

21
7. Reducing Misses by Compiler Optimizations
  • Instructions:
  • Reorder procedures in memory so as to reduce misses
  • Profiling to look at conflicts between groups of instructions
  • McFarling [1989] reduced cache misses by 75% on an 8KB direct-mapped cache with 4-byte blocks
  • Data:
  • Merging Arrays: improve spatial locality by a single array of compound elements vs. 2 arrays (prevents the same indices being used)
  • Loop Interchange: change the nesting of loops to access data in the order stored in memory
  • Loop Fusion: combine 2 independent loops that have the same looping and some variables overlap
  • Blocking: improve temporal locality by accessing blocks of data repeatedly vs. going down whole columns or rows

22
Merging Arrays Example
  • /* Before */
  • int val[SIZE];
  • int key[SIZE];
  • /* After */
  • struct merge {
  •     int val;
  •     int key;
  • };
  • struct merge merged_array[SIZE];
  • Reducing conflicts between val & key; improves spatial locality

23
Array Organizations
[Diagram: a 2-D array X[i][j] laid out in memory in row-major ordering; the elements of row 0 are stored contiguously, followed by row 1, and so on]
24
Loop Interchange Example
  • /* Before */
  • for (j = 0; j < 100; j = j+1)
  •     for (i = 0; i < 5000; i = i+1)
  •         x[i][j] = 2 * x[i][j];
  • /* After */
  • for (i = 0; i < 5000; i = i+1)
  •     for (j = 0; j < 100; j = j+1)
  •         x[i][j] = 2 * x[i][j];
  • Sequential accesses instead of striding through memory every 100 words

25
Loop Fusion Example
  • /* Before */
  • for (i = 0; i < N; i = i+1)
  •     for (j = 0; j < N; j = j+1)
  •         a[i][j] = 1/b[i][j] * c[i][j];
  • for (i = 0; i < N; i = i+1)
  •     for (j = 0; j < N; j = j+1)
  •         d[i][j] = a[i][j] + c[i][j];
  • /* After */
  • for (i = 0; i < N; i = i+1)
  •     for (j = 0; j < N; j = j+1) {
  •         a[i][j] = 1/b[i][j] * c[i][j];
  •         d[i][j] = a[i][j] + c[i][j];
  •     }
  • 2 misses per access to a & c vs. one miss per access

26
Blocking Example
  • /* Before */
  • for (i = 0; i < N; i = i+1)
  •     for (j = 0; j < N; j = j+1) {
  •         r = 0;
  •         for (k = 0; k < N; k = k+1)
  •             r = r + y[i][k] * z[k][j];
  •         x[i][j] = r;
  •     }
  • Two inner loops:
  • Read all NxN elements of z
  • Read N elements of 1 row of y repeatedly
  • Write N elements of 1 row of x
  • Capacity misses are a function of N & cache size:
  • If all 3 NxNx4-byte matrices fit => no capacity misses; otherwise ...
  • Idea: compute on a BxB submatrix that fits

27
Blocking Example
  • /* After */
  • for (jj = 0; jj < N; jj = jj+B)
  •     for (kk = 0; kk < N; kk = kk+B)
  •         for (i = 0; i < N; i = i+1)
  •             for (j = jj; j < min(jj+B-1,N); j = j+1) {
  •                 r = 0;
  •                 for (k = kk; k < min(kk+B-1,N); k = k+1)
  •                     r = r + y[i][k] * z[k][j];
  •                 x[i][j] = x[i][j] + r;
  •             }
  • Capacity misses drop from 2N³ + N² to 2N³/B + N²
  • B is called the Blocking Factor
  • Conflict misses too?

28
Snapshot of Array during Matrix Multiply
[Diagram: snapshot of arrays X, Y, and Z (X = Y x Z) with their i, j, k access indices during matrix multiply. Blue indicates the most recently accessed values.]
29
Snapshot of Array during Matrix Multiply with
Blocking
[Diagram: the same snapshot with blocking: only BxB subblocks of each array are active at a time. Blue indicates the most recently accessed values.]
30
Reducing Conflict Misses by Blocking (by reducing the number of active words in the cache)
  • Conflict misses in caches that are not fully associative vs. blocking size
  • Lam et al [1991]: a blocking factor of 24 had a fifth the misses of 48, despite both fitting in the cache

31
Summary of Compiler Optimizations to Reduce Cache
Misses
32
Summary
  • 3 Cs: Compulsory, Capacity, Conflict Misses
  • Reducing Miss Rate:
  • 1. Reduce Misses via Larger Block Size
  • 2. Reduce Misses via Higher Associativity
  • 3. Reducing Misses via Victim Cache
  • 4. Reducing Misses via Pseudo-Associativity
  • 5. Reducing Misses by HW Prefetching Instr, Data
  • 6. Reducing Misses by SW Prefetching Data
  • 7. Reducing Misses by Compiler Optimizations
  • Remember the danger of concentrating on just one parameter when evaluating performance
  • Next lecture: reducing miss penalty

33
Review: Improving Cache Performance
  • 1. Reduce the miss rate,
  • 2. Reduce the miss penalty, or
  • 3. Reduce the time to hit in the cache.

34
1. Reducing Miss Penalty: Read Priority over Write on Miss
  • Write through with write buffers:
  • RAW conflicts with main memory reads on cache misses
  • If we simply wait for the write buffer to empty, we might increase the read miss penalty
  • Check write buffer contents before a read; if no conflicts, let the memory access continue (see the sketch below)
  • Write back: read miss replacing a dirty block
  • Normal: write the dirty block to memory, and then do the read
  • Instead: copy the dirty block to a write buffer, then do the read, and then do the write
  • CPU stalls less since it restarts as soon as the read is done
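A minimal C sketch of the buffer check mentioned above (our illustration; the buffer size and forwarding policy are assumptions):

    #include <stdbool.h>

    #define WBUF 4   /* pending-write entries (assumed) */

    struct wb_entry { unsigned long addr; unsigned long data; bool valid; };
    static struct wb_entry wbuf[WBUF];

    /* On a read miss, scan the write buffer before going to memory.
       Returns true and forwards the value if the read hits a pending write. */
    bool read_check_write_buffer(unsigned long addr, unsigned long *data)
    {
        for (int i = 0; i < WBUF; i++)
            if (wbuf[i].valid && wbuf[i].addr == addr) {
                *data = wbuf[i].data;  /* RAW conflict: forward newest data */
                return true;
            }
        return false;   /* no conflict: the read may bypass pending writes */
    }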

35
2. Subblock Placement to Reduce Miss Penalty
  • To reduce memory required for tags:
  • Make tags smaller
  • The resulting blocks are large
  • To reduce miss penalty:
  • Don't load the full block on a miss
  • Have valid bits per subblock

[Diagram: cache lines with one tag per block and a valid bit per subblock]
36
3. Reduce Miss Penalty: Early Restart and Critical Word First
  • Don't wait for the full block to be loaded before restarting the CPU
  • Early restart: As soon as the requested word of the block arrives, send it to the CPU and let the CPU continue execution
  • Critical Word First: Request the missed word first from memory and send it to the CPU as soon as it arrives; let the CPU continue execution while filling the rest of the words in the block. Also called wrapped fetch and requested word first
  • Generally useful only for large blocks
  • Spatial locality is a problem: we tend to want the next sequential word, so it is not clear if there is a benefit from early restart

37
4. Reduce Miss Penalty: Non-blocking Caches to Reduce Stalls on Misses
  • A non-blocking cache or lockup-free cache allows the data cache to continue to supply cache hits during a miss
  • Requires an out-of-order execution CPU
  • "Hit under miss" reduces the effective miss penalty by working during a miss vs. ignoring CPU requests
  • "Hit under multiple miss" or "miss under miss" may further lower the effective miss penalty by overlapping multiple misses
  • Significantly increases the complexity of the cache controller, as there can be multiple outstanding memory accesses
  • Requires multiple memory banks (otherwise it cannot be supported)
  • Pentium Pro allows 4 outstanding memory misses

38
Value of Hit Under Miss for SPEC
[Chart: average memory stall time under hit-under-0, hit-under-1, hit-under-2, and hit-under-64 misses, for integer and floating-point SPEC programs]
  • FP programs on average: AMAT = 0.68 -> 0.52 -> 0.34 -> 0.26
  • Int programs on average: AMAT = 0.24 -> 0.20 -> 0.19 -> 0.19
  • 8 KB Data Cache, Direct Mapped, 32B blocks, 16-cycle miss penalty

39
5. Reduce Miss Penalty: Second-Level Cache
  • L2 Equations:
  • AMAT = Hit Time_L1 + Miss Rate_L1 × Miss Penalty_L1
  • Miss Penalty_L1 = Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2
  • AMAT = Hit Time_L1 + Miss Rate_L1 × (Hit Time_L2 + Miss Rate_L2 × Miss Penalty_L2) (a worked example follows below)
  • Definitions:
  • Local miss rate: misses in this cache divided by the total number of memory accesses to this cache (Miss Rate_L2)
  • Global miss rate: misses in this cache divided by the total number of memory accesses generated by the CPU (Miss Rate_L1 × Miss Rate_L2)
  • Global Miss Rate is what matters
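A worked example with illustrative numbers (our assumption, not from the slides): with Hit Time_L1 = 1 cycle, Miss Rate_L1 = 4%, Hit Time_L2 = 10 cycles, local Miss Rate_L2 = 25%, and Miss Penalty_L2 = 100 cycles,

AMAT = 1 + 0.04 × (10 + 0.25 × 100) = 1 + 0.04 × 35 = 2.4 cycles

and the global L2 miss rate is 0.04 × 0.25 = 1% of all CPU accesses.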

40
Local and Global Miss Rates
  • [Chart: local and global miss rates vs. L2 cache size, with a 32 KByte L1 cache]
  • Global miss rate is close to the single-level cache rate provided L2 >> L1
  • Don't use the local miss rate, because it is a function of the miss rate of the first-level cache
  • L2 not tied to the CPU clock cycle!
  • Cost & A.M.A.T.:
  • Generally: fast hit times and fewer misses
  • Since hits in L2 are few, target miss reduction

41
Reducing Misses: Which Apply to L2 Cache?
  • Reducing Miss Rate
  • 1. Reduce Misses via Larger Block Size
  • 2. Reduce Conflict Misses via Higher
    Associativity
  • 3. Reducing Conflict Misses via Victim Cache
  • 4. Reducing Conflict Misses via
    Pseudo-Associativity
  • 5. Reducing Misses by HW Prefetching Instr, Data
  • 6. Reducing Misses by SW Prefetching Data
  • 7. Reducing Capacity/Conf. Misses by Compiler
    Optimizations

42
L2 cache block size & A.M.A.T.
  • [Chart: relative execution time vs. L2 block size]
  • 32KB L1, 512KB L2, 4-byte path to memory; 1 cycle to send the address, 6 cycles to access the data, and 1 word/cycle to transfer the data

43
Reducing Miss Penalty Summary
  • Five techniques
  • Read priority over write on miss
  • Subblock placement
  • Early Restart and Critical Word First on miss
  • Non-blocking Caches (Hit under Miss, Miss under
    Miss)
  • Second Level Cache
  • Can be applied recursively to Multilevel Caches
  • Danger is that time to DRAM will grow with
    multiple levels in between
  • First attempts at L2 caches can make things
    worse, since increased worst case is worse

44
Review: Improving Cache Performance
  • 1. Reduce the miss rate,
  • 2. Reduce the miss penalty, or
  • 3. Reduce the time to hit in the cache.

45
Reducing Hit Time
  • Hit time affects the CPU clock rate
  • Even for machines that take multiple cycles to
    access the cache
  • Techniques
  • Small and simple caches
  • Avoiding address translation
  • Pipelining writes
  • Small subblocks

46
1. Fast Hit Times via Small and Simple Caches
  • With direct-mapped caches, you can overlap the tag check with transmitting the data to the processor
  • It almost always helps to keep the L1 cache small enough to remain on the chip
  • Example: Alpha 21164
  • Level 1: 8KB instruction and 8KB data cache
  • Level 2: 96KB unified cache
  • Both caches are direct mapped and on-chip

47
2. Fast Hits by Avoiding Address Translation
  • Send the virtual address to the cache? Called a Virtually Addressed Cache, or just Virtual Cache, vs. a Physical Cache
  • Every time the process is switched, the cache logically must be flushed; otherwise we get false hits
  • Cost: time to flush + compulsory misses from an empty cache
  • Dealing with aliases (sometimes called synonyms): two different virtual addresses map to the same physical address
  • I/O must interact with the cache, so it needs the virtual address
  • Solution to aliases:
  • HW guarantees that aliases agree in the address bits covering the index field; in a direct-mapped cache they then map to the same block; called page coloring (a sketch follows below)
  • Solution to cache flush:
  • Add a process-identifier tag that identifies the process as well as the address within the process; can't get a hit if the wrong process
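A small C sketch of the page-coloring check referenced above (our illustration; the page and cache sizes are assumptions): the OS accepts a virtual-to-physical mapping only when the address bits that fall inside the cache index agree, so aliases can never land in different sets.

    #define PAGE_SHIFT 12                          /* 4 KB pages (assumed)    */
    #define CACHE_BITS 15                          /* 32 KB direct-mapped     */
    #define COLOR_BITS (CACHE_BITS - PAGE_SHIFT)   /* index bits above offset */
    #define COLOR_MASK ((1u << COLOR_BITS) - 1)

    static unsigned page_color(unsigned long addr)
    {
        return (unsigned)(addr >> PAGE_SHIFT) & COLOR_MASK;
    }

    /* Accept a physical frame for a virtual page only when the colors
       match; virtual and physical addresses then select the same set. */
    static int mapping_ok(unsigned long vaddr, unsigned long paddr)
    {
        return page_color(vaddr) == page_color(paddr);
    }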

48
Virtually Addressed Caches

[Diagram: three cache organizations.
(1) Conventional organization: CPU issues a VA; the TB (translation buffer) translates it to a PA; the cache is indexed and tagged with the PA; misses go to MEM.
(2) Virtually addressed cache: the CPU's VA indexes a cache with VA tags; translation happens only on a miss; suffers the synonym problem.
(3) Overlapped: cache access with the VA proceeds in parallel with translation; requires the index to remain invariant across translation; the L2 cache and MEM use the PA.]
49
2. Fast Cache Hits by Avoiding Translation: Process ID Impact
  • [Chart: miss rate vs. cache size under three schemes]
  • Purple is uniprocess
  • Yellow is multiprocess when the cache is flushed
  • Red is multiprocess when a Process ID tag is used
  • Y axis: miss rates up to 20%
  • X axis: cache size from 2 KB to 1024 KB

50
Avoiding Translation During Indexing: Virtually Indexed and Physically Tagged
  • If the index is in the physical part of the address, we can start the tag access in parallel with translation, then compare against the physical tag
  • Limits cache to page size: what if we want bigger caches while using the same trick?
  • Higher associativity moves the barrier to the right
  • Page coloring

51
3. Fast Hit Times via Pipelined Writes
  • Pipeline the tag check and cache update as separate stages: the current write checks its tag while the previous write updates the cache
  • Only STORES are in the pipeline; it empties during a miss

52
Pipelined Writes
[Diagram: CPU write path with a delayed write buffer between the tag check and the data array]
  • The shaded delayed write buffer must be checked on reads; either complete the write or read from the buffer

53
4. Fast Writes on Misses via Small Subblocks
  • If most writes are 1 word, the subblock size is 1 word, and write through is used, then always write the subblock & tag immediately:
  • Tag match and valid bit already set: Writing the block was proper, & nothing is lost by setting the valid bit on again.
  • Tag match and valid bit not set: The tag match means that this is the proper block; writing the data into the subblock makes it appropriate to turn the valid bit on.
  • Tag mismatch: This is a miss and will modify the data portion of the block. Since it is a write-through cache, no harm was done; memory still has an up-to-date copy of the old value. Only the tag to the address of the write and the valid bits of the other subblocks need be changed, because the valid bit for this subblock has already been set
  • Doesn't work with write back, due to the last case

54
What is the Impact of What You've Learned About Caches?
  • 1960-1985: Speed = f(no. of operations)
  • 1990:
  • Pipelined Execution & Fast Clock Rate
  • Out-of-Order execution
  • Superscalar Instruction Issue
  • 1998: Speed = f(non-cached memory accesses)
  • What does this mean for:
  • Compilers? Operating Systems? Algorithms? Data Structures?

55
Cache Optimization Summary
  Technique                           MR   MP   HT   Complexity
  Larger Block Size                   +    -         0
  Higher Associativity                +         -    1
  Victim Caches                       +              2
  Pseudo-Associative Caches           +              2
  HW Prefetching of Instr/Data        +              2
  Compiler-Controlled Prefetching     +              3
  Compiler Reduce Misses              +              0
  Priority to Read Misses                  +         1
  Subblock Placement                       +    +    1
  Early Restart & Critical Word 1st        +         2
  Non-Blocking Caches                      +         3
  Second-Level Caches                      +         2
  Small & Simple Caches               -         +    0
  Avoiding Address Translation                  +    2
  Pipelining Writes                             +    1

  MR = miss rate, MP = miss penalty, HT = hit time (+ means the technique improves the factor, - means it hurts it)