1
Caches & Caching (Part 2)
  • By: Luong Hoang

2
What will be discussed?
  • Multi-level Cache Hierarchy
  • Preloading Caches
  • Caches Used with Memory
  • TLB as a Cache
  • Demand Paging as a Form of Caching
  • Physical Memory Cache
  • Write Through / Write Back
  • Cache Coherence
  • L1, L2, L3 Caches
  • Instruction & Data Caches
  • Virtual Memory Caching / Cache Flush
  • Direct Mapping Memory Cache
  • Set Associative Memory Cache

3
Hierarchy
[Diagram: memory hierarchy pyramid - levels toward the top are FASTER but SMALLER; levels toward the bottom are SLOWER but LARGER]
4
  • The concept of a multi-level cache is to use a cache
    to improve the performance of another cache
  • A 2nd cache improves performance provided that the
    cost of accessing the 2nd cache is lower than the
    cost of accessing the original

5
Cost Equation
Remember the old cost equation? Cost rCh
(1-r)Cm
Now with the 2nd cache the Cost equation
is Cost r1Ch1 r2Ch2 (1-r1-r2)Cm r1 hit
ratio of new cache r2 hit ratio of orignial
cache Ch1 cost of accessing new cache
Ch2 cost of accessing original cache
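
To make the formula concrete, here is a minimal sketch in Python; the hit ratios and access costs below are made-up illustrative numbers, not values from the slides.

# Effective access cost with a single cache in front of memory:
#   Cost = r*Ch + (1 - r)*Cm
def one_level_cost(r, Ch, Cm):
    return r * Ch + (1 - r) * Cm

# Effective access cost after adding a 2nd cache:
#   Cost = r1*Ch1 + r2*Ch2 + (1 - r1 - r2)*Cm
def two_level_cost(r1, Ch1, r2, Ch2, Cm):
    return r1 * Ch1 + r2 * Ch2 + (1 - r1 - r2) * Cm

# Assumed costs in nanoseconds: the new cache is fast, memory is expensive.
print(one_level_cost(r=0.90, Ch=10, Cm=100))                      # 19.0
print(two_level_cost(r1=0.80, Ch1=2, r2=0.10, Ch2=10, Cm=100))    # 12.6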
6
Preloading Caches
  • Many cache systems perform better in a steady state
    than during startup.
  • To reduce this high cost during startup, we can
    preload the cache.
  • Of course, this only works where the cache can
    foresee the requests.
  • Example: an ISP's web cache can be preloaded with
    pages that have been accessed frequently (see the
    sketch after this list)
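
A minimal sketch of the idea in Python; the cache dict and the fetch_page() helper are illustrative stand-ins, not part of the slides.

# Hypothetical page fetcher; a real web cache would issue an HTTP request here.
def fetch_page(url):
    return "<html>contents of " + url + "</html>"

cache = {}

def preload(frequent_urls):
    # Warm the cache before any client request arrives,
    # so the very first requests can already hit.
    for url in frequent_urls:
        cache[url] = fetch_page(url)

def lookup(url):
    # Serve from the cache when possible; fall back to a fetch on a miss.
    if url not in cache:
        cache[url] = fetch_page(url)
    return cache[url]

preload(["http://example.com/", "http://example.com/news"])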

7
Caches Used with Memory
  • Memory is both expensive and slow, so there was a
    need for ways to improve performance without the
    cost of high-speed memory
  • It was discovered that a small amount of cache
    dramatically improved performance
  • By 1980, many computer systems had a single cache
    located between the processor and memory

8
TLB as a Cache
What does TLB stand for?
TLB = Translation Lookaside Buffer
The TLB is used to optimize the performance of a paging
system. The TLB is nothing more than a cache: when the
MMU (Memory Management Unit) looks up a page table
entry, it stores the entry in the TLB. A later reference
to the same page receives its answer directly from the
TLB. The TLB uses an LRU replacement strategy.
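
A minimal sketch of a TLB as a small LRU cache sitting in front of a page table; the page-table contents and the 4-entry capacity are assumptions for illustration.

from collections import OrderedDict

PAGE_TABLE = {0: 100, 1: 101, 2: 102, 3: 103, 4: 104}   # page -> frame (assumed)
TLB_SIZE = 4                                             # entries (assumed)

tlb = OrderedDict()   # most recently used entries kept at the end

def translate(page):
    # TLB hit: refresh the entry's recency and answer from the TLB.
    if page in tlb:
        tlb.move_to_end(page)
        return tlb[page]
    # TLB miss: consult the page table (slow), then cache the entry.
    frame = PAGE_TABLE[page]
    tlb[page] = frame
    if len(tlb) > TLB_SIZE:
        tlb.popitem(last=False)   # evict the least recently used entry
    return frame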
9
Demand Paging as a Form of Caching
  • Demand Paging - a technique that follows the same
    general scheme as segmentation:
  • 1.) divides a program into pieces
  • 2.) keeps the pieces on external storage until
    needed
  • 3.) loads an individual piece when the piece is
    referenced

The cache corresponds to main memory, and the data store
corresponds to the external storage where pages are kept
until needed. The page replacement policy corresponds to
the cache replacement policy (a minimal sketch follows).
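
A minimal sketch of the analogy, treating main memory as a size-limited dict in front of pages on disk; the 3-frame capacity and the read_from_disk() helper are illustrative assumptions.

MEMORY_CAPACITY = 3   # number of resident page frames (assumed)
memory = {}           # page number -> page contents currently in main memory

def read_from_disk(page):
    # Stand-in for the slow load of a piece from external storage.
    return "contents of page " + str(page)

def access(page):
    if page in memory:                   # the piece is already resident: a hit
        return memory[page]
    data = read_from_disk(page)          # page fault: load the piece on demand
    if len(memory) >= MEMORY_CAPACITY:   # page replacement mirrors cache replacement
        victim = next(iter(memory))      # simplest possible policy: evict the oldest entry
        del memory[victim]
    memory[page] = data
    return data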
10
Physical Memory Cache
  • Early computers used a physical memory system,
    meaning that the processor specified a physical
    address and the memory system responded to that
    physical address. So the cache had to understand
    and use physical addresses.
  • A physical memory cache is designed to search the
    local cache and access the underlying memory
    simultaneously.

11
Write Through/ Write Back
  • Write-Through Cache
  • Forwards each write to memory, but the cache must
    also check whether the item is in the cache; if so,
    the cache must update its copy
  • The cache keeps a copy and forwards the write
    operation to the underlying memory
  • Write-Back Cache
  • The cache keeps the data item locally.
  • It writes the value to memory only when the value
    reaches the end of the LRU list and must be
    replaced (see the sketch after this list)
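
A minimal sketch contrasting the two policies in Python; MEMORY, cache, and the evict() step are illustrative simplifications rather than hardware detail from the slides.

MEMORY = {}      # stands in for the underlying memory
cache = {}       # address -> value copies held by the cache
dirty = set()    # write-back only: addresses changed in the cache but not yet in memory

def write_through(addr, value):
    # Keep the cache's copy up to date and forward the write to memory right away.
    cache[addr] = value
    MEMORY[addr] = value

def write_back(addr, value):
    # Keep the value only in the cache; remember that memory is now stale.
    cache[addr] = value
    dirty.add(addr)

def evict(addr):
    # When a value must be replaced, a write-back cache first flushes it to memory.
    if addr in dirty:
        MEMORY[addr] = cache[addr]
        dirty.discard(addr)
    cache.pop(addr, None)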

12
Cache Coherence
  • In a multiprocessor environment, performance is
    optimized by giving each processor its own cache.
    But the two optimizations can conflict.
  • To keep two caches from disagreeing when they read
    and write the same address, all devices that access
    memory must follow a cache coherence protocol.
  • Requires one cache to inform the others when it
    reads from or writes to a shared address, so it
    basically requires the caches to communicate (a
    sketch follows this list)
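
A minimal sketch of the caches-must-communicate idea, using a simple invalidate-on-write scheme; the class, its method names, and the write-through behavior are illustrative assumptions, not a specific real protocol.

class CoherentCache:
    def __init__(self, memory):
        self.memory = memory   # shared underlying memory (a dict)
        self.peers = []        # the other caches that must be kept informed
        self.lines = {}        # address -> locally cached value

    def read(self, addr):
        if addr not in self.lines:
            self.lines[addr] = self.memory[addr]   # fill on a miss
        return self.lines[addr]

    def write(self, addr, value):
        for peer in self.peers:
            peer.lines.pop(addr, None)   # inform the others: their copies are stale
        self.lines[addr] = value
        self.memory[addr] = value        # write through, for simplicity

memory = {0: 7}
a, b = CoherentCache(memory), CoherentCache(memory)
a.peers.append(b)
b.peers.append(a)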

13
L1, L2, L3
  • L1 cache refers to the cache on board the processor
  • L2 cache is an external cache
  • L3 cache is the cache built into the physical memory
  • This is a multi-level cache hierarchy; as stated
    before, adding another cache can improve
    performance

Interesting Note: with chip sizes now so large, the
latest processors can integrate a cache hierarchy on
the chip itself, so the differences between L1 and L2
are weakening.
14
Example of AMD Athlon Processor with L1, L2 caches
15
[Image-only slide - no transcript]
16
Instruction & Data Caches
  • Should all memory references pass through a
    single cache?
  • Instruction fetches tend to be at adjacent
    addresses
  • Data accesses in some programs are usually at
    random addresses, not always close to one another
  • Point: inserting random references into the series
    of requests tends to worsen cache performance

17
Virtual Memory Caching/Cache Flush
Virtual Memory Caching - When virtual memory is used
with caching, using virtual addresses increases memory
access speed because the cache can respond before the
MMU translates the virtual address into a physical
address. However, it needs extra hardware that allows
the cache to interact with the virtual memory system.

But when the operating system changes from one
application to another, it must change items in the
cache. Why? The new application will use the same
addresses to refer to a NEW set of values. What to do?
One way is to do a Cache Flush, which removes all
values from the cache.
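
A minimal sketch of flushing a virtually addressed cache on a context switch; the cache dict and the context_switch() wrapper are illustrative assumptions.

cache = {}   # virtual address -> value cached for the *current* application

def cache_flush():
    # Remove every value so the next application cannot see stale data
    # that happens to live at the same virtual addresses.
    cache.clear()

def context_switch(next_app):
    cache_flush()     # one simple way to keep the cache correct across applications
    return next_app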
18
Implementation of Memory Caching
  • Each entry in a memory cache contains 2 values
  • 1.) a memory address
  • 2.) the value of the byte found at that address
  • But storing entire addresses with each entry is
    inefficient
  • There are 2 different implementations to reduce
    the amount of space needed
  • Direct Mapping Memory Cache
  • Set Associative Memory Cache

19
Direct Mapping Memory Cache
  • The cache divides the memory and the cache into a
    set of equal-size blocks (the block size is a power
    of 2)
  • A unique tag value is assigned to each group of
    blocks
  • The point of tags is to identify large groups of
    bytes - a tag uses fewer bits to identify a
    section of memory than a full memory address (see
    the sketch after this list)
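
A minimal sketch of how a direct-mapped cache splits an address into tag, slot, and offset; the block size, number of slots, and the example address are assumptions chosen to keep the arithmetic visible.

BLOCK_SIZE = 4    # bytes per block, a power of 2 (assumed)
NUM_SLOTS  = 4    # number of blocks the cache can hold (assumed)

def split_address(addr):
    offset = addr % BLOCK_SIZE     # which byte within the block
    block  = addr // BLOCK_SIZE    # which block of memory
    slot   = block % NUM_SLOTS     # each block maps to exactly one cache slot
    tag    = block // NUM_SLOTS    # identifies which group of blocks the slot holds
    return tag, slot, offset

# cache[slot] stores (tag, block data); a lookup hits only when the stored tag matches.
print(split_address(43))   # -> (2, 2, 3): block 10 maps to slot 2 under tag 2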

20
[Diagram: memory shown as blocks 0-3 repeated under TAG 0, TAG 1, and TAG 2; the cache holds one tag/value entry for each of slots 0-3]
21
Set Associative Memory Cache
  • Uses hardware parallelism to give more
    flexibility. As parallelism increases, set
    associative memory cache performance increases
  • It does so by having multiple underlying caches and
    hardware to search all of them concurrently.
  • Unlike a direct mapping cache, since there are
    several underlying caches, addresses that map to
    the same slot do not have to contend for a single
    place in the cache (see the sketch after this list)
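
A minimal sketch of a 2-way set associative lookup; the hardware's parallel search is simulated with a loop here, and the sizes and helper names are assumptions for illustration.

NUM_WAYS  = 2   # number of underlying caches searched in parallel (assumed)
NUM_SLOTS = 4   # slots per underlying cache (assumed)

ways = [dict() for _ in range(NUM_WAYS)]   # each way maps slot -> (tag, value)

def lookup(tag, slot):
    # Hardware would probe every way at the same time; we loop instead.
    for way in ways:
        entry = way.get(slot)
        if entry is not None and entry[0] == tag:
            return entry[1]          # hit in one of the underlying caches
    return None                      # miss: no way holds this tag for this slot

def insert(tag, slot, value):
    # Two addresses with the same slot can coexist, one per way.
    for way in ways:
        if slot not in way:
            way[slot] = (tag, value)
            return
    ways[0][slot] = (tag, value)     # all ways full: replace (simplest choice)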

22
[Diagram: two caches side by side, each with a tag/value entry for slots 0-3]
This increases the chances of a cache hit by using
several caches instead of one