1
Riyadh Philanthropic Society For Science
Prince Sultan College For Woman
Dept. of Computer & Information Sciences
CS 251 Introduction to Computer Organization & Assembly Language
Lecture 9 (Memory Organization)
2
Lecture Outline
  • Memory Organization
    • Bits
    • Memory Addresses
    • Words
    • Byte Ordering
    • Summary
  • Cache Memory
    • Introduction
    • Locality Principle
    • Design Issues
    • Cache Memory
  • Reading: 2.2 - 2.2.3, 2.2.5 - 2.2.6

3
Memory Organization - Bits
  • The basic unit of memory is the binary digit, called a bit.
  • A bit may contain a 0 or a 1.
  • Bits are organized into cells (locations), each of which can store a
    piece of information.

4
Memory Organization - Memory Addresses
  • Each cell has a number, called its address, by which programs can
    refer to it.
  • If a memory has k cells, they will have addresses 0 to k - 1.
  • All cells in a memory contain the same number of bits.
  • Adjacent cells have consecutive addresses (by definition).

5
Memory Organization

Three ways of organizing a 96-bit memory
[Figure: the memory drawn as 12 cells of 8 bits (addresses 0-11), as
8 cells of 12 bits (addresses 0-7), and as 6 cells of 16 bits
(addresses 0-5).]
6
Memory Organization
  • Memory addresses are expressed as binary numbers.
  • Ex. An address used to reference a memory with 12 cells needs at
    least 4 bits in order to express all the numbers from 0 to 11
    (see the sketch below).
  • The number of bits in the address determines the maximum number of
    directly addressable cells in the memory and is independent of the
    number of bits per cell.
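A minimal sketch in C, not from the lecture, that reproduces the
arithmetic above: it counts how many address bits a memory of k cells
needs, and how many cells n address bits can reach.

#include <stdio.h>

/* Smallest number of address bits needed to address k cells. */
static unsigned min_address_bits(unsigned long k)
{
    unsigned bits = 0;
    unsigned long cells = 1;
    while (cells < k) {        /* grow 2^bits until it covers all k cells */
        cells <<= 1;
        bits++;
    }
    return bits;
}

int main(void)
{
    printf("12 cells need %u address bits\n", min_address_bits(12)); /* 4  */
    printf("4 address bits reach %lu cells\n", 1UL << 4);            /* 16 */
    return 0;
}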

7
Memory Organization - Words
  • The significance of the cell is that it is the smallest addressable
    unit.
  • Nearly all computer manufacturers have standardized on an 8-bit
    cell, which is called a byte.
  • Bytes are grouped into words.
  • A computer with a 32-bit word has 4 bytes/word, whereas a computer
    with a 64-bit word has 8 bytes/word.
  • Most instructions operate on entire words (ex. adding two words).
  • Thus, a 32-bit machine will have 32-bit registers and instructions
    for manipulating 32-bit words.

8
Memory Organization - Words
  • What about a 64-bit machine?
  • It will have 64-bit registers and instructions for moving, adding,
    subtracting, and otherwise manipulating 64-bit words (a quick way to
    check a machine's word size is sketched below).
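A minimal C sketch, not part of the lecture, that uses the size of a
pointer as a rough indicator of the word size; treating pointer size as
the word size is an assumption that holds on typical 32-bit and 64-bit
machines.

#include <stdio.h>

int main(void)
{
    /* On a typical 64-bit machine a pointer occupies 8 bytes,
       on a typical 32-bit machine it occupies 4 bytes. */
    printf("bytes per word (pointer size): %zu\n", sizeof(void *));
    printf("bits per word:                 %zu\n", sizeof(void *) * 8);
    return 0;
}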

9
Memory Organization - Byte Ordering
  • There are two conventions for storing the bytes of a word in
    memory: big endian and little endian (a small sketch for detecting
    which one a machine uses follows the figure).

[Figure: byte numbering in a 16-byte memory of four 32-bit words at
addresses 0, 4, 8 and 12.
Big endian: the byte numbering begins at the high-order ("big") end, so
the words hold bytes 0 1 2 3, 4 5 6 7, 8 9 10 11, 12 13 14 15; a 32-bit
integer such as 6 (binary 0110) has its value in byte 3 (or 7, or 11,
etc.).
Little endian: the byte numbering begins at the low-order ("little")
end, so the words hold bytes 3 2 1 0, 7 6 5 4, 11 10 9 8, 15 14 13 12;
the same integer has its value in byte 0 (or 4, or 8, etc.).]
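A minimal C sketch, not from the lecture, that detects the convention of
the machine it runs on by storing the 32-bit integer 6 (the figure's
example) and checking which byte holds the value.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t value = 6;
    unsigned char *bytes = (unsigned char *)&value;

    if (bytes[0] == 6)
        printf("little endian: the value sits in byte 0 of the word\n");
    else
        printf("big endian: the value sits in byte 3 of the word\n");

    /* Print all four bytes, lowest address first. */
    for (int i = 0; i < 4; i++)
        printf("byte %d = %u\n", i, (unsigned)bytes[i]);
    return 0;
}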
10
Memory Organization - Byte Ordering
  • A problem occurs when the data is a combination of integers,
    character strings, and other data types.
  • The problem appears when two different types of computers transfer
    data between them.

[Figure: the record "JIM SMITH" followed by the 32-bit integer 6, stored
in four 32-bit words at addresses 0, 4, 8 and 12 (_ marks the space
character).
Big endian:     J I M _ | S M I T | H 0 0 0 | 0 0 0 6
Little endian:  _ M I J | T I M S | 0 0 0 H | 0 0 0 6]
11
Memory Organization - Byte Ordering
  • Transfer from a big endian to a little endian machine, one byte at a
    time:
  • The transmission has reversed the order of the characters within
    each word (which is OK), but the order of the bytes in the integer
    has also been reversed (which is not required).
  • Therefore the original integer 6 is changed and garbled.
  • Swapping the bytes within each word still does not solve the entire
    problem (see the byte-swap sketch after the figure).
  • Solution: a standard for byte ordering when exchanging data between
    different machines should be agreed on.

[Figure: the same record after a byte-by-byte transfer from the big
endian machine to the little endian machine (_ marks the space
character).
Big endian source:        J I M _ | S M I T | H 0 0 0 | 0 0 0 6
Little endian, as stored: _ M I J | T I M S | 0 0 0 H | 6 0 0 0
The integer 6 has ended up in the wrong byte.]
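A minimal C sketch, not part of the lecture: reversing the byte order of
a 32-bit word, the kind of swap needed when two machines agree on a byte
ordering for the integers they exchange. As the slide notes, blindly
swapping every word is not enough by itself, since it would also reverse
character strings.

#include <stdio.h>
#include <stdint.h>

/* Reverse the byte order of a 32-bit word. */
static uint32_t swap32(uint32_t x)
{
    return  (x >> 24)                 /* top byte to the bottom    */
          | ((x >> 8) & 0x0000FF00u)  /* second byte down one step */
          | ((x << 8) & 0x00FF0000u)  /* third byte up one step    */
          |  (x << 24);               /* bottom byte to the top    */
}

int main(void)
{
    uint32_t garbled = 0x06000000u;  /* the integer 6 after a naive transfer */
    printf("after swapping: %u\n", (unsigned)swap32(garbled));  /* prints 6 */
    return 0;
}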
12
Memory Organization - Summary

[Figure: a memory of 2^n words, each M bits wide; the addresses run from
00...0 up to 2^n - 1, and 2^n is the memory size.]
  • The memory unit is specified by the number of words it contains and
    the number of bits in each word (the memory width); ex. a memory of
    2^12 = 4096 words of 16 bits each has a 12-bit address and a 16-bit
    width.

13
Introduction - Cache
  • Historically, CPUs have always been faster than memories.
  • As memories have improved, so have CPUs, preserving the imbalance.
  • What this imbalance means in practice is that after the CPU issues a
    memory request, it will not get the word it needs for many CPU
    cycles.
  • The slower the memory, the more cycles the CPU will have to wait.

14
Introduction - Cache
  • CPU designers keep using new techniques to make CPUs go even
    faster.
  • Memory designers have usually used new technology to increase the
    capacity of their chips, not the speed.
  • So the problem appears to be getting worse over time.
  • Actually, the problem is not technology, but economics.
  • Engineers know how to build memories that are as fast as CPUs, but
    to run at full speed, they have to be located on the CPU chip
    (because going over the bus to memory is very slow).
  • Putting a large memory on the CPU chip makes the chip bigger, which
    makes it more expensive (and there are also limits to how big a CPU
    chip can be made).

15
Introduction - Cache
  • Thus, the choice comes down to having
    • a small amount of fast memory, or
    • a large amount of slow memory.
  • What we would prefer is a large amount of fast memory at a low
    price.
  • Techniques are known for combining a small amount of fast memory
    with a large amount of slow memory to get the speed of the fast
    memory (almost) and the capacity of the large memory at a moderate
    price.
  • The small, fast memory is called a cache.

16
Cache Memories
  • The basic idea behind a cache is simple:
    • The most heavily used memory words are kept in the cache.
    • When the CPU needs a word, it first looks in the cache.
    • If the word is not there, the CPU looks in the main memory.
  • If a substantial fraction of the words are in the cache, the average
    access time can be greatly reduced.
  • Thus, success or failure depends on what fraction of the words are
    in the cache (a small simulation of this lookup is sketched below).
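A minimal sketch in C, not from the lecture and with made-up names, of
the lookup behaviour just described: a tiny direct-mapped cache of
one-word entries in front of a slow "main memory" array. On a miss the
word is fetched from memory and kept in the cache so that the next
reference to it hits.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NLINES 8u            /* number of one-word entries in the toy cache */

typedef struct {
    bool     valid;          /* does this entry hold anything yet?          */
    uint32_t address;        /* which memory word it currently holds        */
    uint32_t data;
} Line;

static Line     cache[NLINES];
static uint32_t memory[1024];          /* stand-in for slow main memory     */

/* The CPU first looks in the cache; on a miss the word is fetched from
   main memory and kept in the cache for next time. */
static uint32_t read_word(uint32_t address, bool *hit)
{
    Line *line = &cache[address % NLINES];       /* direct mapping          */
    if (line->valid && line->address == address) {
        *hit = true;                             /* found in the fast cache */
        return line->data;
    }
    *hit = false;                                /* miss: go to main memory */
    line->valid   = true;
    line->address = address;
    line->data    = memory[address];
    return line->data;
}

int main(void)
{
    bool hit;
    memory[42] = 7;
    read_word(42, &hit);                         /* first access: a miss    */
    read_word(42, &hit);                         /* second access: a hit    */
    printf("second access was a %s\n", hit ? "hit" : "miss");
    return 0;
}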

17
Cache Memories
  • As we know, programs do not access their memories completely at
    random.
  • If a given memory reference is to address A, it is likely that the
    next memory reference will be in the general vicinity of A.
  • Examples:
    • Program instructions (except for branches and procedure calls)
      are fetched from consecutive locations in memory.
    • Most program execution time is spent in loops, in which a limited
      number of instructions are executed over and over.
  • The observation that the memory references made in any short time
    interval tend to use only a small fraction of the total memory is
    called the locality principle.

18
Locality Principle
  • This principle forms the basis for all caching systems.
  • The general idea is that when a word is referenced, it and some of
    its neighbors are brought from the large slow memory into the
    cache, so that the next time it is used, it can be accessed
    quickly.
  • The cache is logically between the CPU and main memory.
  • Physically, there are several possible places it could be located.

19
Locality Principle
  • If a word is read or written k times in a short interval, the
    computer will need
    • 1 reference to slow memory, and
    • k - 1 references to fast memory.
  • The larger k is, the better the overall performance.
  • Thus, the mean access time is c + (1 - h)m, where c is the cache
    access time, m is the main-memory access time, and h is the hit
    ratio (the fraction of references satisfied by the cache); here
    h = (k - 1)/k (a worked example is sketched below).
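A minimal C sketch of the formula above; the values chosen for c, m, and
k are assumptions picked for illustration, not numbers from the lecture.

#include <stdio.h>

int main(void)
{
    double c = 1.0;     /* cache access time in cycles (assumed)        */
    double m = 50.0;    /* main-memory access time in cycles (assumed)  */
    double k = 10.0;    /* times a word is used in a short interval     */

    double h    = (k - 1.0) / k;      /* hit ratio: k-1 of k references hit */
    double mean = c + (1.0 - h) * m;  /* mean access time                   */

    printf("h = %.2f, mean access time = %.1f cycles\n", h, mean);
    return 0;
}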

20
Locality Principle
21
Locality Principle
  • Using the locality principle as a guide, main memories and caches
    are divided up into fixed-size blocks.
  • The blocks inside the cache are commonly referred to as cache lines.
  • When a cache miss occurs, the entire cache line is loaded from the
    main memory into the cache, not just the word needed.
  • Ex. With a 64-byte line size, a reference to memory address 261 will
    pull the line consisting of bytes 256 to 319 into one cache line
    (the arithmetic is sketched below). With a little bit of luck, some
    of the other words in the cache line will be needed shortly.
  • Cache design is an increasingly important subject for
    high-performance CPUs.
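A minimal C sketch of the example above: with an assumed 64-byte line
size, it computes which block address 261 falls in and which bytes are
pulled into the cache line on a miss.

#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE 64u

int main(void)
{
    uint32_t address = 261;
    uint32_t block   = address / LINE_SIZE;      /* block number 4  */
    uint32_t start   = block * LINE_SIZE;        /* first byte: 256 */
    uint32_t end     = start + LINE_SIZE - 1;    /* last byte:  319 */

    printf("address %u lies in block %u (bytes %u to %u)\n",
           address, block, start, end);
    return 0;
}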

22
Design Issues

  • 1. Cost: the bigger the cache, the better it performs, but the more
    it costs.
  • 2. Size of the cache line: a 16-KB cache can be divided up into
    • 1K lines of 16 bytes,
    • 2K lines of 8 bytes, etc.
  • 3. Cache organization: that is, how does the cache keep track of
    which memory words are currently being held?
  • 4. Number of caches: it is common these days to have chips with
    • a primary cache on chip,
    • a secondary cache off chip but in the same package as the CPU
      chip,
    • a third cache still further away.

23
Design Issues
  • 5. Whether instructions and data are kept in the same cache:
    • Unified cache: instructions and data use the same cache (a
      simpler design).
    • Split cache (Harvard architecture): two separate caches, one for
      instructions and another for data.

What are the advantages of the Harvard architecture?
  • It allows parallel access to instructions and data.
  • The contents of the instruction cache never have to be written back
    to memory, since instructions are not normally modified during
    execution.
