UNIVERSITY OF COLOMBO - PowerPoint PPT Presentation

About This Presentation
Title:

UNIVERSITY OF COLOMBO

Description:

UNIVERSITY OF COLOMBO – PowerPoint PPT presentation

Slides: 37
Provided by: BIT18

Transcript and Presenter's Notes

Title: UNIVERSITY OF COLOMBO


1
UNIVERSITY OF COLOMBO SCHOOL OF
COMPUTING
IT2101 Computer Architecture Operating Systems
DEGREE OF BACHELOR OF INFORMATION TECHNOLOGY
2
Agenda
Review of Past Examination Questions
3
  • Questions 23 and 24 are based on the following
    assembly code fragment, typical of adding two
    integers.

    MOV R1, 100
    MOV R2, 100
    MOV (R1), 50
    ADD R2, (R1)

4
After completion of the code execution
Q23
  • (a) memory location 100 contains value 100.
  • (b) memory location 50 contains value 100.
  • (c) memory location 100 contains value 50.
  • (d) memory location 100 contains value 150.
  • (e) memory location 50 contains value 150.

5
At the end of execution of the code, which of the
following are the values of registers R1 and R2
respectively?
Q24
  • (a) R1 = 100, R2 = 100
  • (b) R1 = 100, R2 = 150
  • (c) R1 = 150, R2 = 100
  • (d) R1 = 100, R2 = 250
  • (e) R1 = 150, R2 = 250

6
Addressing Modes
7
  • MOV R1, 100 /* Move literal value 100 to R1 */
    MOV R2, 100 /* Move literal value 100 to R2 */
    MOV (R1), 50 /* Move literal value 50 to the
    memory location R1 points to, i.e. location 100 */
    ADD R2, (R1) /* Add the contents of the memory
    location R1 points to, to R2 */

8
MOV R1, 100
R1 = 100
9
MOV R2, 100
R1 = 100, R2 = 100
10
MOV (R1), 50
R1 = 100, R2 = 100, memory[100] = 50
11
ADD R2, (R1)
R2 ← R2 + (R1) = 100 + 50
R1 = 100, R2 = 150, memory[100] = 50
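The trace above can be checked with a short simulation. This is a sketch, not from the slides: the `registers` and `memory` dicts are illustrative stand-ins for the CPU registers and main memory.

```python
# Simulate the four instructions with a register dict and a memory dict.
registers = {"R1": 0, "R2": 0}
memory = {}

registers["R1"] = 100          # MOV R1, 100  (literal to register)
registers["R2"] = 100          # MOV R2, 100  (literal to register)
memory[registers["R1"]] = 50   # MOV (R1), 50 (literal to memory, register indirect)
registers["R2"] += memory[registers["R1"]]  # ADD R2, (R1) (indirect operand)

print(registers["R1"], registers["R2"], memory[100])  # 100 150 50
```

The final state (memory location 100 holds 50; R1 = 100, R2 = 150) is exactly what Q23 and Q24 ask about.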
12
After completion of the code execution
Q23
  • (a) memory location 100 contains value 100.
  • (b) memory location 50 contains value 100.
  • (c) memory location 100 contains value 50.
  • (d) memory location 100 contains value 150.
  • (e) memory location 50 contains value 150.

13
At the end of execution of the code, which of the
following are the values of registers R1 and R2
respectively?
Q24
  • (a) R1 = 100, R2 = 100
  • (b) R1 = 100, R2 = 150
  • (c) R1 = 150, R2 = 100
  • (d) R1 = 100, R2 = 250
  • (e) R1 = 150, R2 = 250

14
When a particular high-level language code
fragment is compiled, it produces the following
set of machine instructions.
Q15
  • MOV R1, j
  • BEQZ R1, label
  • MOV R2, 0
  • JMP exit
  • label: MOV R2, R3
  • exit:

15
Assuming that values p and q are stored in
registers R2 and R3 respectively, to which
high-level language code fragment does the above
machine code closely correspond?
Q15
(a) if (j != 0) p = q else p = 0
(b) if (j == 0) q = p else q = 0
(c) if (j == 0) p = q else p = 0
(d) if (j == 0) p = 0 else p = q
(e) if (j != 0) p = 0 else p = q
16
MOV R1, j /* Move value j to R1 */
BEQZ R1, label /* Branch to label if R1 == 0 */
MOV R2, 0 /* Move immediate value 0 to R2 */
JMP exit /* Goto exit */
label: MOV R2, R3 /* Move value of R3 to R2 */
exit:
Therefore we can see that if R1 == 0 then P = Q,
else (R1 != 0) P = 0.
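The branch logic above can be mirrored in a high-level sketch. This is illustrative and not from the slides; `compiled_fragment` is a hypothetical name, with p held in R2 and q in R3 as the question states.

```python
# Mirror of the machine code: BEQZ branches when the tested register is zero.
def compiled_fragment(j, q):
    r1 = j           # MOV R1, j
    if r1 == 0:      # BEQZ R1, label  (branch taken when j == 0)
        r2 = q       # label: MOV R2, R3
    else:
        r2 = 0       # MOV R2, 0 ; JMP exit  (fall-through path)
    return r2        # exit: p is left in R2

print(compiled_fragment(0, 7))  # 7  (j == 0, so p = q)
print(compiled_fragment(3, 7))  # 0  (j != 0, so p = 0)
```

This matches option (c): if (j == 0) p = q else p = 0.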
17
When a particular high-level language code
fragment is compiled, it produces the following
set of machine instructions.
Q15
  • MOV R1, j
  • BEQZ R1, label
  • MOV R2, 0
  • JMP exit
  • label: MOV R2, R3
  • exit:

18
Q15
Assuming that values p and q are stored in
registers R2 and R3 respectively, to which
high-level language code fragment does the above
machine code closely correspond?
(a) if (j != 0) p = q else p = 0
(b) if (j == 0) q = p else q = 0
(c) if (j == 0) p = q else p = 0
(d) if (j == 0) p = 0 else p = q
(e) if (j != 0) p = 0 else p = q
Answer: (c)
19
Which of the following observations have led to
the development of cache-main memory hierarchy
system in computers?

Q16
(a) There is a higher probability of repeated
access to any data item that has been accessed in
the recent past. (b) There is a higher
probability of access to any data item that is
physically closer to any other data item that has
been accessed in the recent past.
20
Which of the following observations have led to
the development of cache-main memory hierarchy
system in computers?
Q16
  • (c) Combining a smaller but high-speed memory and
    a larger but slower memory gives better overall
    performance at a lower cost.
  • (d) The operating system requires a paging system
    to handle the large virtual memory.
  • (e) The development of RISC architectures
    requires such a hierarchical memory system to
    exist.

21
Cache
  • Small amount of fast memory
  • Sits in between main memory and CPU

Figure 1
22
Cache operation- overview
  • CPU requests contents of memory location
  • Check cache for this data
  • If present, get from cache (fast)
  • If not present, read required block from main
    memory to cache
  • Then deliver from cache to CPU
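The lookup flow above can be sketched in a few lines. This is a minimal illustration, not from the slides: `read`, `BLOCK_SIZE`, and the dict-of-blocks cache are assumptions chosen for clarity.

```python
# On a miss, fetch the whole block from main memory into the cache;
# on a hit, serve the word directly from the cache.
BLOCK_SIZE = 4

def read(address, cache, main_memory):
    """Return the word at `address`, filling the cache on a miss."""
    block = address // BLOCK_SIZE
    if block not in cache:                      # miss: read required block
        start = block * BLOCK_SIZE
        cache[block] = main_memory[start:start + BLOCK_SIZE]
    return cache[block][address % BLOCK_SIZE]   # deliver from cache to CPU

main_memory = list(range(100))
cache = {}
print(read(10, cache, main_memory))  # 10 (miss: block 2 loaded)
print(read(11, cache, main_memory))  # 11 (hit: same block)
```

The second access is served from the cache because address 11 falls in the block already loaded by the first access, which is the spatial-locality payoff described on the next slide.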

23
Locality of Reference
  • During the course of the execution of a program,
    memory references tend to cluster
  • e.g. loops, arrays
  • Temporal Locality
  • There is a higher probability that an item that
    has been accessed will be accessed again.
  • Spatial Locality
  • There is a higher probability that an item
    accessed in the near future will be close to an
    item that has been accessed recently.

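The loop/array example mentioned above can be made concrete. This sketch is not from the slides; the list and trace are illustrative.

```python
# Reference pattern of a simple array-sum loop: successive iterations touch
# adjacent array elements (spatial locality) and re-touch the same
# accumulator variable (temporal locality).
data = list(range(8))
total = 0
accesses = []                 # indices of `data` touched, in order
for i in range(len(data)):
    accesses.append(i)        # spatial: i, i+1, ... are adjacent addresses
    total += data[i]          # temporal: `total` is reused every pass

print(total)     # 28
print(accesses)  # [0, 1, 2, 3, 4, 5, 6, 7] -- clustered and sequential
```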
24
Which of the following observations have led to
the development of cache-main memory hierarchy
system in computers?
Q16

(a) There is a higher probability of repeated
access to any data item that has been accessed in
the recent past. (b) There is a higher
probability of access to any data item that is
physically closer to any other data item that has
been accessed in the recent past.
25
Which of the following observations have led to
the development of cache-main memory hierarchy
system in computers?
Q16
  • (c) Combining a smaller but high-speed memory and
    a larger but slower memory gives better overall
    performance at a lower cost.
  • (d) The operating system requires a paging system
    to handle the large virtual memory.
  • (e) The development of RISC architectures
    requires such a hierarchical memory system to
    exist.

26
Which of the following statement(s) is/are true
with regard to simple instruction pipelining?
Q20
  • (a) Pipelining aims to achieve a CPI of 1.
  • (b) Pipelining requires more hardware resources
    on the CPU.
  • (c) Branch instructions cause disruption to
    pipeline operation.
  • (d) Pipelining aims to achieve a CPI greater than
    1.
  • (e) Memory referencing instructions may cause
    disruption to pipeline efficiency.

27
Instruction Pipelining
  • Analogy: similar to the use of an assembly line
    in a manufacturing plant.
  • A product goes through a series of production
    stages, with different stages worked on
    simultaneously.
  • This process is termed pipelining.
  • Exploit the fact that an instruction cycle has a
    number of steps: apply the same concept as in a
    manufacturing assembly line to instruction
    execution.

28
Instruction Pipelining
  • E.g. subdivide instruction processing into two
    stages: fetch and execute
  • Use the time during execution where there is no
    memory access to fetch the next instruction

29
Instruction Pipelining
  • If fetch and execute were of equal duration, the
    instruction cycle time would be halved.
  • This is unlikely, for the following 2 reasons:
  • Execution time is generally longer than fetch
    time: execution involves reading and storing
    operands plus performing operations, so the fetch
    stage may have to wait before it can empty its
    buffer.
  • A conditional branch instruction makes the
    address of the next instruction to be fetched
    unknown: fetch must wait until execution decides
    on the next instruction address.

30
Instruction Pipelining
  • Overcome the problem of conditional branches by
    guessing.
  • Rule: when a conditional branch is passed from
    fetch to execute, fetch the next instruction
    after the branch; if the branch is not taken,
    use it, otherwise discard it and fetch the new
    target instruction.

31
Instruction Pipelining (contd..)
  • More stages to gain further speed-up:
  • Fetch Instruction (FI)
  • Decode Instruction (DI)
  • Calculate Operands (CO): calculate the effective
    address of each source operand (address
    calculations, e.g. register indirect)
  • Fetch Operands (FO): fetch operands from memory
  • Execute Instruction (EI)
  • Write Operands (WO)

32
Instruction Pipelining
  • 6-stage pipelining
  • Assumptions:
  • Each stage is of equal duration
  • Each instruction goes through all 6 stages
  • All 6 stages can be carried out in parallel
  • No memory conflicts
  • FI, FO, and WO involve memory access, and the
    diagram assumes they can be carried out in
    parallel, but most memory systems will NOT permit
    that. However, the value may be in cache, or the
    FO or WO stage may be null.
  • See Figure 2: execution time for 9 instructions
    drops from 54 time units to 14 time units
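Under the ideal assumptions above, the Figure 2 numbers follow from two small formulas. A minimal sketch, not from the slides; the function names are illustrative.

```python
# With k equal-duration stages and n instructions:
# sequential execution takes n * k time units, while an ideal pipeline
# takes k units to fill and then retires one instruction per unit.
def sequential_time(n, k):
    return n * k

def pipelined_time(n, k):
    return k + (n - 1)

print(sequential_time(9, 6))  # 54
print(pipelined_time(9, 6))   # 14
```

For 9 instructions through 6 stages this reproduces the drop from 54 to 14 time units cited on the slide.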

33
Figure 2
34
Which of the following statement(s) is/are true
with regard to simple instruction pipelining?
Q20
  • (a) Pipelining aims to achieve a CPI of 1.
  • (b) Pipelining requires more hardware resources
    on the CPU.
  • (c) Branch instructions cause disruption to
    pipeline operation.
  • (d) Pipelining aims to achieve a CPI greater than
    1.
  • (e) Memory referencing instructions may cause
    disruption to pipeline efficiency.

35
Contact
  • External Degree Unit (EDU) of the University of
    Colombo School of Computing
  • No. 221/2A, Dharmapala Mawatha,
  • Colombo 7.
  • Phone: 074-720511
  • Fax: 074-720512
  • http://www.bit.lk

36
Thank you