Computer Systems Lab Courses - PowerPoint PPT Presentation
(Transcript; provided by tjh5, https://www.tjhsst.edu)

Transcript and Presenter's Notes
1
Computer Systems Lab Courses
2
Programming Languages
  • Languages are
  • an abstraction used by the programmer to
    express an idea
  • an interface to the underlying computer
    architecture

Sebesta Fig. 1.2
3
Why study Programming Languages?
  • Increases ability to express ideas in a language
  • wide variety of programming features
  • Improves ability to choose appropriate language
  • Each language has strengths and weaknesses in
    terms of expressing ideas
  • Improves ability to learn new languages
  • different paradigms, different features
  • What does the future of programming languages
    hold?
  • Improves understanding of significance of
    implementation
  • Provides ability to design new languages
  • Domain specific languages increasingly popular

4
Language Paradigms
  • Imperative
  • Central features are variables, assignment
    statements, and iteration
  • Ex: C, Pascal, Fortran
  • Object-oriented
  • Encapsulate data objects with processing
  • Inheritance and dynamic type binding
  • Grew out of imperative languages
  • Ex: C++, Java
  • Functional
  • Main means of making computations is by applying
    functions to given parameters
  • Ex: LISP, Scheme, Haskell (see the sketch below)
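A minimal sketch (in C, not from the slides) contrasting the two styles on the same computation: the imperative version relies on variables, assignment, and iteration, while the functional-style version uses only recursion and a conditional expression.

    /* imperative style: state mutated in a loop */
    int fact_imperative(int n)
    {
        int result = 1;
        for (int i = 2; i <= n; i++)   /* iteration + assignment */
            result = result * i;
        return result;
    }

    /* functional style: no assignment, just recursion */
    int fact_functional(int n)
    {
        return n <= 1 ? 1 : n * fact_functional(n - 1);
    }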

5
Language Paradigms
  • Logic
  • Declarative: rule-based, implicit control flow
  • Ex: Prolog
  • Dataflow
  • Declarative: model computation as information
    flow; implicit control flow
  • Inherently parallel
  • Event-Driven
  • Continuous loop with handlers that respond to
    events generated in unpredictable order, such as
    mouse clicks
  • Often an add-on feature
  • Ex: Java
  • Concurrent
  • Multiple interacting processes
  • Often an add-on feature
  • Ex: Java, High Performance Fortran (HPF), Linda

6
Programming Domains
  • Scientific applications
  • One of the earliest uses of computers
  • Large number of floating point computations
  • Long running
  • Imperative (Fortran, C) and Parallel (High
    Performance Fortran)
  • Business applications
  • Produce reports, use decimal numbers and
    characters
  • Increasingly web-centric (Java, Perl, XML-based
    languages)
  • Imperative (Cobol) and domain specific (SQL)
  • Artificial intelligence
  • Model human behavior and deduction
  • Symbol manipulation
  • Functional (Lisp) and Logical (Prolog)
  • Systems programming
  • Need efficiency because of continuous use
  • Parallel and event driven
  • Imperative (C)

7
Influences on Language Design
  • Programming methodologies
  • 1950s and early 1960s: simple applications;
    worry about machine efficiency
  • Late 1960s: people efficiency became important:
    readability, better control structures
  • Structured programming
  • Top-down design and step-wise refinement
  • Late 1970s: process-oriented to data-oriented
  • data abstraction
  • Middle 1980s: object-oriented programming

8
Compilers
(Diagram: a compiler translates the source language
through tokens, syntactic structure, and
syntactic/semantic structure into machine language;
the compiled program then maps input data to
output.)
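As a hedged, hypothetical illustration (not from the slides), the stages above for one tiny statement might look like this:

    /* Pipeline walk-through for the C statement:  x = a + 2;
       tokens:               ID(x)  '='  ID(a)  '+'  NUM(2)  ';'
       syntactic structure:  assign(x, plus(a, 2))   -- a parse tree
       semantic structure:   x and a checked as declared numeric variables
       machine language:     LOAD a; ADD #2; STORE x  (generic mnemonics) */
    int a, x;
    void stages(void) { x = a + 2; }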
9
Interpreters
(Diagram: an interpreter executes the source
language directly, mapping input data to output.)
10
What makes a language successful?
  • Expressive Power
  • Included features impact programmer use
  • Ease of use for novices
  • Pascal, Basic, Logo
  • Ease of Implementation
  • Excellent Compilers
  • Economics, Patronage, Legacy

11
Comparative Languages
12
FORTRAN - 1957
  • FORmula TRANslating systems
  • FORTRAN I - 1957
  • (FORTRAN 0 - 1954 - not implemented)
  • Designed by John Backus for the new IBM 704,
    which had index registers and floating point
    hardware
  • Environment of development
  • Computers were small and unreliable
  • Applications were scientific
  • No programming methodology or tools
  • Machine efficiency was most important

13
FORTRAN
  • FORTRAN 90 - 1990
  • Modules
  • Dynamic arrays
  • Pointers
  • Recursion
  • CASE statement
  • Parameter type checking

14
FORTRAN
  • Contributions
  • First widely used programming language
  • Changed the way people interacted with computers
  • Set the standard for compilers
  • Goal: generate machine code comparable to that
    written by machine-language programmers → a
    highly optimizing compiler
  • Much of the theory of compilers was developed
    during work on this compiler.

15
Sample Fortran
      subroutine defcolor(rgb, nframe)
      implicit none
      integer nframe
      integer ihpixf, jvpixf
      parameter(ihpixf = 256, jvpixf = 256)    ! pixel size
      character*1 rgb(3,ihpixf,jvpixf)         ! RGB data array
      integer i, j, idummy                     ! local
      do 100 j = 1, jvpixf
      do 100 i = 1, ihpixf
        idummy = i*3 + nframe + j*2
        idummy = mod(idummy, 256)   ! assuming color depth is 256 (0--255)
        rgb(1,i,j) = char(idummy)   ! red
        idummy = i*1 + nframe + j*3
        idummy = mod(idummy, 256)
        rgb(2,i,j) = char(idummy)   ! green
        idummy = i*5 + nframe + j*7
        idummy = mod(idummy, 256)
        rgb(3,i,j) = char(idummy)   ! blue
 100  continue
      end
16
Sample Fortran
      real*4 one, eps, ht, tf
      real*8 one8, eps8
      one  = 1.
      one8 = 1.
      eps  = 1.
      eps8 = 1.
      ht = 100000.
      tf = 24.
      print *, 'Test for precision of real*4,',
     +         ' based on 1+eps > 1'
 10   eps = eps/2.
      if (one .ne. one+eps) go to 10
      eps = 2.*eps
      print *, ' relative precision is ', eps
      print *
      print *, 'Test for precision of real*4,',
     +         ' based on 100000+eps > 100000'
      eps = 1.
 15   eps = eps/2.
      if (ht .ne. ht+eps) go to 15
      eps = 2.*eps
      print *, ' relative precision is ', eps
      print *
      print *, 'Test for precision of real*4,',
     +         ' based on 24+eps > 24'
      eps = 1.
 17   eps = eps/2.
      if (tf .ne. tf+eps) go to 17
      eps = 2.*eps
      print *, ' relative precision is ', eps
      print *
      print *, 'Test for precision of real*8,',
     +         ' based on 1+eps > 1'
 20   eps8 = eps8/2.
      if (one8 .ne. one8+eps8) go to 20
      eps8 = 2.*eps8
      print *, ' relative precision is ', eps8
      end

17
LISP - 1959
  • LISt Processing language
  • AI research needed a language that
  • Processed data in lists (rather than arrays)
  • Allowed symbolic computation (rather than
    numeric)
  • Only two data types: atoms and lists
  • Syntax is based on lambda calculus
  • No variables or assignment
  • Control via recursion and conditional expressions
  • (A B C D): apply function A to arguments B, C, D

18
LISP
  • Pioneered functional programming
  • First interpreters were slow
  • COMMON LISP and Scheme are contemporary dialects
    of LISP
  • ML, Miranda, and Haskell are related languages

19
LISP Sample
  • Problem: remove the first occurrence of atom A
    from list L

    (define (remove L A)
      (cond ((null? L) '())
            ((eq? (car L) A) (cdr L))           ; match found!
            (else (cons (car L)
                        (remove (cdr L) A)))))  ; keep searching

20
BASIC - 1964
  • Beginners All-purpose Symbolic Instruction Code
  • Designed by Kemeny and Kurtz at Dartmouth
  • Design Goals
  • Easy to learn and use for non-science students
  • pleasant and friendly
  • Fast turnaround for homework
  • Free and private access
  • User time is more important than computer time
  • Current popular dialect: Visual BASIC
  • First widely used language on time-sharing
    systems

21
APL and SNOBOL
  • Characterized by dynamic typing and dynamic
    storage allocation
  • APL (A Programming Language) 1962
  • Designed as a hardware description language (at
    IBM)
  • Highly expressive (many operators, for both
    scalars and arrays of various dimensions)
  • Programs are very difficult to read
  • SNOBOL (1964)
  • Designed as a string manipulation language (at
    Bell Labs)
  • Powerful operators for string pattern matching

22
Snobol Example
  • Find biggest words and numbers in a text string

        BIGP = *P $ TRY *GT(SIZE(TRY),SIZE(BIG)) . BIG *FAIL
        STR = 'IN 1964 NFL ATTENDANCE JUMPED TO 4,807,884 '
+             'AN INCREASE OF 401,810.'
        P = SPAN('0123456789,')
        BIG =
        STR BIGP
        OUTPUT = 'LONGEST NUMBER IS ' BIG
        P = SPAN('ABCDEFGHIJKLMNOPQRSTUVWXYZ')
        BIG =
        STR BIGP
        OUTPUT = 'LONGEST WORD IS ' BIG
END

23
ALGOL 68 - 1968
  • From the continued development of ALGOL 60, but
    it is not a superset of that language
  • Features
  • User-defined data structures
  • Reference types
  • Dynamic arrays (called flex arrays)
  • Contribution
  • Saw even less use than ALGOL 60, BUT had a
    strong influence on subsequent languages,
    especially Pascal, C, and Ada

24
Important ALGOL Descendants
  • Pascal - 1971
  • Designed by Wirth, who quit the ALGOL 68
    committee (didn't like the direction)
  • Designed for teaching structured programming
  • Small, simple, nothing really new
  • From mid-1970s until the late 1990s, it was the
    most widely used language for teaching
    programming in colleges

25
Pascal Sample
    program fibonacci(input, output);
    type natural = 0..maxint;
    var fnm1, fnm2, fn, n, i : natural;
    begin
      readln(n);
      if n <= 1 then writeln(n)   { F0 = 0 and F1 = 1 }
      else begin                  { compute Fn }
        fnm2 := 0; fnm1 := 1;
        for i := 2 to n do begin
          fn := fnm1 + fnm2;
          fnm2 := fnm1;
          fnm1 := fn
        end;
        writeln(fn)
      end
    end.

26
Important ALGOL Descendants
  • C - 1972
  • Designed for systems programming (at Bell Labs
    by Dennis Ritchie)
  • Evolved primarily from B, but also ALGOL 68
  • Powerful set of operators, but poor type checking
  • Initially spread through UNIX

27
C Sample
    #include <stdio.h>
    int main()
    {
        int fnm1, fnm2, fn, n, i;
        scanf("%d", &n);
        if (n <= 1) printf("%d\n", n);   /* F0 = 0 and F1 = 1 */
        else {                           /* compute Fn */
            fnm2 = 0; fnm1 = 1;
            for (i = 2; i <= n; i++) {
                fn = fnm1 + fnm2;
                fnm2 = fnm1;
                fnm1 = fn;
            }
            printf("%d\n", fn);
        }
        return 0;
    }

28
Prolog - 1972
  • Developed at the University of Aix-Marseille by
    Colmerauer and Roussel, with some help from
    Kowalski at the University of Edinburgh
  • Applications in AI, DBMS
  • Based on formal logic
  • Non-procedural
  • Can be summarized as being an intelligent
    database system that uses an inferencing process
    to infer the truth of given queries
  • Inefficient execution

29
Prolog Sample
  • parent(X,Y) :- mother(X,Y).
  • parent(X,Y) :- father(X,Y).
  • sibling(X,Y) :- mother(M,X), mother(M,Y),
    father(D,X), father(D,Y), X \= Y.
  • haschildren(X) :- parent(X,_).
  • grandparent(X,Y) :- parent(X,M), parent(M,Y).
  • cousin(X,Y) :- parent(A,X), parent(B,Y),
    sibling(A,B).
  • mother(anne,mary).
  • mother(anne,liz).
  • mother(anne,susan).
  • mother(anne,virginia).
  • mother(elizabeth,russ).
  • mother(mabel,anne).
  • father(james,zelie).
  • father(james,harry).
  • father(james,ned).
  • father(john,will).
  • father(john,russell).
  • father(jim,rachel).
  • father(jim,maggie).
  • father(tim,patrick).

?- grandparent(anne,Y).
30
Ada - 1983 (began in mid-1970s)
  • Huge design effort, involving hundreds of people,
    much money, and about eight years
  • Environment: more than 450 different languages
    being used for DoD embedded systems (no software
    reuse and no development tools)
  • Named for Ada Lovelace (1815-1852), often
    considered the first programmer (worked with
    Charles Babbage).

31
Ada Sample
    package body ArrayCalc is
      function sum return integer is
        temp : integer;
      begin  -- Body of function sum
        temp := 0;
        for i in 1..v.sz loop
          temp := temp + v.val(i);
        end loop;
        v.sz := 0;
        return temp;
      end sum;

      procedure setval(arg : in integer) is
      begin
        v.sz := v.sz + 1;
        v.val(v.sz) := arg;
      end setval;
    end ArrayCalc;

    with Text_IO; use Text_IO;
    with ArrayCalc; use ArrayCalc;
    procedure main is
      k, m : integer;
    begin  -- of main
      get(k);
      while k > 0 loop
        for j in 1..k loop
          get(m); put(m,3); setval(m);
        end loop;
        new_line; put("SUM = ");
        put(ArrayCalc.sum,4); new_line;
        get(k);
      end loop;
    end main;
    package ArrayCalc is
      type Mydata is private;
      function sum return integer;
      procedure setval(arg : in integer);
    private
      size : constant := 99;
      type myarray is array(1..size) of integer;
      type Mydata is record
        val : myarray;
        sz : integer := 0;
      end record;
      v : Mydata;
    end ArrayCalc;

32
Smalltalk - 1972-1980
  • Developed at Xerox PARC, initially by Alan Kay,
    later by Adele Goldberg
  • First full implementation of an object-oriented
    language (data abstraction, inheritance, and
    dynamic type binding)

33
C++ - 1985
  • Developed at Bell Labs by Stroustrup
  • Evolved from C and SIMULA 67
  • Facilities for object-oriented programming, taken
    partially from SIMULA 67, were added to C
  • Also has exception handling
  • A large and complex language, in part because it
    supports both procedural and OO programming
  • Rapidly grew in popularity, along with OOP
  • ANSI standard approved in November, 1997

34
C++ Related Languages
  • Eiffel - a related language that supports OOP
  • (Designed by Bertrand Meyer - 1992)
  • Not directly derived from any other language
  • Smaller and simpler than C++, but still has most
    of the power
  • Delphi (Borland)
  • Pascal plus features to support OOP
  • More elegant and safer than C++

35
Java (1995)
  • Developed at Sun in the early 1990s
  • Based on C++
  • Significantly simplified (does not include
    struct, union, enum, pointer arithmetic, and half
    of the assignment coercions of C++)
  • Supports only OOP
  • Has references, but not pointers
  • Includes support for applets and a form of
    concurrency

36
Scripting Languages
  • JavaScript
  • Embedded in HTML and executed on the client side
    (i.e., by the web browser)
  • creates dynamic HTML documents
  • PHP
  • Embedded in HTML on the server side
  • Interpreted by server when a document in which it
    is embedded is requested.
  • Output of interpretation is HTML that replaces
    the PHP

37
Supercomputer Applications
38
Topics
  • Parallel computing and High Performance Computing
  • Clusters
  • MPI: Message Passing Interface
  • Passing messages among processes
  • OpenGL and computer graphics applications

39
Parallel Computing Overview
  • A high performance parallel computer is a
    computer that can solve large problems in a much
    shorter time than a single desktop computer.
  • It is characterized by fast CPUs, large memory, a
    high speed interconnect, and high speed
    input/output.
  • It can speed up computations both by making the
    sequential components run faster and by doing
    more operations in parallel.

40
  • High performance parallel computers are in demand
    because there is a need for tremendous
    computational capabilities in science,
    engineering, and business.
  • There are applications that require gigabytes of
    memory and gigaflops of performance.
  • Today, application scientists are striving for
    terascale performance, to permit an even larger
    class of problems to be solved.

41
  • Terascale refers to computers that perform more
    than one trillion floating-point operations per
    second, called "teraflops"
  • High performance parallel computers are used in a
    wide variety of disciplines.
  • Meteorologists use them for the prediction of
    tornadoes and thunderstorms
  • Computational biologists use them to analyze DNA
    sequences
  • Pharmaceutical companies use them in the design
    of new drugs

42
  • Oil companies use them for seismic exploration
  • Wall Street uses them for the analysis of
    financial markets
  • NASA uses them for aerospace vehicle design
  • The entertainment industry uses them for special
    effects in movies and commercials
  • The common characteristic of all these complex
    scientific and business applications is the need
    to perform computations on large datasets or
    large equations.

43
Parallelism in Computer Programs
  • It used to be thought that computer programs were
    sequential in nature
  • An algorithm is defined as the "sequence of
    steps" necessary to carry out a computation.
  • In the first 30 years of computer use, programs
    were run sequentially because of this thinking.
  • The 1980s saw great successes with parallel
    computers.
  • Dr. Geoffrey Fox published a book entitled
    Parallel Computing Works!, which helped lead to a
    reversal in thinking.

44
  • Parallel computing is what a computer does when
    it carries out more than one computation at a
    time using more than one processor.
  • If one processor can perform the arithmetic in
    time t, then ideally p processors can perform it
    in time t/p (e.g., 64 seconds on one processor
    would ideally become 8 seconds on 8 processors).
  • The benefit of parallelism is that it allows
    researchers to do computations on problems they
    previously were unable to solve.

45
Comparison of Parallel Computers
We compare the hardware components of parallel
computers:
  • kinds of processors
  • types of memory organization
  • flow of control
  • interconnection networks
We'll see what is common to these parallel
computers, and what makes each one of them unique.
46
Processors
There are three types of parallel computers:
  1. computers with a small number of powerful
     processors
  2. computers with a large number of less powerful
     processors
  3. computers that are medium scale, in between the
     two extremes
47
A Small Number of Powerful Processors
They are general-purpose computers that perform
especially well on applications that have large
vector lengths. Examples of this type of computer
are the Cray SV1 and the Fujitsu VPP5000.
48
A Large Number of Less Powerful Processors
Computers with a large number of less powerful
processors, called Massively Parallel Processors
(MPPs), typically have thousands of processors.
The processors are usually proprietary and
air-cooled. Because of the large number of
processors, the distance between the furthest
processors can be quite large, requiring a
sophisticated internal network that allows distant
processors to communicate with each other quickly.
49
Medium Scale Computers
Medium scale computers typically have hundreds of
processors. The processor chips are usually not
proprietary; rather, they are commodity processors
like the Pentium III. These are general-purpose
computers that perform well on a wide range of
applications. The most common example of this
class is the Linux Cluster.
50
Trends and Examples
Decade   Processor Type                    Computer Example
1970s    Pipelined, Proprietary            Cray-1
1980s    Massively Parallel, Proprietary   Thinking Machines CM2
1990s    Superscalar, RISC, Commodity      SGI Origin2000
2000s    CISC, Commodity                   Workstation Clusters
51-54
(No transcript: image-only slides)
55
Background on MPI
  • MPI - Message Passing Interface
  • Library standard defined by a committee of
    vendors, implementers, and parallel programmers
  • Used to create parallel SPMD programs based on
    message passing
  • 100% portable: one standard, many implementations
  • Available on almost all parallel machines in C
    and Fortran
  • Over 100 advanced routines, but only 6 basic ones

56
Key Concepts of MPI
  • Used to create parallel SPMD programs based on
    message passing
  • Normally the same program is running on several
    different processors
  • Processors communicate using message passing
  • Typical methodology:

start job on n processors
do i = 1 to j
    each processor does some calculation
    pass messages between processors
end do
end job
57
Summary
  • MPI is used to create parallel programs based on
    message passing
  • Usually the same program is run on multiple
    processors
  • The 6 basic calls in MPI are (a C sketch of the
    send/receive pair follows this list):
  • MPI_INIT( ierr )
  • MPI_COMM_RANK( MPI_COMM_WORLD, myid, ierr )
  • MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
  • MPI_Send(buffer, count,MPI_INTEGER,destination,
    tag, MPI_COMM_WORLD, ierr)
  • MPI_Recv(buffer, count, MPI_INTEGER,source,tag,
    MPI_COMM_WORLD, status,ierr)
  • call MPI_FINALIZE(ierr)
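A hedged C sketch (not from the slides) of the send/receive pair above; in the C bindings the Fortran-style ierr argument becomes the function's return value. Process 1 sends one integer to process 0.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 1) {
            value = 42;   /* destination 0, tag 0 */
            MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        } else if (rank == 0) {
            /* source 1, tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &status);
            printf("received %d\n", value);
        }
        MPI_Finalize();
        return 0;
    }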

58
MPI helloworld.c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int numtasks, rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("Hello World from process %d of %d\n",
           rank, numtasks);
    MPI_Finalize();
    return 0;
}
59
MPI_COMM_WORLD
  • MPI_INIT defines a communicator called
    MPI_COMM_WORLD for every process that calls it.
  • All MPI communication calls require a
    communicator argument
  • MPI processes can only communicate if they share
    a communicator.
  • A communicator contains a group which is a list
    of processes
  • Each process has its rank within the
    communicator
  • A process can have several communicators (see
    the sketch below)
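As a hedged sketch of that last point (not from the slides): MPI_Comm_split can give every process a second communicator by partitioning MPI_COMM_WORLD.

    #include <mpi.h>

    /* Splits MPI_COMM_WORLD into two new communicators, one holding
       the even-ranked processes and one the odd-ranked ones. */
    void make_half_communicator(MPI_Comm *half)
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        /* processes with the same color (rank % 2) share the new
           communicator; rank is reused as the ordering key */
        MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, half);
    }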

60
Sample Run and Output
  • A Run with 3 Processes
  • manjra> mpirun -np 3 -machinefile machines.list
    helloworld
  • Hello World from process 0 of 3
  • Hello World from process 1 of 3
  • Hello World from process 2 of 3
  • A Run by default
  • manjra> helloworld
  • Hello World from process 0 of 1

61
Sample Run and Output
  • A Run with 6 Processes
  • manjra> mpirun -np 6 -machinefile machines.list
    helloworld
  • Hello World from process 0 of 6
  • Hello World from process 3 of 6
  • Hello World from process 1 of 6
  • Hello World from process 5 of 6
  • Hello World from process 4 of 6
  • Hello World from process 2 of 6
  • Note: Process execution need not be in
    process-number order.

62
Image Processing: Edge Detection

63
Parallel Game of Life
(ASCII snapshots of a Game of Life grid across
generations: '.' marks a dead cell, 'o' a live
cell.)
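A minimal serial sketch of one generation update (in C, not from the slides); a parallel version would give each process a strip of rows and exchange boundary rows with MPI messages.

    #include <string.h>

    /* One Game of Life step on a w x h grid (cells are 0 or 1).
       Border cells are kept dead for simplicity. */
    void life_step(int w, int h, const unsigned char *cur,
                   unsigned char *next)
    {
        memset(next, 0, (size_t)w * h);   /* borders stay dead */
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                int n = 0;                /* count live neighbours */
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if (dx || dy)
                            n += cur[(y + dy) * w + (x + dx)];
                /* live with 3 neighbours, or 2 if already alive */
                next[y * w + x] = (n == 3) ||
                                  (n == 2 && cur[y * w + x]);
            }
    }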
64
Computer Architecture
65
Classification of Computer Architectures
  • 1. Basic SISD Architectures: Traditional
    Machines
  • 1. Zero Address: Stack Approach
  • 2. One Address: Accumulator Approach
  • 3. Two Address: Register/Memory Address
    Approach (see the encoding sketch after this
    list)
  • 2. Advanced Architectures: RISC Machines
  • 3. SIMD Architectures: Vector Pipelines and
    Parallelism
  • 4. MIMD Architectures: Distributed Processing
    (PVM), Hypercube Architecture
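A hedged illustration (not from the slides) of how the three basic instruction-format approaches might encode the same statement; the mnemonics are generic pseudo-assembly, not from any particular machine, shown as comments on a C statement.

    /* Encoding a = b + c under each approach:
       zero address (stack):       PUSH b; PUSH c; ADD; POP a
       one address (accumulator):  LOAD b; ADD c; STORE a
       two address (register/mem): MOV r1, b; ADD r1, c; MOV a, r1 */
    int a, b, c;
    void encode_example(void) { a = b + c; }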

66
Digital Logic Level
  • 1. Logic gates, Boolean Algebra, Circuit
    Design, Multiplexers
  • 2. Parity, Error Correction (Hamming Code),
    and Data Compression (Huffman Code); a parity
    sketch follows below
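A small hedged sketch (in C, not from the slides) of the simplest of those schemes, an even-parity bit: the bit is chosen so the total number of 1s, data plus parity, is even.

    /* Even-parity bit for one byte: XOR of all its bits. */
    unsigned parity_bit(unsigned char byte)
    {
        unsigned p = 0;
        while (byte) {
            p ^= byte & 1;   /* fold each bit into the running parity */
            byte >>= 1;
        }
        return p;            /* 1 if the byte has an odd number of 1s */
    }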

67
Microprogramming and Microarchitectures
  • 1. Microprocessors and instruction formats
  • 2. Opcode design: expanding opcodes
  • 3. Target machine versus hardware
    considerations
  • 4. Registers and word length
  • 5. Example: Mic-1 Microprogram Level
    Interpreter

68
Assembly Language Level
  • 1. Assembly Language Instruction Sets,
    Introduction to SPIM
  • 2. Branching and Loops
  • 3. Addressing Techniques
  • 4. Input and Output
  • 5. Program Tuning