20-755: The Internet, Lecture 1: Introduction (PowerPoint slide transcript)

1
20-755 The Internet, Lecture 1: Introduction
  • David O'Hallaron
  • School of Computer Science and
  • Department of Electrical and Computer Engineering
  • Carnegie Mellon University
  • Institute for eCommerce, Summer 1999

2
Today's lecture
  • Course overview (25 min)
  • Internet history (25 min)
  • break (10 min)
  • Research overview (50 min)

3
Course Goals
  • Understand the basic Internet infrastructure
    - review of basic computer system and internetworking concepts,
      TCP/IP protocol suite
  • Understand how this infrastructure is used to provide Internet
    services
    - client-server programming model
    - existing Internet services
    - building secure, scalable, and highly available services
  • Understand how to write Internet programs
    - use DNS and HTTP to map a part of the CMU Internet
    - build a server that provides an interesting Internet service
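The "map a part of the CMU Internet" goal rests on programmatic DNS lookups. A minimal sketch in Python (the homeworks themselves use Perl5; this is only an illustration, and `localhost` stands in for real CMU hostnames to keep it self-contained):

```python
# Minimal DNS lookup sketch using only the Python standard library.
# In the homework, real hostnames (e.g. under cmu.edu) would be resolved;
# localhost is used here so the sketch runs without network access.
import socket

def resolve(host):
    """Return the IPv4 address that a DNS lookup yields for `host`."""
    return socket.gethostbyname(host)

# Mapping a set of hosts is just resolving each name in turn.
for host in ["localhost"]:
    print(host, "->", resolve(host))
```

Walking a network this way means iterating the same lookup over a list of candidate names and recording which ones resolve.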

4
Teaching approach
  • Approach the Internet from a host-centric
    viewpoint
  • How the Internet is used to provide services.
  • Complements the network-centric viewpoint of
    20-770 Communications and Networking.
  • Students learn best by doing
  • In our case, this means programming.

5
Course organization
  • 14 lectures
  • Readings from the textbook and supplementary
    readings are posted beforehand.
  • Guest lecture: Bruce Maggs, SCS Assoc Prof and VP
    for Research at Akamai, a Boston-based Internet
    startup.
  • Evaluation
    - Class participation (10%)
    - Two programming homeworks (20%) (groups of up to 2)
    - Programming project (50%) (groups of up to 2)
    - Final exam (20%)
  • Office Hours
    - Mon 2:00-3:30
    - These are nominal times. Visit anytime my door is open.

6
Programming assignments
  • Will be done on euro.ecom.cmu.edu
  • Pentium-class PC server running Linux
  • Homeworks will use Perl5.
  • Project can use language of your choice.
  • Question
  • Does the class need additional tutoring in
    editing and running Perl5 programs on a Unix box?

7
Scheduling issues
  • We'll need to double up on lectures (10:30-12:20
    and 1:30-3:20) on three different days
  • Mon July 12
  • Fri July 16
  • Fri July 23
  • No class Fri Aug 6.

8
Course coverage
  • Intro to computer systems (2 lectures)
  • Review of internetworking (2 lectures)
  • Client-server computing (1 lecture)
  • Web technology (2 lectures)
  • Other Internet applications (1 lecture)
  • Secure servers (1 lecture)
  • Scalable and available servers (2 lectures)
  • RPC-based computing (1 lecture)
  • Internet startup guest lecture

9
Internet history
  • Sources
    - Leiner et al., "A Brief History of the Internet,"
      www.isoc.org/internet-history/brief.html
    - R. H. Zakon, "Hobbes' Internet Timeline," v4.1,
      www.isoc.org/guest/zakon/Internet/History/HIT.html
    - D. Comer, The Internet Book, 2nd Edition, Prentice-Hall, 1997.

10
ARPANET Origins
  • 1962
  • J.C.R. Licklider (MIT) describes his "Galactic
    Network" concept.
  • Licklider becomes head of computer research at the
    Defense Advanced Research Projects Agency (DARPA) and
    convinces his eventual successor, Lawrence Roberts
    (MIT), among others, of the importance of the
    concept.
  • 1964
  • Leonard Kleinrock (MIT) publishes first book on
    packet switching.
  • 1965
  • Roberts and Thomas Merrill build first wide-area
    network (using a dial-up phone line!) between MA
    and CA.
  • 1967
  • Roberts (now at DARPA) publishes plan for
    ARPANET, running at a blistering rate of 50
    kbps.

11
ARPANET Origins (cont)
  • 1968
  • DARPA issues RFQ for the packet switch component.
  • BBN (led by Frank Heart) wins contract and
    designs switch called an Interface Message
    Processor (IMP)
  • Bob Kahn (DARPA) works on overall ARPANET arch.
  • Roberts and Howard Frank (Network Analysis Corp)
    work on network topology and economics.
  • Kleinrock (UCLA) builds network measurement
    system.
  • 1969
  • First IMP installed at UCLA (first ARPANET node).
  • Nodes added at SRI, UCSB, and Utah.
  • By the end of the year the 4-node ARPANET is
    working, with 56 kbps lines supplied by AT&T.

12
ARPANET Origins (cont)
  • 1970
  • BBN, RAND, and MIT added to ARPANET.
  • Network Working Group (NWG), under Steve Crocker,
    designed initial host-to-host protocol (NCP).
  • 1971
  • 15 hosts: UCLA, SRI, UCSB, Utah, BBN, MIT, RAND,
    SDC, Harvard, Lincoln Labs, UIUC, CWRU, CMU,
    NASA/Ames.
  • Ray Tomlinson (BBN) writes first ARPANET email
    program (origin of the @ sign).
  • email becomes the first Internet killer app.

13
Birth of Internetworking
  • 1972
  • Kahn (DARPA) introduces the idea of "open
    architecture networking"
    - Each network must stand on its own, with no
      internal changes required to connect to the
      Internet.
    - Communications would be on a best-effort basis.
    - "Black boxes" (later called gateways and routers)
      would be used to connect the networks.
    - No global control at the operations level.
  • 1973
  • Metcalfe and Boggs (Xerox) develop Ethernet.
  • 1974
  • Kahn and Vint Cerf (Stanford) publish first
    details of TCP, which is later split into TCP and
    IP in 1978.

14
Birth of Internetworking
  • 1980
  • Berkeley releases open-source BSD Unix with a
    TCP/IP implementation.
  • 1982
  • DARPA establishes TCP/IP as the protocol suite
    for ARPANET, offering first definition of an
    internet.
  • 1983
  • Jan 1: ARPANET switches from NCP to TCP/IP.
  • 1984
  • Mockapetris (USC/ISI) invents DNS.
  • Number of ARPANET hosts surpasses 1,000.
  • 1985
  • symbolics.com becomes first registered domain
    name.
  • other firsts: cmu.edu, purdue.edu, rice.edu,
    ucla.edu, css.gov, mitre.org

15
Birth of Internetworking
  • 1986
  • NSFNET backbone created (56 Kbps) between 5
    supercomputing sites (Princeton, Pittsburgh, San
    Diego, Ithaca, Urbana), allowing explosion of
    university sites.
  • 1988
  • Internet worm attack
  • NSFNET backbone upgraded to T1 (1.544 Mbps).
  • 1989
  • Number of hosts breaks 100,000.
  • 1990
  • ARPANET ceases to exist.
  • world.std.com becomes first commercial dial-up
    ISP.

16
The Web changed everything...
  • 1991
  • Tim Berners-Lee (CERN) invents the World Wide Web
    (an HTTP server and a text-based browser)
  • NSFNET backbone upgraded to T3 (44.736 Mbps).
  • 1993
  • Mosaic WWW browser developed by Marc Andreessen
    (UIUC)
  • 1995
  • WWW traffic surpasses ftp as the source of
    greatest Internet traffic.
  • Netscape goes public.
  • NSFNET decommissioned and replaced by
    interconnected commercial network providers.
  • 1999
  • MCI/Worldcom upgrades its US backbone to 2.5Gbps.

17
Internet Domain Survey (www.isc.org)
18
Summary
  • The Internet has had an enormous impact on the
    world economy and day-to-day lives.
  • mechanism for world-wide information
    dissemination.
  • medium for collaboration and interaction without
    regard to geographic location.
  • One of the most successful examples of
    government, university, and business partnership.
  • Possible only because of sustained government
    investment and commitment to research and
    development.
  • Successful because of commitment by passionate
    researchers to "rough consensus and working code"
    (David Clark, MIT)

19
Break time!
20
Dv: A toolkit for visualizing massive remote
datasets
  • David O'Hallaron
  • School of Computer Science and
  • Department of Electrical and Computer Engineering
  • Carnegie Mellon University
  • Institute for eCommerce, Summer 1999

21
Internet service models
(diagram: a client sends a request to a server, which returns a response)
  • Traditional lightweight service model
    - small to moderate amount of computation to satisfy requests
    - e.g., serving web pages, stock quotes, online trading, search
      engines
  • Proposed heavyweight service model
    - massive amounts of computation to satisfy requests
    - e.g., scientific visualization, data mining, medical imaging
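The lightweight request/response cycle in the diagram can be sketched with plain TCP sockets. This is an illustrative Python sketch, not course code; the payload, the "echo" service, and the use of an OS-assigned port are all arbitrary choices:

```python
# Minimal TCP client/server round trip: one request, one small
# computation (prefixing "echo: "), one response.
import socket
import threading

def handle_one(srv):
    """Server side: accept one connection, answer one request."""
    conn, _ = srv.accept()
    request = conn.recv(1024)           # read the client's request
    conn.sendall(b"echo: " + request)   # lightweight work -> response
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))              # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=handle_one, args=(srv,))
t.start()

# Client side: connect, send the request, read the response.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"quote AAPL")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply)                            # b'echo: quote AAPL'
```

The heavyweight model keeps the same request/response shape; what changes is the cost of the work between `recv` and `sendall`.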

22
(No Transcript)
23
Quake Project
  • Carnegie Mellon
  • David O'Hallaron (CS and ECE)
  • Jacobo Bielak (PI) and Omar Ghattas (CivE)
  • University of California Berkeley
  • Jonathan Shewchuk (EECS)
  • Southern California Earthquake Center
  • Steve Day and Harold Magistrale (San Diego State)
  • Kogakuin University, Tokyo
  • Yoshi Hisada

24
Teora, Italy 1980
25
San Fernando Valley
26
San Fernando Valley (top view)
(diagram: hard rock region and soft soil basin, with the epicenter marked x)
27
San Fernando Valley (side view)
(diagram: soft soil layer over hard rock)
28
San Fernando Valley (side view)
(diagram: soft soil layer over hard rock)
29
Initial node distribution
30
Unstructured mesh
31
Unstructured mesh (top view)
32
Partitioned unstructured finite element mesh of
San Fernando
(diagram: mesh partition with nodes and elements labeled)
33
Communication graph
Vertices = processors; edges = communications
34
Quake solver code
NODEVECTOR3 disp[3], M, C, M23;
MATRIX3 K;

/* matrix and vector assembly */
FORELEM(i) { ... }

/* time integration loop */
for (iter = 1; iter <= timesteps; iter++) {
  MV3PRODUCT(K, disp[dispt], disp[disptplus]);
  disp[disptplus] *= -IP.dt * IP.dt;
  disp[disptplus] += 2.0 * M * disp[dispt]
      - (M - IP.dt / 2.0 * C) * disp[disptminus] - ...;
  disp[disptplus] /= (M + IP.dt / 2.0 * C);
  i = disptminus; disptminus = dispt;
  dispt = disptplus; disptplus = i;
}
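The solver code above is a central-difference time integration of the damped wave equation M u'' + C u' + K u = 0 with lumped (diagonal) mass M and damping C, so each step needs only one sparse matrix-vector product with K. A hedged NumPy sketch of the same update rule, with made-up sizes and matrices standing in for the Archimedes-generated data structures:

```python
# Central-difference update with lumped M and C, as in the Quake solver.
# n, dt, steps, and the matrices are illustrative placeholders.
import numpy as np

n, dt, steps = 4, 0.01, 3
M = np.ones(n)               # lumped (diagonal) mass
C = 0.1 * np.ones(n)         # lumped (diagonal) damping
K = np.eye(n)                # stand-in stiffness matrix (sparse in reality)
d_prev = np.zeros(n)         # displacement at t - dt
d_curr = np.full(n, 1e-3)    # displacement at t

for _ in range(steps):
    Kd = K @ d_curr                                   # the per-step SMVP
    d_next = (-dt * dt * Kd + 2.0 * M * d_curr
              - (M - 0.5 * dt * C) * d_prev) / (M + 0.5 * dt * C)
    d_prev, d_curr = d_curr, d_next                   # rotate time levels

print(d_curr)
```

Because M and C are diagonal, the divide is elementwise and no linear solve is needed per step, which is what makes the explicit scheme cheap.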
35
Archimedes
www.cs.cmu.edu/quake
(diagram: a problem geometry (.poly) and a finite element algorithm
(.arch, with statements such as MVPRODUCT(A,x,w), DOTPRODUCT(x,w,xw),
r = r/xw) are compiled into C code (.c) and mesh files (.node, .ele,
.part, .pack), yielding an a.out that runs on the parallel system)
36
Northridge quake simulation
  • 40 seconds of an aftershock from the Jan 17, 1994
    Northridge quake in San Fernando Valley of
    Southern California.
  • Model
  • 50 x 50 x 10 km region of San Fernando Valley.
  • 13,422,563 nodes, 76,778,630 linear tetrahedral
    elements, 1 Hz frequency resolution, 20 meter
    spatial resolution.
  • Simulation
  • 0.0024s timestep
  • 16,666 timesteps (40M x 40M SMVP each timestep).
  • 15 GBytes of DRAM.
  • 6.5 hours on 256 PEs of Cray T3D (150 MHz 21064
    Alphas, 64 MB/PE).
  • Comp: 16,679s (71%), Comm: 575s (2%), I/O: 5,995s (25%)
  • 80 trillion (10^12) flops (sustained 3.5 GFLOPS).
  • 800 GB/575s (burst rate of 1.4 GB/s).
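The headline numbers above check out arithmetically; a quick sketch using only the figures quoted on the slide:

```python
# Back-of-the-envelope check of the Northridge run numbers quoted above.
wall_s = 6.5 * 3600                    # 6.5 hours -> 23,400 s wall clock
comp_s, comm_s, io_s = 16_679, 575, 5_995
print(comp_s + comm_s + io_s)          # 23,249 s, close to the wall clock

gflops = 80e12 / wall_s / 1e9          # sustained rate over the whole run
print(round(gflops, 2))                # ~3.4 GFLOPS (slide rounds to 3.5)

gb_per_s = 800e9 / 575 / 1e9           # 800 GB over the quoted 575 s
print(round(gb_per_s, 2))              # ~1.39 GB/s burst rate
```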

37
Kobe 2/2/95 aftershock
38
Kobe 2/2/95 aftershock
39
(No Transcript)
40
Visualization of 1994 Northridge aftershock
41
Visualization of 1994 Northridge aftershock
42
Typical Quake viz pipeline
(diagram: a pipeline runs from a remote database through reading,
isosurface extraction, interpolation, scene synthesis, and rendering to
local display and input; user parameters include resolution, contours,
ROI, and scene; the stages build on vtk library routines, the FEM solver
engine, and a materials database)
43
Heavyweight grid service model
(diagram: local compute hosts, allocated once per request by the service
user, connect over a WAN to remote compute hosts, allocated once per
service by the service provider)
44
Active frames
(diagram: an Active Frame Server on a host receives an input active frame
(frame data + frame program), runs it through the active frame interpreter
using application libraries such as vtk, and emits an output active frame)
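The essence of the diagram is that a frame carries both data and the program to apply to it, and every server along the path interprets the frame before forwarding it. A toy Python sketch of that idea; all names here (`frame_server`, `halve`, the dict layout) are invented for illustration, not the actual Dv API:

```python
# Toy active-frame interpreter: a frame bundles data with a program,
# and each server hop runs the program on the data before forwarding.
def frame_server(frame, libraries):
    """Interpret one active frame: apply its program to its data."""
    program, data = frame["program"], frame["data"]
    new_data = program(data, libraries)        # e.g. a vtk filter stage
    return {"program": program, "data": new_data}

# A toy frame "program": downsample by keeping every other sample.
halve = lambda data, libs: data[::2]
frame = {"program": halve, "data": list(range(8))}

frame = frame_server(frame, libraries=None)    # first server hop
frame = frame_server(frame, libraries=None)    # second server hop
print(frame["data"])                           # [0, 4]
```

Specializing the framework for a service type then amounts to fixing which library routines the frame programs may call.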
45
Overview of a Dv visualization service
(diagram: user inputs drive a local Dv client, which sends a request frame
to a remote Dv request server; response frames flow through a chain of
remote and local Dv active frame servers back to the local display, with
the remote dataset read at the remote end)
46
Grid-enabling vtk with Dv
(diagram: the local Dv client on the local machine sends a request frame
(request server, scheduler, flowgraph, data reader) to the Dv request
server on the remote machine; the reader and scheduler there emit response
frames (native data, scheduler, flowgraph, control) to other Dv servers,
with status and results returned to the local Dv server and client)
47
Scheduling Dv programs
  • Scheduling at request frame creation time
    - all response frames use the same schedule
    - performance portability (i.e., adjusting to heterogeneous
      resources) is possible
    - no adaptivity (i.e., adjusting to dynamic resources)
  • Scheduling at response frame creation time
    - performance portability and limited adaptivity
  • Scheduling at response frame delivery time
    - performance portability and greatest degree of adaptivity
    - per-frame scheduling overhead a potential disadvantage

48
Scheduling scenarios
(diagram: powerful local server connected to a low-end remote server over
an ultrahigh bandwidth link)
49
Scheduling scenarios
(diagram: powerful local workstation connected to a high-end remote server
over a high bandwidth link)
50
Scheduling scenarios
(diagram: local PC connected to a high-end remote server over a low
bandwidth link)
51
Scheduling scenarios
(diagram: a low-end local PC or PDA connected over a low bandwidth link to
a powerful local proxy server, which connects to a high-end remote server
over a high bandwidth link)
52
Summary
  • Heavyweight grid service model
    - service providers can constrain the resources allocated to a
      particular service
    - service users can contribute resources to improve response time or
      throughput
  • Active frames
    - a general software framework for providing heavyweight Internet
      services
    - the framework can be specialized for a particular service type
  • Dv
    - a specialized version of the active frame server for visualization
    - grid-enables the existing vtk toolkit
    - a flexible framework for experimenting with scheduling algorithms