1
PlanetLab: An open platform for developing,
deploying, and accessing planetary-scale services
http://www.planet-lab.org
  • Overview

Adapted from Peterson et al.'s talks
2
Overview
  • PlanetLab is a global research network that
    supports the development of new network services.
    Since the beginning of 2003, more than 1,000
    researchers at top academic institutions and
    industrial research labs have used PlanetLab to
    develop new technologies for distributed storage,
    network mapping, peer-to-peer systems,
    distributed hash tables, and query processing.

3
PlanetLab Today
  • 838 machines spanning 414 sites and 40 countries
  • Supports distributed virtualization
      • each of 600 network services runs in its own
        slice

www.planet-lab.org
4
Long-Running Services
  • Content Distribution
      • CoDeeN (Princeton)
      • Coral (NYU, Stanford)
      • Cobweb (Cornell)
  • Storage / Large File Transfer
      • LOCI (Tennessee)
      • CoBlitz (Princeton)
  • Information Plane
      • PIER (Berkeley, Intel)
      • PlanetSeer (Princeton)
      • iPlane (Washington)
  • DHT
      • Bamboo / OpenDHT (Berkeley, Intel)
      • Chord / DHash (MIT)

5
Services (cont)
  • Routing / Mobile Access
      • i3 (Berkeley)
      • DHARMA (UIUC)
      • VINI (Princeton)
  • DNS
      • CoDNS (Princeton)
      • CoDoNs (Cornell)
  • Multicast
      • End System Multicast (CMU)
      • Tmesh (Michigan)
  • Anycast / Location Service
      • Meridian (Cornell)
      • Oasis (NYU)

6
Services (cont)
  • Internet Measurement
      • ScriptRoute (Washington, Maryland)
  • Pub-Sub
      • Corona (Cornell)
  • Email
      • ePost (Rice)
  • Management Services
      • Stork (environment service, Arizona)
      • Emulab (provisioning service, Utah)
      • Sirius (brokerage service, Georgia)
      • CoMon (monitoring service, Princeton)
      • PlanetFlow (auditing service, Princeton)
      • SWORD (discovery service, Berkeley, UCSD)

7
Usage Stats
  • Slices: 600
  • Users: 2,500
  • Bytes per day: 4 TB
  • IP flows per day: 190 M
  • Unique IP addresses per day: 1 M

8-10
Slices
(diagrams: slices as network-wide sets of virtual machines spanning the
PlanetLab nodes)
11
User Opt-in
(diagram: a client behind a NAT opts in to a service running on a
PlanetLab server)
12
Per-Node View
(diagram: a Virtual Machine Monitor (VMM) hosting VM1 ... VMn, with a
Node Mgr and a Local Admin VM alongside)
13
Virtualization
(diagram: the same per-node stack, with an Owner VM next to the Node Mgr
above the Virtual Machine Monitor (VMM))
14
Global View
(diagram: PLC managing slices that span the full set of nodes)
15
Design Challenges
  • Minimize centralized control without violating
    trust assumptions.
  • Balance the need for isolation with the reality
    of scarce resources.
  • Maintain a stable and usable system while
    continuously evolving it.

16
PlanetLab Architecture
  • Node manager (one per node)
      • Creates slices for service managers when they
        present valid tickets
      • Allocates resources to virtual servers
  • Resource monitor (one per node)
      • Tracks the node's available resources
      • Tells agents about available resources

17
PlanetLab Architecture (cont)
  • Agents (centralized)
      • Track nodes' free resources
      • Advertise resources to resource brokers
      • Issue tickets to resource brokers; tickets may be
        redeemed with node managers to obtain the resources

18
PlanetLab Architecture (cont)
  • Resource broker (per service)
      • Obtains tickets from agents on behalf of service
        managers
  • Service managers (per service)
      • Obtain tickets from brokers
      • Redeem tickets with node managers to acquire
        resources
      • If the resources can be acquired, start the service

19-31
Slice Management: Obtaining a Slice
(animation sequence: resource monitors report each node's free resources
to the agent; the agent issues a ticket to the broker; the service
manager obtains tickets from the broker and redeems them with the node
managers, which create the slice on their nodes)
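A minimal sketch of the slice-acquisition flow animated above, in Python. All class and method names (ResourceMonitor, Agent.issue_ticket, Broker.get_tickets, NodeManager.redeem) are hypothetical stand-ins for illustration, not PlanetLab's actual interfaces.

class ResourceMonitor:
    """One per node: tracks the node's free resources."""
    def __init__(self, node, free_cpu, free_mem):
        self.node = node
        self.free = {"cpu": free_cpu, "mem": free_mem}

class Agent:
    """Centralized: learns free resources from monitors, issues tickets."""
    def __init__(self):
        self.available = {}  # node -> free resources

    def report(self, monitor):
        self.available[monitor.node] = dict(monitor.free)

    def issue_ticket(self, node, request):
        # Issue a ticket only if the node can cover the request.
        free = self.available.get(node, {})
        if all(free.get(r, 0) >= amt for r, amt in request.items()):
            return {"node": node, "resources": request}
        return None

class Broker:
    """Per service: obtains tickets from agents for service managers."""
    def __init__(self, agent):
        self.agent = agent

    def get_tickets(self, request, nodes):
        tickets = [self.agent.issue_ticket(n, request) for n in nodes]
        return [t for t in tickets if t]

class NodeManager:
    """One per node: redeems valid tickets by creating a slice."""
    def redeem(self, ticket, slice_name):
        print(f"creating slice {slice_name} on {ticket['node']} "
              f"with {ticket['resources']}")

# The service manager obtains tickets from the broker and redeems
# them with the node managers, as in the animation.
agent = Agent()
for mon in (ResourceMonitor("planet1", 2, 512),
            ResourceMonitor("planet2", 1, 256)):
    agent.report(mon)
broker = Broker(agent)
for ticket in broker.get_tickets({"cpu": 1, "mem": 128},
                                 ["planet1", "planet2"]):
    NodeManager().redeem(ticket, "princeton_codeen")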
32
Trust Relationships
  • Sites: Princeton, Berkeley, Washington, MIT, Brown, CMU, NYU, EPFL,
    Harvard, HP Labs, Intel, NEC Labs, Purdue, UCSD, SICS, Cambridge,
    Cornell, ...
  • Slices: princeton_codeen, nyu_d, cornell_beehive, att_mcash,
    cmu_esm, harvard_ice, hplabs_donutlab, idsl_psepr, irb_phi,
    paris6_landmarks, mit_dht, mcgill_card, huji_ender, arizona_stork,
    ucb_bamboo, ucsd_share, umd_scriptroute, ...
  • Without an intermediary, every site would need a pairwise trust
    relationship with every slice (N x N relationships). PlanetLab
    Central (PLC) acts as the trusted intermediary (the agent),
    reducing this to one relationship per site and per slice.
33
Trust Relationships (cont)
(diagram: Node Owner, PLC (agent), and Service Developer (User))
  1. PLC expresses trust in a user by issuing credentials
     to access a slice
  2. Users trust PLC to create slices on their behalf and
     to inspect credentials
  3. Owners trust PLC to vet users and to map network
     activity to the right user
  4. PLC trusts owners to keep nodes physically secure
Mechanisms:
  1. Each node boots from an immutable file system,
     loading a boot manager, a public key for PLC, and
     a node-specific secret key
  2. User accounts are created through an authorized
     PI associated with each site
  3. PLC runs an auditing service that records information
     about packet flows
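A small sketch of the first mechanism above, assuming a hypothetical message format: the node authenticates itself to PLC using its node-specific secret key, here via an HMAC. Real PlanetLab boot differs in detail, and verifying PLC-signed boot instructions with the PLC public key is omitted.

import hmac, hashlib

# From the immutable boot file system: the node-specific secret key.
# (The PLC public key, also on the boot file system, would be used to
# verify PLC-signed boot instructions; that step is omitted here.)
NODE_SECRET_KEY = b"node-specific-secret"

def node_sign(message: bytes) -> bytes:
    # The node proves its identity by MACing its request with its key.
    return hmac.new(NODE_SECRET_KEY, message, hashlib.sha256).digest()

def plc_verify(message: bytes, tag: bytes, key: bytes) -> bool:
    # PLC recomputes the MAC over its stored copy of the node's key.
    expected = hmac.new(key, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

request = b"boot-request: node=planet1.example.edu"
assert plc_verify(request, node_sign(request), NODE_SECRET_KEY)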

34
Decentralized Control
  • Owner autonomy
      • owners allocate resources to favored slices
      • owners selectively disallow disfavored slices
  • Delegation
      • PLC grants tickets that are redeemed at nodes
      • enables third-party management services
  • Federation
      • create private PlanetLabs
      • the MyPLC software package is now distributed
      • establish peering agreements

35
Resource Allocation
  • Decouple slice creation and resource allocation
      • a slice gets a fair share (1/Nth) by default when
        created
      • it can acquire/release additional resources over
        time, including resource guarantees
  • Protect against thrashing and over-use
      • link bandwidth: an upper bound on sustained rate
        (protects campus bandwidth)
      • memory: kill the largest user of physical memory
        when swap is 85% full (sketched below)
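A rough sketch of the memory policy above, reading standard Linux /proc files. The 85% threshold comes from the slide; the daemon PlanetLab actually ran for this differs in detail.

import os, signal

def swap_usage() -> float:
    # Fraction of swap in use, parsed from /proc/meminfo.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, value = line.partition(":")
            info[key] = int(value.split()[0])  # values are in kB
    total = info.get("SwapTotal", 0)
    return 0.0 if total == 0 else (total - info["SwapFree"]) / total

def largest_memory_user():
    # (pid, resident-set kB) of the process using the most physical memory.
    best = (None, 0)
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        rss = int(line.split()[1])
                        if rss > best[1]:
                            best = (int(pid), rss)
                        break
        except OSError:
            continue  # process exited while we were scanning
    return best

if swap_usage() > 0.85:
    pid, _ = largest_memory_user()
    if pid is not None:
        os.kill(pid, signal.SIGKILL)  # reclaim memory from the biggest user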

36
CoMon
  • A performance-monitoring service providing statistics
    for PlanetLab at both the node level and the slice level
  • Two daemons run on each node, one for node-centric data
    and the other for slice-centric data
  • Archived files contain the results of checks made every
    5 minutes. They are available at
    http://summer.cs.princeton.edu/status/dump_cotop_YYYYMMDD.bz2
    for the slice-centric data and at
    http://summer.cs.princeton.edu/status/dump_comon_YYYYMMDD.bz2
    for the node-centric data (see the sketch below)
  • The collected data can be accessed via a query interface:
    http://comon.cs.princeton.edu
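A short sketch of pulling one day's archived dump using the URL scheme above (dump_comon_* for node-centric data, dump_cotop_* for slice-centric data). The function name fetch_dump is illustrative, and the server may no longer be reachable; this only shows the URL construction and bzip2 decoding.

import bz2
import urllib.request

BASE = "http://summer.cs.princeton.edu/status"

def fetch_dump(kind: str, yyyymmdd: str) -> str:
    # kind is "comon" (node-centric) or "cotop" (slice-centric).
    url = f"{BASE}/dump_{kind}_{yyyymmdd}.bz2"
    with urllib.request.urlopen(url) as resp:
        return bz2.decompress(resp.read()).decode("utf-8", "replace")

# Example: node-centric samples (taken every 5 minutes) for one day.
# print(fetch_dump("comon", "20070601")[:500])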

37
Node Availability