Title: PlanetLab: An open platform for developing, deploying, and accessing planetary-scale services http://www.planet-lab.org
1. PlanetLab: An open platform for developing, deploying, and accessing planetary-scale services
http://www.planet-lab.org
Adapted from Peterson et al.'s talks
2. Overview
- PlanetLab is a global research network that
supports the development of new network services.
Since the beginning of 2003, more than 1,000
researchers at top academic institutions and
industrial research labs have used PlanetLab to
develop new technologies for distributed storage,
network mapping, peer-to-peer systems,
distributed hash tables, and query processing.
3. PlanetLab Today
- 838 machines spanning 414 sites and 40 countries
- Supports distributed virtualization
  - each of 600 network services runs in its own slice
www.planet-lab.org
4. Long-Running Services
- Content Distribution
  - CoDeeN (Princeton)
  - Coral (NYU, Stanford)
  - Cobweb (Cornell)
- Storage / Large File Transfer
  - LOCI (Tennessee)
  - CoBlitz (Princeton)
- Information Plane
  - PIER (Berkeley, Intel)
  - PlanetSeer (Princeton)
  - iPlane (Washington)
- DHT
  - Bamboo / OpenDHT (Berkeley, Intel)
  - Chord / DHash (MIT)
5. Services (cont.)
- Routing / Mobile Access
  - i3 (Berkeley)
  - DHARMA (UIUC)
  - VINI (Princeton)
- DNS
  - CoDNS (Princeton)
  - CoDoNS (Cornell)
- Multicast
  - End System Multicast (CMU)
  - TMesh (Michigan)
- Anycast / Location Service
  - Meridian (Cornell)
  - OASIS (NYU)
6. Services (cont.)
- Internet Measurement
  - ScriptRoute (Washington, Maryland)
- Pub-Sub
  - Corona (Cornell)
- Email
  - ePOST (Rice)
- Management Services
  - Stork (environment service, Arizona)
  - Emulab (provisioning service, Utah)
  - Sirius (brokerage service, Georgia)
  - CoMon (monitoring service, Princeton)
  - PlanetFlow (auditing service, Princeton)
  - SWORD (discovery service, Berkeley and UCSD)
7. Usage Stats
- Slices: 600
- Users: 2,500
- Bytes per day: 4 TB
- IP flows per day: 190 M
- Unique IP addresses per day: 1 M
8-10. Slices
[Diagram slides: a slice as a network-wide set of virtual machines spanning many PlanetLab nodes]
11. User Opt-in
[Diagram: a client, possibly behind a NAT, opting in to a service running on a PlanetLab server]
12. Per-Node View
[Diagram: a Virtual Machine Monitor (VMM) hosting VM1, VM2, ..., VMn, plus a Node Manager and a Local Admin VM]
13. Virtualization
[Diagram: the VMM hosting VM1, VM2, ..., VMn, a Node Manager, and an Owner VM]
14. Global View
[Diagram: PlanetLab Central (PLC) coordinating nodes worldwide]
15. Design Challenges
- Minimize centralized control without violating trust assumptions.
- Balance the need for isolation with the reality of scarce resources.
- Maintain a stable and usable system while continuously evolving it.
16. PlanetLab Architecture
- Node manager (one per node)
  - Creates slices for service managers
    - when service managers present valid tickets
  - Allocates resources to virtual servers
- Resource monitor (one per node)
  - Tracks the node's available resources
  - Tells agents about available resources
17. PlanetLab Architecture (cont.)
- Agents (centralized)
  - Track nodes' free resources
  - Advertise resources to resource brokers
  - Issue tickets to resource brokers
    - tickets may be redeemed with node managers to obtain the resources
18. PlanetLab Architecture (cont.)
- Resource broker (per service)
  - Obtains tickets from agents on behalf of service managers
- Service managers (per service)
  - Obtain tickets from brokers
  - Redeem tickets with node managers to acquire resources
  - If resources can be acquired, start the service (the full ticket flow is sketched below)
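To make the division of labor concrete, here is a minimal, hypothetical sketch of the ticket flow from slides 16-18. All class and method names are illustrative assumptions, not the real PlanetLab interfaces.

```python
# Illustrative model of slice acquisition: resource monitors report to an
# agent, the agent issues tickets, and a node manager redeems them.
# Names and data shapes are assumptions, not PlanetLab's actual API.
from dataclasses import dataclass

@dataclass
class Ticket:
    node: str
    resources: dict  # e.g. {"cpu_share": 0.1, "mem_mb": 128}

class Agent:
    """Centralized: tracks free resources reported by per-node resource
    monitors and issues tickets against them."""
    def __init__(self):
        self.free = {}  # node hostname -> available resources

    def report(self, node, resources):      # called by a resource monitor
        self.free[node] = resources

    def issue_ticket(self, node, request):  # called via a resource broker
        avail = self.free.get(node, {})
        if all(avail.get(k, 0) >= v for k, v in request.items()):
            return Ticket(node, request)
        return None                          # not enough free resources

class NodeManager:
    """One per node: creates a sliver when presented with a valid ticket."""
    def __init__(self, node):
        self.node = node
        self.slivers = []                    # (slice_name, resources) pairs

    def redeem(self, ticket, slice_name):
        if ticket is not None and ticket.node == self.node:  # validity check elided
            self.slivers.append((slice_name, ticket.resources))
            return True
        return False

# A service manager obtains a ticket (via its broker) and redeems it:
agent = Agent()
agent.report("planetlab1.example.edu", {"cpu_share": 1.0, "mem_mb": 1024})
nm = NodeManager("planetlab1.example.edu")
t = agent.issue_ticket("planetlab1.example.edu", {"cpu_share": 0.1, "mem_mb": 128})
assert nm.redeem(t, "princeton_codeen")      # resources acquired; service can start
```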
19-31. Slice Management: Obtaining a Slice
[Diagram sequence: each node's resource monitor reports free resources to the agent; the agent issues tickets to the broker; the service manager obtains the tickets from the broker and redeems them with the node managers, which create the slice.]
32. Trust Relationships
- Sites (node owners): Princeton, Berkeley, Washington, MIT, Brown, CMU, NYU, EPFL, Harvard, HP Labs, Intel, NEC Labs, Purdue, UCSD, SICS, Cambridge, Cornell, ...
- Slices: princeton_codeen, nyu_d, cornell_beehive, att_mcash, cmu_esm, harvard_ice, hplabs_donutlab, idsl_psepr, irb_phi, paris6_landmarks, mit_dht, mcgill_card, huji_ender, arizona_stork, ucb_bamboo, ucsd_share, umd_scriptroute, ...
- Without an intermediary, sites and slices would need N x N pairwise trust relationships; PlanetLab Central (PLC) acts as the trusted intermediary (the agent) between them.
33. Trust Relationships (cont.)
[Diagram: trust relationships among the Node Owner, the PLC agent, and the Service Developer (User)]
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and to inspect credentials
3) The owner trusts PLC to vet users and to map network activity to the right user
4) PLC trusts the owner to keep nodes physically secure
- Each node boots from an immutable file system, loading a boot manager, a public key for PLC, and a node-specific secret key (see the sketch below)
- User accounts are created through an authorized PI associated with each site
- PLC runs an auditing service that records information about packet flows
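The immutable public key gives each node a root of trust: nothing from PLC runs unless it verifies against that key. A minimal sketch of the idea, assuming Ed25519 signatures via the third-party `cryptography` package (the real boot manager's protocol differs in detail):

```python
# Sketch of boot-time verification: the boot image carries PLC's public
# key, and any payload PLC sends (e.g. a boot script) must verify against
# it before the node executes it. Illustrative only.
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# PLC side: sign the boot payload with PLC's private key.
plc_private = Ed25519PrivateKey.generate()
plc_public_raw = plc_private.public_key().public_bytes_raw()  # baked into the boot image
payload = b"boot-script contents"
signature = plc_private.sign(payload)

# Node side: verify with the immutable public key before running anything.
def verify_boot_payload(payload: bytes, signature: bytes, pubkey_raw: bytes) -> bool:
    pub = Ed25519PublicKey.from_public_bytes(pubkey_raw)
    try:
        pub.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

assert verify_boot_payload(payload, signature, plc_public_raw)
```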
34. Decentralized Control
- Owner autonomy
  - owners allocate resources to favored slices
  - owners selectively disallow unfavored slices (a policy sketch follows below)
- Delegation
  - PLC grants tickets that are redeemed at nodes
  - enables third-party management services
- Federation
  - create private PlanetLabs
  - now distributed as the MyPLC software package
  - establish peering agreements
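A tiny, hypothetical sketch of owner autonomy: a local policy table that a node manager could consult before honoring a ticket. The slice names and policy shape are invented for illustration.

```python
# Hypothetical owner policy: favored slices get extra resources, while
# disallowed slices are never admitted. Not a real PlanetLab interface.
OWNER_POLICY = {
    "favored": {"princeton_codeen": {"mem_mb": 128}},  # bonus allocation
    "disallowed": {"some_banned_slice"},               # never admitted
}

def admit(slice_name, requested):
    """Return the resources actually granted, or None if the slice is barred."""
    if slice_name in OWNER_POLICY["disallowed"]:
        return None
    granted = dict(requested)
    for resource, extra in OWNER_POLICY["favored"].get(slice_name, {}).items():
        granted[resource] = granted.get(resource, 0) + extra
    return granted

print(admit("princeton_codeen", {"mem_mb": 256}))   # {'mem_mb': 384}
print(admit("some_banned_slice", {"mem_mb": 256}))  # None
```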
35. Resource Allocation
- Decouple slice creation and resource allocation
  - given a fair share (1/Nth) by default when created
  - acquire/release additional resources over time, including resource guarantees
- Protect against thrashing and over-use
  - Link bandwidth: upper bound on the sustained rate (protects campus bandwidth)
  - Memory: kill the largest user of physical memory when swap is 85% full (see the sketch below)
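A minimal sketch of that memory policy, in the spirit of PlanetLab's watchdog daemons but written here from scratch: it checks swap utilization and identifies (without actually killing) the process with the largest resident memory. It assumes the third-party `psutil` package.

```python
# Watchdog sketch: when swap passes 85%, pick the largest consumer of
# physical memory as the reset victim. Illustrative, not PlanetLab's code.
import psutil

SWAP_THRESHOLD = 0.85

def largest_memory_user():
    """Return the process with the most resident (physical) memory."""
    procs = psutil.process_iter(["name", "memory_info"])
    return max(procs, key=lambda p: p.info["memory_info"].rss
               if p.info["memory_info"] else 0)

def check_swap():
    swap = psutil.swap_memory()
    if swap.total and swap.used / swap.total >= SWAP_THRESHOLD:
        victim = largest_memory_user()
        # A real daemon would reset the offending sliver here.
        print(f"swap at {swap.percent}%: would kill "
              f"{victim.info['name']} (pid {victim.pid})")

check_swap()
```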
36. CoMon
- A performance monitoring service, providing monitoring statistics for PlanetLab at both the node level and the slice level
- Two daemons run on each node, one for node-centric data and the other for slice-centric data
- The archived files contain the results of checks run every 5 minutes. They are available as http://summer.cs.princeton.edu/status/dump_cotop_YYYYMMDD.bz2 for the slice-centric data and http://summer.cs.princeton.edu/status/dump_comon_YYYYMMDD.bz2 for the node-centric data (a fetch sketch follows below)
- The collected data can be accessed via a query interface: http://comon.cs.princeton.edu
37. Node Availability