Towards Understanding Vast Networks of Tiny Devices

1
Towards Understanding Vast Networks of Tiny Devices
  • David Culler
  • Computer Science Division
  • U.C. Berkeley
  • www.cs.berkeley.edu/culler
  • Intel Research _at_ Berkeley
  • www.intel-research.net

2
Technology Push
  • Complete embedded systems going microscopic

3
Application Pull
  • Complete embedded systems going microscopic
  • Huge space of applications for deeply embedded,
    networked sensors and actuators

Ubiquitous Computing
4
Bridging the Technology-Application Gap
  • Power-aware, communication-centric node
    architecture
  • Tiny Operating System for Range of
    Highly-Constrained Application-specific
    environments
  • Network Architecture for vast, self-organized
    collections
  • Programming Environments for aggregate
    applications in a noisy world
  • Demonstration Applications

5
Critical issues
  • Highly constrained devices
  • power, storage, bandwidth, energy, visibility
  • primitive I/O hierarchy
  • Observation and action inherently distributed
  • many small nodes coordinate and cooperate on
    overall task
  • The structure of the SYSTEM changes
  • Devices ARE the infrastructure
  • ad hoc, self-organized network of sensors
  • Highly dynamic
  • passive vigilance most of the time
  • concurrency-intensive bursts
  • highly correlated behavior
  • variation in connectivity over time
  • failure is common

6
A Networked Sensor/Actuator Node
  • 1" x 1.5" motherboard
  • ATMEL 4 MHz, 8-bit MCU, 512 bytes RAM, 8 KB program flash
  • 900 MHz radio (RF Monolithics), 1-10 m range
  • ATMEL network programming assist
  • Radio signal strength control and sensing
  • I2C EEPROM (logging)
  • Base-station ready
  • stackable expansion connector
  • all ports, I2C, power, clock
  • Several sensor boards
  • basic protoboard
  • tiny weather station (temp, light, humidity, pressure)
  • vibrations (2-D accel, temp, light)
  • accelerometers
  • magnetometers
  • Integrated quarter-size node

7
An Open Platform for NEST
  • Atmel ATMEGA103
  • 4 MHz 8-bit CPU
  • 128 KB instruction memory
  • 4 KB RAM
  • 4 Mbit flash (AT45DB041B)
  • SPI interface
  • 1-4 µJ/bit read/write
  • RFM TR1000 radio
  • 50 kb/s
  • Focused hardware acceleration
  • Network programming
  • Same 51-pin connector
  • Analog compare interrupts
  • Same tool chain

Cost-effective power source
2xAA form factor
8
Major features
  • 16x program memory size (128 KB)
  • 8x data memory size (4 KB)
  • 16x secondary storage (512 KB flash)
  • 5x radio bandwidth (50 Kb/s)
  • 6 ADC channels available
  • Same processor performance
  • Allows for external SRAM expansion
  • Provides a sub-microsecond RF synchronization primitive
  • Provides unique serial IDs
  • On-board DC booster
  • Remains Compatible with Rene Hardware and current
    tool chain

9
Expansion Capabilities
  • Backwards compatible with existing sensor boards
  • eliminated i2c-2 (was for EEPROM, which is now
    ext. SPI)
  • eliminated UART2
  • added two analog compare lines
  • added five interrupt lines (were unknown)
  • added two PWM lines
  • 6 ADC channels
  • 10 bits/sample
  • 10K samples/second
  • I2C Expansion Bus (i2c-1)
  • SPI Expansion Bus
  • 8 Digital I/O or Power Control Lines (was 4)
  • Can connect external SRAM for CPU data memory (up
    to 64KB)
  • lose most sensor capability
  • address lines shared with lowest-priority devices
    (LEDs, flash ctrl)
  • still allows radio, flash, and programming

10
Experimenting at Scale
11
Example TinyOS study
  • UAV drops 10 nodes along road,
  • hot-water pipe insulation for package
  • Nodes self-configure into linear network
  • Synchronize (to 1/32 s)
  • Calibrate magnetometers
  • Each detects passing vehicle
  • Share filtered sensor data with 5 neighbors
  • Each calculates estimated direction and velocity
  • Share results
  • As plane passes by,
  • joins network
  • upload as much of missing dataset as possible
    from each node when in range
  • 7.5 KB of code!
  • While servicing the radio in SW every 50 µs!

12
An Operating System for Tiny Devices?
  • Ideally would be synthesized from appln
    requirements, but
  • Traditional approaches
  • command processing loop (wait request, act,
    respond)
  • monolithic event processing
  • bring the full thread/socket POSIX regime to the platform
  • Alternative
  • provide framework for concurrency and modularity
  • never poll, never block
  • interleaving flows, events, energy management
  • allow appropriate abstractions to emerge

13
Tiny OS Concepts
  • Scheduler + graph of components
  • constrained two-level scheduling model: threads + events
  • Component
  • Commands,
  • Event Handlers
  • Frame (storage)
  • Tasks (concurrency)
  • Constrained Storage Model
  • frame per component, shared stack, no heap
  • Very lean multithreading
  • Efficient Layering

[Diagram: a Messaging Component with internal state and an internal thread. Commands accepted from above: init, power(mode), send_msg(addr, type, data); it signals events up. Below, it calls init, Power(mode), and TX_packet(buf), and handles TX_packet_done(success) and RX_packet_done(buffer) events.]
14
Application Graph of Components
[Diagram: application graph of components for the example below. An application (route map, router, sensor appln) sits on Active Messages; beneath are packet components (Radio Packet, Serial Packet), byte components (Radio byte, UART, photo, Temp), and bit/HW components (RFM, clocks, ADC); the HW/SW boundary lies at the bottom of the stack. Total: 3450 B code, 226 B data.]
Example: ad hoc, multi-hop routing of photo sensor readings.
A graph of cooperating state machines on a shared stack.
15
TOS Execution Model
  • commands request action
  • ack/nack at every boundary
  • call cmd or post task
  • events notify occurrence
  • HW interrupt at lowest level
  • may signal events
  • call commands
  • post tasks
  • Tasks provide logical concurrency
  • preempted by events
  • Migration of HW/SW boundary

[Diagram: an application component performs message-event-driven data processing over active messages; below it, an event-driven packet pump (CRC), an event-driven byte pump (encode/decode), and an event-driven bit pump.]
16
TinyOS Execution Contexts
[Diagram: execution contexts — tasks and interrupts issue commands (downward) and signal events (upward) above the hardware.]
17
Dynamics of Events and Threads
[Timeline: a bit event completes a byte, then a packet, then the message send; a thread (task) is posted to start sending the next message; bit events are filtered at the byte layer; the radio takes clock events to detect receive.]
18
Quantitative Analysis...
Power down when task queue empty
19
Maintaining Scheduling Agility
  • Need logical concurrency at many levels of the
    graph
  • While meeting hard timing constraints
  • sample the radio in every bit window
  • Retain event-driven structure throughout
    application
  • Tasks extend processing outside event window
  • All operations are non-blocking
  • lock-free scheduling queue (a minimal sketch follows)

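A minimal sketch of such a queue in C, assuming an AVR-class target (SREG/cli() from the avr-libc toolchain); the names tos_post and tos_run_next are illustrative, not the actual TinyOS API. On a single-processor mote a brief interrupt-masked section stands in for a lock, and nothing ever blocks:

#include <avr/io.h>          /* SREG -- assuming avr-libc */
#include <avr/interrupt.h>   /* cli() */
#include <stdbool.h>

typedef void (*task_t)(void);

#define QUEUE_LEN 8
static volatile task_t queue[QUEUE_LEN];
static volatile unsigned char head, tail, count;

/* Post a task; callable from event/interrupt handlers. Never blocks;
   returns false (nack) when the queue is full, so callers must cope. */
bool tos_post(task_t t) {
  unsigned char s = SREG;            /* save interrupt state */
  cli();                             /* brief atomic section, not a lock */
  bool ok = (count < QUEUE_LEN);
  if (ok) {
    queue[tail] = t;
    tail = (unsigned char)((tail + 1) % QUEUE_LEN);
    count++;
  }
  SREG = s;                          /* restore interrupt state */
  return ok;
}

/* Run one task to completion; returns false when the queue is empty,
   at which point the CPU can power down until the next interrupt. */
bool tos_run_next(void) {
  task_t t = 0;
  unsigned char s = SREG;
  cli();
  if (count > 0) {
    t = queue[head];
    head = (unsigned char)((head + 1) % QUEUE_LEN);
    count--;
  }
  SREG = s;
  if (t) t();                        /* tasks never preempt each other */
  return t != 0;
}

Events (HW interrupts) may preempt tasks, but tasks only ever run from this loop, matching the two-level model above.
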
20
Typical split-phase pattern
char TOS_EVENT(SENS_OUTPUT_CLOCK_EVENT)() {
  return TOS_CALL_COMMAND(SENS_GET_DATA)();
}

char TOS_EVENT(SENS_DATA_READY)(int data) {
  VAR(buffer)[VAR(index)++] = data;
  if (full()) TOS_POST_TASK(FILTER_DATA);
  return 1;
}
  • clock event handler initiates data collection
  • sensor signals data ready event
  • data event handler posts task to process data

21
Tiny Event-Driven Active Messages
TOS_FRAME_BEGIN(INT_TO_RFM_frame) {
  char pending;
  TOS_Msg msg;
}
TOS_FRAME_END(INT_TO_RFM_frame);

...
TOS_CALL_COMMAND(SEND_MSG)(TOS_MSG_BCAST, AM_MSG(INT_READING), &VAR(msg));
...

char TOS_EVENT(MSG_SEND_DONE)(TOS_MsgPtr sentBuffer) {
  ...
}

TOS_MsgPtr TOS_MSG_EVENT(INT_READING)(TOS_MsgPtr val) {
  ...
  return val;
}
22
Conservative Component Coupling
  • Each component has bounded state and concurrency
  • Each command interface has explicit handshake
  • each component must deal with rejections
  • Message layer may reject send request if full or
    busy
  • Requestor cannot busy wait
  • send_done event broadcast to all potential
    senders
  • send_buffer pointer used to disambiguate
  • can elect to retry or drop, as in the sketch below

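A simplified sketch of this discipline in the macro style of the code two slides back, with plain statics standing in for the component frame (try_send is an illustrative helper, not a TinyOS name):

static char pending;    /* 1 while the message layer holds our buffer */
static TOS_Msg msg;     /* the buffer this component owns */

void try_send(void) {
  if (pending) return;  /* cannot busy-wait: try again on a later event */
  if (TOS_CALL_COMMAND(SEND_MSG)(TOS_MSG_BCAST, AM_MSG(INT_READING), &msg))
    pending = 1;        /* accepted: the stack owns the buffer until send_done */
  /* else: rejected (full or busy) -- elect to retry later or drop */
}

/* send_done is broadcast to all potential senders; the buffer
   pointer disambiguates whose send completed. */
char TOS_EVENT(MSG_SEND_DONE)(TOS_MsgPtr sentBuffer) {
  if (sentBuffer == &msg)
    pending = 0;        /* our buffer is back; safe to reuse */
  return 1;
}
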
23
Communication Storage Management
  • Strict ownership protocol at appln components
  • Each component owns set of buffers
  • send comp. gives out-buffer till send_done
  • component tracks state
  • receive system gives in-buffer to handler
  • handler must return a free buffer to the system
  • if completely consumed, returns same
  • otherwise, returns another one it owns
  • if none available, must give back the incoming buffer (see the sketch below)

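A sketch of the swap rule in the same style; consume_now, begin_processing, and the PHOTO_READING handler name are hypothetical:

#include <stdbool.h>

bool consume_now(TOS_MsgPtr m);      /* hypothetical: true if handled inline */
void begin_processing(TOS_MsgPtr m); /* hypothetical: posts a task on it */

static TOS_Msg spare;                /* a free buffer this component owns */
static TOS_MsgPtr free_buf = &spare;

TOS_MsgPtr TOS_MSG_EVENT(PHOTO_READING)(TOS_MsgPtr in) {
  if (consume_now(in))               /* completely consumed in the handler... */
    return in;                       /* ...so the same buffer goes back */
  if (free_buf != 0) {               /* keep `in`: hand the system one we own */
    TOS_MsgPtr out = free_buf;
    free_buf = 0;                    /* `in` is ours until processing finishes */
    begin_processing(in);
    return out;
  }
  return in;                         /* none available: must give back incoming */
}
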
24
Crossing Layers without buffering
  • stack consists of series of data pumps
  • each peels off portion of packet and feeds to
    lower layer
  • task starts event-driven data pump

[Diagram: buffer handoff from the application component into the active message layer; a packet pump feeds Radio Packet, a byte pump links the upper and lower FSMs of Radio byte, and a bit pump feeds the RFM.]
25
Deadline Avoidance
  • Pipelined transmission: transmit a single byte while encoding the next
  • Trades 1 byte of buffering for easy deadline
  • Separates high level latencies from low level
    real-time requirements
  • Encoding Task must complete before byte
    transmission completes
  • Decode must complete before next byte arrives


[Timeline: from start, the encode task prepares bytes 1-4 while bit transmission sends bytes 1-3 as RFM bits, one byte behind the encoder.]
26
Node Communication Architecture
  • Classic protocol processor
  • Direct device control
  • Hybrid accelerator
27
Novel Protocol Examples
  • Low-power Listening
  • Really Tight Application-level Time
    Synchronization
  • Localization
  • Wake-up
  • MACs
  • Self-organization

28
Low-Power Listening
  • Costs about as much to listen as to transmit, even when nothing is received
  • Must turn the radio off when there is nothing to hear
  • Time-slotting and rendezvous cost BW and latency
  • Can turn the radio on/off quickly
  • small sub-message receive sampling
  • Trade a small, infrequent TX cost for frequent RX savings (sketch below)

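One common way to realize this trade (an assumption here, not spelled out on the slide) is for the receiver to sample briefly at a fixed interval while senders stretch their preamble to span at least one interval. A sketch, with radio_on, radio_off, start_symbol_present, receive_message, and sleep_interval as hypothetical primitives:

#include <stdbool.h>

void radio_on(void), radio_off(void);  /* hypothetical radio primitives */
bool start_symbol_present(void);
void receive_message(void);
void sleep_interval(void);             /* low-power sleep between samples */

void low_power_listen(void) {
  for (;;) {
    radio_on();
    if (start_symbol_present())  /* sub-message sample heard activity */
      receive_message();         /* stay on for the whole message */
    radio_off();                 /* off when there is nothing to hear */
    sleep_interval();            /* sleep until the next sampling window */
  }
}
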
29
Exposing Time Synchronization Up
  • Many applications require correlated data
    sampling
  • Distributed time sync accuracy bounded by ½ the
    variance in RTT.
  • Successful radio transmission requires sub-bit
    synchronization
  • Provide accurate timestamping with msg delivery
  • Jitter (capture accuracy): 0.625 µs (clock synch)

30
Localization
  • Many applications need to derive physical
    placement from observations
  • Spatial sampling, proximity, context-awareness
  • Radio is another sensor
  • Sample base-band to estimate distance
  • Need a lot of statistical data
  • Calibration and multiple-observations are key
  • Acoustic time-of-flight alternative
  • Requires good time synchronization

31
Feedback within comm. stack
  • Media access control
  • radio is single shared channel (spatial
    multiplexing)
  • want to avoid protocol msgs (RTS, CTS)
  • CSMA implemented in software
  • traffic is highly correlated
  • must randomize initial delay as well as backoff (see the sketch after this list)
  • able to deliver 70% of channel bandwidth fairly
  • Byte layer implements listening and backoff
  • If fails to acquire channel
  • signals failure event
  • propagates up the stack
  • application gets send_done failure
  • able to adapt sampling rate, transmission rate,
    etc.

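A sketch of the randomized software CSMA described above; channel_busy, transmit, and delay_symbols are hypothetical primitives, and the window constants are placeholders rather than measured values:

#include <stdbool.h>
#include <stdlib.h>

bool channel_busy(void);               /* hypothetical radio primitives */
void transmit(const void *pkt);
void delay_symbols(unsigned n);

enum { INITIAL_WINDOW = 64, BACKOFF_WINDOW = 16, MAX_TRIES = 4 };

bool csma_send(const void *pkt) {
  /* traffic is highly correlated, so randomize the initial delay too */
  delay_symbols(rand() % INITIAL_WINDOW);
  for (int attempt = 0; attempt < MAX_TRIES; attempt++) {
    if (!channel_busy()) {             /* listen before transmitting */
      transmit(pkt);
      return true;
    }
    /* failed to acquire the channel: back off and retry */
    delay_symbols(rand() % (BACKOFF_WINDOW << attempt));
  }
  return false;  /* failure propagates up; the app sees a send_done failure */
}
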
32
The nodes are the infrastructure
  • Network discovery and multihop routing are just
    additional active message handlers
  • every node is also a router
  • Example: beacon-based network discovery (sketched in code below)
  • if (new mcast) then
  • record parent
  • retransmit from self
  • else
  • record neighborhood info

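The same logic as an active-message handler sketch; beacon_t, rebroadcast (assumed to copy the message before queuing), record_neighbor, the BEACON handler name, and TOS_LOCAL_ADDRESS (the node's own address) are illustrative:

typedef struct {
  unsigned char  seq;   /* discovery round sequence number */
  unsigned short src;   /* address of the node that sent this beacon */
} beacon_t;

void rebroadcast(TOS_MsgPtr m);            /* hypothetical helpers */
void record_neighbor(unsigned short addr);

static unsigned short parent   = 0xFFFF;   /* no parent yet */
static unsigned char  seen_seq = 0xFF;

TOS_MsgPtr TOS_MSG_EVENT(BEACON)(TOS_MsgPtr m) {
  beacon_t *b = (beacon_t *)m->data;
  if (b->seq != seen_seq) {        /* new mcast: a new discovery round */
    seen_seq = b->seq;
    parent   = b->src;             /* record parent */
    b->src   = TOS_LOCAL_ADDRESS;  /* then retransmit from self */
    rebroadcast(m);                /* assumed to copy before queuing */
  } else {
    record_neighbor(b->src);       /* else: record neighborhood info */
  }
  return m;
}
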
33
Network Discovery Radio Cells
34
Network Discovery
35
Self-organization has complex dynamics
[Diagram: discovery tree snapshot — base station 0, a level-1 node, and level-2 nodes 2a-2e.]
36
Growth Dynamics
37
Systematic Studies of Large Nets
38
Multihop Bandwidth Management
  • Should self-organize into fair, dynamic multihop
    net
  • Hidden nodes between each pair of levels
  • CSMA is not enough
  • P(msg reaches base) drops with each hop
  • Investment in a packet increases with distance
  • need to optimize for low-power fairness!
  • RTS/CTS costly (power + BW)
  • Local rate control to approx. fairness
  • Priority to forwarding, adjust own data rate
  • Additive increase, multiplicative decrease
  • Listen for retransmission as ack

39
Example Multihop Adaptive Transmission Control
Max rate 4 samples/sec → transmission rate 4p (p = rate-control parameter)
Channel BW 20 p/s → cannot expect more than 1/3 through the parent
Monitor the number of children n: a(n) = 1/n, b = ½
p ← p + a(n) on success (echo); p ← p × b on failure
Without rate control, success drops by ½ per hop (see the sketch below)
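The adjustment rule as a tiny sketch (names illustrative; hearing the parent retransmit the packet serves as the implicit ack):

static float    p = 1.0f;  /* own transmission-rate parameter */
static unsigned n = 1;     /* observed number of children */

/* additive increase: p += a(n), a(n) = 1/n, on success (echo) */
void on_success(void) { p += 1.0f / (float)(n ? n : 1); }

/* multiplicative decrease: p *= b, b = 1/2, on failure */
void on_failure(void) { p *= 0.5f; }
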
40
Rich set of additional challenges
  • Network Programming
  • Efficient and robust security primitives
  • Aggregation Algorithms
  • Application specific virtual machines
  • Resilient aggregators
  • Programming support for systems of generalized
    state machines
  • Programming the unstructured aggregate
  • SPMD, Data Parallel, Query Processing, Tuples
  • Understanding how an extreme system is behaving
    and what its envelope is
  • adversarial simulation
  • Self-configuring, self-correcting systems

41
Intel Research _at_ Berkeley
  • Expanding the team to explore extreme networked
    systems
  • Kevin Fall, networking
  • Alan Mainwaring, operating systems
  • Anind Dey, ubiquitous computing
  • David Gay, programming languages
  • ??, collaborative signal processing
  • ??, low-power HW design
  • ??, databases
  • ??, wireless
  • Tap into novel technology, engineering and
    manufacturing resources
  • CMOS radios, UWB, MEMS, storage
  • MCM mote

42
Longer Range Activities
  • Drive down the size and up the scale
  • Fingernail mote
  • Computational fabrics, surfaces,
  • Planetary-scale Services
  • Open experimental broad-coverage overlay network

43
A new paradigm is emerging
  • Complete embedded systems going microscopic
  • Huge space of applications for deeply embedded,
    networked sensors and actuators
  • So we are looking at dense, fine-grain
    distributed systems of systems tightly coupled to
    the physical world
  • control loops at many levels
  • networking is central
  • many new constraints

44
Summary
webs.cs.berkeley.edu or www.tinyos.org
  • Distribute the embedded system over many small
    devices
  • Integrate them with communication
  • New set of embedded software challenges
  • local scheduling, synthesis, etc. must address
    resource constraints
  • plus the distributed aspects
  • New networking problems at every level
  • Operating against energy constraints
  • rather than overload
  • Inherent asynchrony
  • NEST platform due in Jan