Title: Lecture 2: Software Platforms
1Lecture 2: Software Platforms
- Anish Arora
- CIS788.11J
- Introduction to Wireless Sensor Networks
- Lecture uses slides from tutorials prepared by authors of these platforms
2Outline
- Discussion includes OS and also programming methodology
  - some environments focus more on one than the other
- Focus is on node-centric platforms (vs. distributed-system-centric platforms)
  - composability, energy efficiency, robustness
  - reconfigurability and pros/cons of the interpreted approach
- Platforms
  - TinyOS (applies to XSMs): slides from UCB
  - EmStar (applies to XSSs): slides from UCLA
  - SOS: slides from UCLA
  - Contiki: slides from Uppsala
  - Virtual machines (Maté): slides from UCB
3References
- nesC Programming Manual; The Emergence of Networking Abstractions and Techniques in TinyOS; TinyOS webpage
- EmStar: An Environment for Developing Wireless Embedded Systems Software; SenSys 2004 paper; EmStar webpage
- SOS: MobiSys paper; SOS webpage
- Contiki: EmNets paper; SenSys 2006 paper; Contiki webpage
- Maté: ASPLOS paper; Maté webpage; (SUNSPOT)
4Traditional Systems
- Well-established layers of abstraction
- Strict boundaries
- Ample resources
- Independent applications at endpoints communicate point-to-point through routers
- Well attended
[Diagram: traditional layered stack. User level: applications. System level: threads, address space, files, drivers. Network stack: transport, network, data link, physical layer. Endpoints connected through routers.]
5Sensor Network Systems
- Highly constrained resources
  - processing, storage, bandwidth, power, limited hardware parallelism, relatively simple interconnect
- Applications spread over many small nodes
  - self-organizing collectives
  - highly integrated with a changing environment and network
  - diversity in design and usage
- Concurrency-intensive in bursts
  - streams of sensor data and network traffic
- Robust
  - inaccessible, critical operation
- Unclear where the boundaries belong
- Need a framework for
  - resource-constrained concurrency
  - defining boundaries
  - application-specific processing
  - allowing abstractions to emerge
6Choice of Programming Primitives
- Traditional approaches
  - command processing loop (wait for request, act, respond)
  - monolithic event processing
  - full thread/socket POSIX regime
- Alternative
  - provide a framework for concurrency and modularity
  - never poll, never block
  - interleave flows and events
7TinyOS
- Microthreaded OS (lightweight thread support) and efficient network interfaces
- Two-level scheduling structure
  - long-running tasks that can be interrupted by hardware events
- Small, tightly integrated design that allows crossover of software components into hardware
8Tiny OS Concepts
- Scheduler + Graph of Components
  - constrained two-level scheduling model: threads + events
- Component
  - Commands
  - Event Handlers
  - Frame (storage)
  - Tasks (concurrency)
- Constrained Storage Model
  - frame per component, shared stack, no heap
- Very lean multithreading
- Efficient Layering
[Diagram: Messaging Component with internal state and an internal thread. Commands in from above: init, power(mode), send_msg(addr, type, data). Commands out to below: init, Power(mode), TX_packet(buf). Events in from below: RX_packet_done(buffer), TX_packet_done(success).]
9Application Graph of Components
Example: ad hoc, multi-hop routing of photo sensor readings (3450 B code, 226 B data)
[Diagram: application graph. SW, top to bottom: sensor application (route map, router), Active Messages, packet layer (serial packet, radio packet), byte layer (UART, radio byte, ADC with Temp and Photo, clock), bit layer (RFM). HW below.]
Graph of cooperating state machines on a shared stack
10TOS Execution Model
- commands request action
  - ack/nack at every boundary
  - call command or post task
- events notify occurrence
  - HW interrupt at lowest level
  - may signal events, call commands, or post tasks
- tasks provide logical concurrency
  - preempted by events
[Diagram: data processing in an application component; message-event-driven active message layer; event-driven packet pump with CRC; event-driven byte pump with encode/decode; event-driven bit pump.]
11Event-Driven Sensor Access Pattern

  command result_t StdControl.start() {
    return call Timer.start(TIMER_REPEAT, 200);
  }
  event result_t Timer.fired() {
    return call sensor.getData();
  }
  event result_t sensor.dataReady(uint16_t data) {
    display(data);
    return SUCCESS;
  }

(SENSE application: Timer, Photo, LED components)
- clock event handler initiates data collection
- sensor signals data-ready event
- data event handler calls output command
- device sleeps or handles other activity while waiting
- conservative send/ack at component boundary
12TinyOS Commands and Events

  ... status = call CmdName(args) ...
  command CmdName(args) { ... return status; }

  event EvtName(args) { ... return status; }
  ... status = signal EvtName(args) ...
13TinyOS Execution Contexts
- Events generated by interrupts preempt tasks
- Tasks do not preempt tasks
- Both essentially process state transitions
14Handling Concurrency: Async or Sync Code
- Async methods call only async methods (interrupts are async)
- Sync methods/tasks call only sync methods
- Potential race conditions
  - any update to shared state from async code
  - any update to shared state from sync code that is also updated from async code
- Compiler rule: if a variable x is accessed by async code, then any access of x outside of an atomic statement is a compile-time error
- Race-Free Invariant: any update to shared state either is not a potential race condition (sync code only) or occurs within an atomic section
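The interrupt-disable technique behind atomic sections can be sketched in plain C. This is an illustrative sketch, not nesC compiler output: the names `atomic_begin`/`atomic_end` and the software `interrupts_enabled` flag are assumptions; on real hardware the flag would be the CPU's interrupt-enable bit (e.g. toggled by cli/sei on AVR).

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stands in for the hardware interrupt-enable flag. */
static bool interrupts_enabled = true;

/* Save the current enable state and disable interrupts. */
static bool atomic_begin(void) {
    bool was_enabled = interrupts_enabled;
    interrupts_enabled = false;          /* e.g. cli() on AVR */
    return was_enabled;
}

/* Restore the saved state, so atomic sections nest correctly. */
static void atomic_end(bool was_enabled) {
    interrupts_enabled = was_enabled;    /* e.g. sei() on AVR, if it was set */
}

/* Shared state touched by both sync and async code. */
static volatile uint16_t shared_count = 0;

void sync_update(void) {
    bool s = atomic_begin();
    shared_count++;                      /* safe: interrupts are off */
    atomic_end(s);
}
```

Saving and restoring the previous state (rather than unconditionally re-enabling) is what makes nested atomic sections safe.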
15Tasks
- provide concurrency internal to a component
  - longer-running operations
- are preempted by events, not preempted by tasks
- able to perform operations beyond event context
- may call commands
- may signal events

  ... post TskName(); ...
  task void TskName() { ... }
16Typical Application Use of Tasks
- event-driven data acquisition
- schedule task to do computational portion

  event result_t sensor.dataReady(uint16_t data) {
    putdata(data);
    post processData();
    return SUCCESS;
  }
  task void processData() {
    int16_t i, sum = 0;
    for (i = 0; i < maxdata; i++)
      sum += (rdata[i] >> 7);
    display(sum >> shiftdata);
  }

- 128 Hz sampling rate
- simple FIR filter
- dynamic software tuning for centering the magnetometer signal (1208 bytes)
- digital control of analog, not DSP
- ADC (196 bytes)
17Task Scheduling
- Typically a simple FIFO scheduler
  - bound on the number of pending tasks
  - when idle, shuts down the node except the clock
  - uses a non-blocking task queue data structure
- Simple event-driven structure + control over the complete application/system graph
  - instead of complex task priorities and IPC
18Maintaining Scheduling Agility
- Need logical concurrency at many levels of the graph
- While meeting hard timing constraints
  - sample the radio in every bit window
- Retain event-driven structure throughout the application
- Tasks extend processing outside the event window
- All operations are non-blocking
19The Complete Application
[Diagram: SenseToRfm application graph. SW, top to bottom: SenseToRfm over generic comm (IntToRfm, AMStandard); packet layer (RadioCRCPacket, UARTnoCRCPacket, noCRCPacket); MicaHighSpeedRadioM with SecDedEncode and RandomLFSR; byte layer (SPIByteFIFO, photo, phototemp, Timer, ClockC); bit layer (SlavePin). HW: ADC, UART.]
20Programming Syntax
- TinyOS 2.0 is written in an extension of C called nesC
- Applications are too
  - just additional components composed with OS components
- Provides syntax for the TinyOS concurrency and storage model
  - commands, events, tasks
  - local frame variables
- Compositional support
  - separation of definition and linkage
  - robustness through narrow interfaces and reuse
  - interpositioning
- Whole-system analysis and optimization
21Component Interface
- logically related set of commands and events

  StdControl.nc:
    interface StdControl {
      command result_t init();
      command result_t start();
      command result_t stop();
    }

  Clock.nc:
    interface Clock {
      command result_t setRate(char interval, char scale);
      event result_t fire();
    }
22Component Types
- Configuration
  - links together components to compose a new component
  - configurations can be nested
  - the complete main application is always a configuration
- Module
  - provides code that implements one or more interfaces and internal behavior
23Example of Top-Level Configuration

  configuration SenseToRfm {
    // this module does not provide any interface
  }
  implementation {
    components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor;
    Main.StdControl -> SenseToInt;
    Main.StdControl -> IntToRfm;
    SenseToInt.Clock -> ClockC;
    SenseToInt.ADC -> Sensor;
    SenseToInt.ADCControl -> Sensor;
    SenseToInt.IntOutput -> IntToRfm;
  }

[Diagram: Main wires StdControl to SenseToInt, which uses Clock, ADC, ADCControl, and IntOutput.]
24Nested Configuration
includes IntMsg configuration IntToRfm
provides interface IntOutput interface
StdControl implementation components
IntToRfmM, GenericComm as Comm IntOutput
IntToRfmM StdControl IntToRfmM
IntToRfmM.Send -gt Comm.SendMsgAM_INTMSG
IntToRfmM.SubControl -gt Comm
StdControl
IntOutput
IntToRfmM
SendMsgAM_INTMSG
SubControl
GenericComm
25IntToRfm Module

  includes IntMsg;
  module IntToRfmM {
    uses {
      interface StdControl as SubControl;
      interface SendMsg as Send;
    }
    provides {
      interface IntOutput;
      interface StdControl;
    }
  }
  implementation {
    bool pending;
    struct TOS_Msg data;

    command result_t StdControl.init() {
      pending = FALSE;
      return call SubControl.init();
    }
    command result_t StdControl.start() {
      return call SubControl.start();
    }
    command result_t StdControl.stop() {
      return call SubControl.stop();
    }
    command result_t IntOutput.output(uint16_t value) {
      ...
      if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data))
        return SUCCESS;
      ...
    }
    event result_t Send.sendDone(TOS_MsgPtr msg, result_t success) {
      ...
    }
  }
26Atomicity Support in nesC
- Split-phase operations require care to deal with pending operations
- Race conditions may occur when shared state is accessed by preemptible executions, e.g. when an event accesses shared state, or when a task updates state (preemptible by an event which then uses that state)
- nesC supports an atomic block
  - implemented by turning off interrupts
  - for efficiency, no calls are allowed inside the block
  - access to a shared variable outside an atomic block is not allowed
27Supporting HW Evolution
- Distribution broken into
  - apps: top-level applications
  - tos
    - lib: shared application components
    - system: hardware-independent system components
    - platform: hardware-dependent system components; includes HPLs and hardware.h
    - interfaces
  - tools: development support tools
  - contrib
  - beta
- Components designed so HW and SW look the same
  - example: temp component
    - may abstract a particular channel of the ADC on the microcontroller
    - may be a SW I2C protocol to a sensor board with a digital sensor or ADC
  - HW/SW boundary can move up and down with minimal changes
28Example Radio Byte Operation
- Pipelines transmission: transmits a byte while encoding the next byte
- Trades 1 byte of buffering for an easy deadline
  - encoding task must complete before byte transmission completes
  - decode must complete before the next byte arrives
- Separates high-level latencies from low-level real-time requirements
[Diagram: the encode task processes bytes 1-4 while bit transmission streams bytes 1-3 out over the RFM, running one byte behind.]
29Dynamics of Events and Threads
bit event gt end of byte gt end of packet gt
end of msg send
thread posted to start send next message
bit event filtered at byte layer
radio takes clock events to detect recv
30Sending a Message
- Refuses to accept the send command if the buffer is still full or the network refuses to accept it
- User component provides structured msg storage

31Send done Event

  event result_t IntOutput.sendDone(TOS_MsgPtr msg, result_t success) {
    if (pending && msg == &data) {
      pending = FALSE;
      signal IntOutput.outputComplete(success);
    }
    return SUCCESS;
  }

- Send done event fans out to all potential senders
- Originator determined by match
  - free buffer on success; retry or fail on failure
- Others use the event to schedule pending communication
32Receive Event

  event TOS_MsgPtr ReceiveIntMsg.receive(TOS_MsgPtr m) {
    IntMsg *message = (IntMsg *)m->data;
    call IntOutput.output(message->val);
    return m;
  }

- Active message automatically dispatched to the associated handler
  - handler knows the format: no run-time parsing
  - performs action on message event
- Must return a free buffer to the system
  - typically the incoming buffer, if processing is complete
33Tiny Active Messages
- Sending
  - declare buffer storage in a frame
  - request transmission
  - name a handler
  - handle completion signal
- Receiving
  - declare a handler
  - firing a handler is automatic
- Buffer management
  - strict ownership exchange
  - tx: send done event -> reuse
  - rx: must return a buffer
34Tasks in Low-level Operation
- transmit packet
  - send command schedules task to calculate CRC
  - task initiates byte-level data pump
  - events keep the pump flowing
- receive packet
  - receive event schedules task to check CRC
  - task signals packet ready if OK
- byte-level tx/rx
  - task scheduled to encode/decode each complete byte
  - must take less time than the byte data transfer
35TinyOS tools
- TOSSIM: a simulator for TinyOS programs
- ListenRaw, SerialForwarder: Java tools to receive raw packets on a PC from a base node
- Oscilloscope: Java tool to visualize (sensor) data in real time
- Memory usage: breaks down memory usage per component (in contrib)
- Peacekeeper: detects RAM corruption due to stack overflows (in lib)
- Stopwatch: tool to measure execution time of a code block by timestamping at entry and exit (in OSU CVS server)
- Makedoc and graphviz: generate and visualize the component hierarchy
- Surge, Deluge, SNMS, TinyDB
36Scalable Simulation Environment
- target platform: TOSSIM
  - whole application compiled for the host's native instruction set
  - event-driven execution mapped into event-driven simulator machinery
  - storage model mapped to thousands of virtual nodes
- radio model and environmental model plugged in
  - bit-level fidelity
- sockets = basestation
- complete application, including GUI
37Simulation Scaling
38TinyOS 2.0 basic changes
- Scheduler: improved robustness and flexibility
  - reserved tasks by default (for fault tolerance)
  - priority tasks
- New nesC 1.2 features
  - network types enable link-level cross-platform interoperability
  - generic (instantiable) components, attributes, etc.
- Platform definition: simplify porting
  - structure OS to leverage code reuse
  - decompose h/w devices into 3 layers: presentation, abstraction, device-independent
  - structure common chips for reuse across platforms, so platforms are a collection of chips, e.g. msp430 + CC2420
- Power-management architecture for devices controlled by resource reservation
- Self-initialisation
- App-level notion of instantiable services
39TinyOS 2.0
- Incorporates new language features in nesC 1.2
  - generic (instantiable) components, typed interfaces, attributes, etc.
- Hardware dependent/independent boundaries
  - HPL -> HAL -> HIL
- Platforms are a collection of chips
  - msp430 + CC2420
- Extensible scheduler (a component)
  - reserved tasks by default (fault tolerance)
  - you can have priority tasks
- Interface updates
  - Timer
  - Send
  - StdControl, SplitControl, etc.
40What's really new?
- Cleaner, more composable system architecture
- Guaranteed resource availability through static configuration
- Subsystems accessed via one of
  - compile-time virtualisation
  - run-time resource reservation requests
  - exclusive access for one client (optional compile-time enforcement)
41What's really new?
- Many subsystems designed
  - A/D conversion: exposed via both platform-specific and general-purpose interfaces
  - Timer: virtualised millisecond timers, plus a platform-specific set of high-precision alarms
  - Resource reservation: reusable arbiters make low-level services safe and relatively easy to use
  - Power management: mostly automatic, for devices controlled by resource reservation
  - Sensor boards: general interfaces to allow applications to support many sensor boards
  - Serial port: platform-independent serial messages
  - Basic link-level messaging: network types allow cross-platform wireless messaging
42What's really new?
- Code supporting these designs on the following chips
  - microcontrollers: Atmel ATmega128, TI MSP430, Intel PXA27x
  - radios: Chipcon CC1000 and CC2420, Infineon TDA5250
  - flash: Atmel AT45DBxx, ST M25P
- plus
  - general code (scheduling, arbitration, random numbers, ...)
  - TOSSIM 2.0
- gives us the four platforms
  - mica2, micaz, telosb, intel mote2
43TinyOS 2.0 (contd.)
- For application-level programming, a notion of services
  - instantiable components presenting an OS abstraction
  - less wiring; no more parameterized interfaces, unique(), etc.

  generic configuration AMSender(am_type_t type) {
    provides interface AMSend;
    provides interface Packet;
  }
  generic configuration TimerMilli() {
    provides interface Timer<TMilli>;
  }

  components new AMSender(AM_TYPE) as Sender, new TimerMilli();
  App.AMSender -> Sender;
  App.Timer -> TimerMilli;
44TinyOS Limitations
- Static allocation allows compile-time analysis, but can make programming harder
- No support for heterogeneity
  - support for other platforms (e.g. Stargate)
  - support for high-data-rate apps (e.g. acoustic beamforming)
  - interoperability with other software frameworks and languages
- Limited visibility
  - debugging
  - intra-node fault tolerance
- Robustness solved in the details of implementation
  - nesC offers only some types of checking
45Em*
- Software environment for sensor networks built from Linux-class devices
- Claimed features
  - simulation and emulation tools
  - modular, but not strictly layered, architecture
  - robust, autonomous, remote operation
  - fault tolerance within a node and between nodes
  - reactivity to dynamics in environment and task
  - high visibility into the system: interactive access to all services
46Contrasting EmStar and TinyOS
- Similar design choices
  - programming framework
    - component-based design
    - wiring together modules into an application
  - event-driven
    - reactive to sudden sensor events or triggers
  - robustness
    - nodes/system components can fail
- Differences
  - hardware platform-dependent constraints
    - EmStar: develop without optimization
    - TinyOS: develop under severe resource constraints
  - operating system and language choices
    - EmStar: easy-to-use C language, tightly coupled to Linux (devfs, RedHat, ...)
47Em* Transparently Trades off Scale vs. Reality
- Em* code runs transparently at many degrees of "reality": high-visibility debugging before low-visibility deployment
48Em* Modularity
- Dependency DAG
- Each module (service)
  - manages a resource and resolves contention
  - has a well-defined interface
  - has a well-scoped task
  - encapsulates mechanism
  - exposes control of policy
  - minimizes work done by the client library
- Application has the same structure as services
49Em* Robustness
- Fault isolation via multiple processes
- Active process management (EmRun)
- Auto-reconnect built into libraries
  - "crashproofing" prevents cascading failure
- Soft-state design style
  - services periodically refresh clients
  - avoids diff protocols
[Diagram: EmRun supervises a process tree: scheduling, path_plan, depth map, motor_x, motor_y, camera.]
50Em* Reactivity
- Event-driven software structure
  - react to asynchronous notification
  - e.g. reaction to a change in the neighbor list
- Notification through the layers
  - events percolate up
  - domain-specific filtering at every level
  - e.g.
    - neighbor list: membership hysteresis
    - time synchronization: linear fit and outlier rejection
[Diagram: notify/filter chain from motor_y up through path_plan to scheduling.]
51EmStar Components
- Tools
- EmRun
- EmProxy/EmView
- EmTOS
- Standard IPC
- FUSD
- Device patterns
- Common Services
- NeighborDiscovery
- TimeSync
- Routing
52EmRun Manages Services
- Designed to start, stop, and monitor services
- EmRun config file specifies service dependencies
- Starting and stopping the system
  - starts up services in the correct order
  - can detect and restart unresponsive services
  - respawns services that die
  - notifies services before shutdown, enabling graceful shutdown and persistent state
- Error/debug logging
  - per-process logging to in-memory ring buffers
  - configurable log levels, at run time
53EmSim/EmCee
- Em* supports a variety of types of simulation and emulation, from simulated radio channel and sensors to emulated radio and sensor channels (ceiling array)
- In all cases, the code is identical
- Multiple emulated nodes run in their own spaces, on the same physical machine
54EmView/EmProxy Visualization
emview
55Inter-module IPC: FUSD
- Creates device-file interfaces
- Text/binary on the same file
- Standard interface
  - language independent
  - no client library required
[Diagram: user/kernel boundary, with device-file IPC crossing it.]
56Device Patterns
- FUSD can support virtually any semantics
  - what happens when a client calls read()?
- But many interfaces fall into certain patterns
- Device Patterns
  - encapsulate specific semantics
  - take the form of a library
    - objects, with method calls and callback functions
  - priority: ease of use
57Status Device
- Designed to report current state
  - no queuing: clients not guaranteed to see every intermediate state
- Supports multiple clients
- Interactive and programmatic interface
  - ASCII output via cat
  - binary output to programs
- Supports client notification
  - notification via select()
- Client configurable
  - client can write a command string
  - server parses it to enable per-client behavior
58Packet Device
- Designed for message streams
- Supports multiple clients
- Supports queuing
  - round-robin service of output queues
  - delivery of messages to all/specific clients
- Client-configurable
  - input and output queue lengths
  - input filters
  - optional loopback of outputs to other clients (for snooping)
59Device Files vs Regular Files
- Regular files
  - require locking semantics to prevent race conditions between readers and writers
  - support status semantics but not queuing
  - no support for notification, polling only
- Device files
  - leverage the kernel for serialization: no locking needed
  - arbitrary control of semantics
    - queuing, text/binary, per-client configuration
  - immediate action, like a function call
    - a system call on the device triggers an immediate response from the service, rather than setting a request and waiting for the service to poll
60Interacting With em*
- Text/binary on the same device file
  - text mode enables interaction from shell and scripts
  - binary mode enables easy programmatic access to data as C structures, etc.
- EmStar device patterns support multiple concurrent clients
  - IPC channels used internally can be viewed concurrently for debugging
  - live state can be viewed in the shell (echocat -w) or using EmView
61SOS Motivation and Key Feature
- Post-deployment software updates are necessary to
  - customize the system to the environment
  - upgrade features
  - remove bugs
  - re-task the system
- Remote reprogramming is desirable
- Approach: remotely insert binary modules into the running kernel
  - software reconfiguration without interrupting system operation
  - no stop and reboot, unlike differential patching
- Performance should be superior to virtual machines
62Architecture Overview
- Static kernel
  - provides hardware abstraction and common services
  - maintains data structures to enable module loading
  - costly to modify after deployment
- Dynamic modules
  - drivers, protocols, and applications
  - inexpensive to modify after deployment
  - position independent
63SOS Kernel
- Hardware Abstraction Layer (HAL)
  - clock, UART, ADC, SPI, etc.
- Low-layer device drivers interface with the HAL
  - timer, serial framer, communications stack, etc.
- Kernel services
  - dynamic memory management
  - scheduling
  - function control blocks
64Kernel Services: Memory Management
- Fixed-partition dynamic memory allocation
  - constant allocation time
  - low overhead
- Memory management features
  - guard bytes for run-time memory overflow checks
  - ownership tracking
  - garbage collection on completion

  pkt = (uint8_t *)ker_malloc(hdr_size + sizeof(SurgeMsg), SURGE_MOD_PID);
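The features above can be illustrated with a minimal fixed-partition allocator sketch. The constants, names (`ker_malloc_sketch`, `ker_gc_sketch`), and layout here are illustrative assumptions; the real SOS allocator differs in detail:

```c
#include <assert.h>
#include <stdint.h>

#define NUM_BLOCKS  16
#define BLOCK_SIZE  32          /* fixed partition: every block the same size */
#define GUARD_BYTE  0xAA        /* canary written past the payload */

typedef struct {
    uint8_t owner;              /* module pid owning the block, 0 = free */
    uint8_t payload[BLOCK_SIZE];
    uint8_t guard;              /* overflow canary, checked on free */
} block_t;

static block_t pool[NUM_BLOCKS];

/* Constant-time allocation: scan for a free fixed-size block. */
void *ker_malloc_sketch(uint16_t size, uint8_t pid) {
    if (size > BLOCK_SIZE) return 0;   /* no splitting in fixed partitions */
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (pool[i].owner == 0) {
            pool[i].owner = pid;       /* ownership tracking */
            pool[i].guard = GUARD_BYTE;
            return pool[i].payload;
        }
    }
    return 0;
}

/* Frees every block owned by pid: "garbage collection on completion". */
int ker_gc_sketch(uint8_t pid) {
    int freed = 0;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (pool[i].owner == pid) {
            assert(pool[i].guard == GUARD_BYTE);  /* overflow check */
            pool[i].owner = 0;
            freed++;
        }
    }
    return freed;
}
```

Recording the owning pid is what lets the kernel reclaim a module's memory wholesale when the module is unloaded or misbehaves.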
65Kernel Services: Scheduling
- SOS implements non-preemptive priority scheduling via priority queues
  - an event is served when there is no higher-priority event
  - low-priority queue for scheduling most events
  - high-priority queue for time-critical events, e.g., h/w interrupts and sensitive timers
- Prevents execution in interrupt contexts

  post_long(TREE_ROUTING_PID, SURGE_MOD_PID, MSG_SEND_PACKET,
            hdr_size + sizeof(SurgeMsg), (void *)packet, SOS_MSG_DYM_MANAGED);
66Modules
- Each module is uniquely identified by its ID, or pid
- Has private state
- Represented by a message handler with prototype

    int8_t handler(void *private_state, Message *msg)

- Return value follows errno
  - SOS_OK for success; -EINVAL, -ENOMEM, etc. for failure
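How a kernel routes a message to such a handler can be sketched as follows (simplified, with assumed names; not SOS source): look up the destination module by pid, then invoke its handler with the module's private state.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define SOS_OK 0
#define MAX_MODULES 4

typedef struct { uint8_t did, sid, type; void *data; } Message;
typedef int8_t (*handler_t)(void *private_state, Message *msg);
typedef struct { uint8_t pid; void *state; handler_t handler; } module_t;

static module_t modules[MAX_MODULES];
static int nmodules = 0;

void register_module(uint8_t pid, void *state, handler_t h) {
    modules[nmodules++] = (module_t){ pid, state, h };
}

/* Dispatch by pid; an unknown destination is caught by the kernel. */
int8_t dispatch(Message *msg) {
    for (int i = 0; i < nmodules; i++)
        if (modules[i].pid == msg->did)
            return modules[i].handler(modules[i].state, msg);
    return -EINVAL;
}

/* example module: counts the messages it receives */
static int msg_count = 0;
static int8_t counter_handler(void *state, Message *msg) {
    (void)state; (void)msg;
    msg_count++;
    return SOS_OK;
}
```

Passing the private state pointer into the handler is what keeps modules position independent: a module never stores absolute addresses of its own state.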
67Kernel Services: Module Linking
- Orthogonal to the module distribution protocol
- Kernel stores the new module in a free block located in program memory
  - and critical information about the module in the module table
- Kernel calls the initialization routine for the module
  - Publish functions for other parts of the system to use

      char tmp_string[] = {'C', 'v', 'v', 0};
      ker_register_fn(TREE_ROUTING_PID, MOD_GET_HDR_SIZE, tmp_string,
                      (fn_ptr_t)tr_get_header_size);

  - Subscribe to functions supplied by other modules

      char tmp_string[] = {'C', 'v', 'v', 0};
      s->get_hdr_size = (func_u8_t *)ker_get_handle(TREE_ROUTING_PID,
                            MOD_GET_HDR_SIZE, tmp_string);

  - Set initial timers and schedule events
68Module-to-Kernel Communication
- Kernel provides system services and access to hardware

    ker_timer_start(s->pid, 0, TIMER_REPEAT, 500);
    ker_led(LED_YELLOW_OFF);

- Kernel jump table redirects system calls to handlers
  - upgrade the kernel independent of the modules
- Interrupts and messages from the kernel dispatched by a high-priority message buffer
  - low latency
  - concurrency-safe operation
69Inter-Module Communication
- Inter-module message passing
  - asynchronous communication
  - messages dispatched by a two-level priority scheduler
  - suited for services with long latency
  - type-safe binding through a publish/subscribe interface
- Inter-module function calls
  - synchronous communication
  - kernel stores pointers to functions registered by modules
  - blocking calls with low latency
  - type-safe runtime function binding
70Synchronous Communication
- A module can register a function for low-latency blocking calls (1)
- Modules that need such a function can subscribe to it by getting a function pointer pointer (i.e. **func) (2)
- When the service is needed, the module dereferences the function pointer pointer (3)
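The double indirection described above can be sketched in C (assumed names, not the SOS API): the kernel owns one function-pointer slot per published function, and subscribers hold a pointer to that slot, so the kernel can swap in a new implementation or a stub without re-linking subscribers.

```c
#include <assert.h>
#include <stdint.h>

typedef uint8_t (*func_u8_t)(void);

static uint8_t real_hdr_size(void) { return 12; }
static uint8_t stub_hdr_size(void) { return 0; }   /* e.g. after unload */

static func_u8_t fn_table[4];                      /* kernel's FCB slots */

/* publish: (re)bind the slot to an implementation (1) */
void publish(int fid, func_u8_t f)  { fn_table[fid] = f; }

/* subscribe: get a pointer to the slot, not to the function (2) */
func_u8_t *subscribe(int fid)       { return &fn_table[fid]; }
```

Usage: a subscriber keeps `func_u8_t *get_hdr_size = subscribe(0);` and calls `(*get_hdr_size)()` each time (3); the extra dereference is the price of safe, dynamic rebinding.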
71Asynchronous Communication
- A module is active when it is handling a message (2)(4)
- Message handling runs to completion and can only be interrupted by hardware interrupts
- A module can send a message to another module (3) or to the network (5)
- Messages can come from both the network (1) and the local host (3)
72Module Safety
- Problem: modules can be remotely added, removed, and modified on deployed nodes
- Accessing a module
  - if the module doesn't exist, the kernel catches messages sent to it and handles dynamically allocated memory
  - if the module exists but can't handle the message, the module's default handler gets the message and the kernel handles dynamically allocated memory
- Subscribing to a module's function
  - publishing a function includes a type description that is stored in a function control block (FCB) table
  - subscription attempts include type checks against the corresponding FCB
  - type changes/removal of published functions result in subscribers being redirected to a system stub handler function specific to that type
  - updates to functions with the same type are assumed to have the same semantics
73Module Library
- Some applications created by combining already written and tested modules
- SOS kernel facilitates loosely coupled modules
  - passing of memory ownership
  - efficient function and messaging interfaces
[Figure: Surge application with debugging modules]
74Module Design
include ltmodule.hgt typedef struct uint8_t
pid uint8_t led_on app_state DECL_MOD_STAT
E(app_state) DECL_MOD_ID(BLINK_ID) int8_t
module(void state, Message msg) app_state s
(app_state)state switch (msg-gttype)
case MSG_INIT s-gtpid msg-gtdid
s-gtled_on 0 ker_timer_start(s-gtpid, 0,
TIMER_REPEAT, 500) break case
MSG_FINAL ker_timer_stop(s-gtpid, 0)
break case MSG_TIMER_TIMEOUT
if(s-gtled_on 1) ker_led(LED_YELLOW_O
N) else ker_led(LED_YELLOW_OFF)
s-gtled_on if(s-gtled_on gt
1) s-gtled_on 0 break default
return -EINVAL return SOS_OK
- Uses standard C
- Programs created by wiring modules together
75Sensor Manager
- Enables sharing of sensor data between multiple modules
- Presents a uniform data access API to diverse sensors
- Underlying device-specific drivers register with the sensor manager
- Device-specific sensor drivers control
  - calibration
  - data interpolation
- Sensor drivers are loadable, which enables
  - post-deployment configuration of sensors
  - hot-swapping of sensors on a running node
76Application-Level Performance
Comparison of application performance in SOS, TinyOS, and the Maté VM:
- Surge forwarding delay
- Surge tree formation latency
- Surge packet delivery ratio
- Memory footprint for the base operating system with the ability to distribute and update node programs
- CPU active time for the Surge application
77Reconfiguration Performance
[Figures: energy cost of light sensor driver update; module size and energy profile for installing Surge under SOS; energy cost of Surge application update]
- Energy trade-offs
  - SOS has a slightly higher base operating cost
  - TinyOS has a significantly higher update cost
  - SOS is more energy efficient when the system is updated one or more times a week
78Platform Support
- Supported microcontrollers
  - Atmel ATmega128
    - 4 KB RAM
    - 128 KB FLASH
  - Oki ARM
    - 32 KB RAM
    - 256 KB FLASH
- Supported radio stacks
  - Chipcon CC1000
    - BMAC
  - Chipcon CC2420
    - IEEE 802.15.4 MAC (NDA required)
79Simulation Support
- Source-code-level network simulation
  - pthreads simulate hardware concurrency
  - UDP simulates a perfect radio channel
  - supports user-defined topology and heterogeneous software configuration
  - useful for verifying functional correctness
- Instruction-level simulation with Avrora
  - instruction-cycle-accurate simulation
  - simple perfect radio channel
  - useful for verifying timing information
  - see http://compilers.cs.ucla.edu/avrora/
- EmStar integration under development
80Network-Capable Messages

  typedef struct {
    sos_pid_t did;    // destination module ID
    sos_pid_t sid;    // source module ID
    uint16_t  daddr;  // destination node
    uint16_t  saddr;  // source node
    uint8_t   type;   // message type
    uint8_t   len;    // message length
    uint8_t  *data;   // payload
    uint8_t   flag;   // options
  } Message;

- Messages are best-effort by default
  - no send-done and low priority
  - can be changed via flag at runtime
- Messages are filtered when received
  - CRC check and non-promiscuous mode
  - can turn off filtering at runtime
81Contiki
- Dynamic loading of programs (vs. static)
- Multi-threaded concurrency managed execution (in addition to event-driven)
- Available on MSP430, AVR, HC12, Z80, 6502, x86, ...
- Simulation environment available for BSD/Linux/Windows
82Key ideas
- Dynamic loading of programs
  - selective reprogramming
  - static/pre-linking (early work: EmNets paper)
  - dynamic linking (recent work: SenSys paper)
  - key difference from SOS: no assumption of position independence
- Concurrency management mechanisms
  - events and threads
  - trade-offs: preemption, size
83Loadable programs
- One-way dependencies
  - Core resident in memory
    - language run-time, communication
  - If programs know the core
    - can be statically linked
    - and call core functions and reference core variables freely
- Individual programs can be loaded/unloaded
  - need to register their variable and function information with the core
85Loadable programs (contd.)
- Programs can be loaded from anywhere
  - radio (multi-hop, single-hop), EEPROM, etc.
- During software development, usually only one module changes
86Core Symbol Table
- Registry of names and addresses of
  - all externally visible variables and functions
  - of core modules and run-time libraries
- Offers an API to the linker to search and update the registry
- Created when the Contiki core binary image is compiled
  - multiple-pass process
87Linking and relocating a module
- Parse payload into code, data, symbol table, and a list of relocation entries
- Relocation entries correspond to an instruction or address in code or data that needs to be updated with a new address; each consists of
- a pointer to a symbol, such as a variable name or a function name, or a pointer to a place in the code or data
- the address of the symbol
- a relocation type, which specifies how the data or code should be updated
- Allocate memory for code (flash ROM) and data (RAM)
- Link and relocate code and data segments
- for each relocation entry, search the core symbol table and the module symbol table
- if the relocation is relative, calculate the absolute address
- Write code to flash ROM and data to RAM
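The per-entry relocation step can be sketched as below. The structures, the single 16-bit absolute relocation type, and the fixed `resolve` result are invented for illustration and do not match any real object format:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical relocation entry: a place in the module to patch,
   and the symbol name to resolve against the symbol tables. */
struct reloc {
  uint16_t offset;       /* position in the module's segment to patch */
  const char *symbol;    /* name to look up in core/module symbol table */
};

/* Stand-in for the symbol-table search described above. */
static uint16_t resolve(const char *symbol)
{
  (void)symbol;
  return 0x1234;         /* assumed address found in the symbol table */
}

/* Patch the resolved absolute address into the segment (little-endian). */
static void apply_reloc(uint8_t *segment, const struct reloc *r)
{
  uint16_t addr = resolve(r->symbol);
  segment[r->offset]     = addr & 0xff;
  segment[r->offset + 1] = addr >> 8;
}

static int demo(void)
{
  uint8_t seg[4] = { 0 };
  struct reloc r = { 1, "core_fn" };
  apply_reloc(seg, &r);
  return seg[1] == 0x34 && seg[2] == 0x12;   /* address written in place */
}
```

A real linker would additionally handle relative relocations by converting them to absolute addresses, as the slide notes.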
88Contiki size (bytes)

| Module | Code (AVR) | Code (MSP430) | RAM |
|---|---|---|---|
| Kernel | 1044 | 810 | 10 + e + p |
| Program loader | - | 658 | 8 |
| Multi-threading library | 678 | 582 | 8 + s |
| Timer library | 90 | 60 | 0 |
| Memory manager | 226 | 170 | 0 |
| Event log replicator | 1934 | 1656 | 200 |
| µIP TCP/IP stack | 5218 | 4146 | 18 + b |
89How well does it work?
- Works well
- Program typically much smaller than the entire system image (1-10%)
- Much quicker to transfer over the radio
- Reprogramming takes seconds
- Static linking can be a problem
- Small differences in the core mean the module cannot be run
- Implementation of the dynamic linker is in the SenSys paper
90Revisiting Multi-threaded Computation
(Figure: three threads blocked on the kernel, unblocked by events)
- Threads blocked, waiting for events
- Kernel unblocks threads when event occurs
- Thread runs until next blocking statement
- Each thread requires its own stack
- Larger memory usage
91Event-driven vs multi-threaded
- Multi-threaded
- + wait() statements
- + Preemption possible
- + Sequential code flow
- - Larger code overhead
- - Locking problematic
- - Larger memory requirements
- Event-driven
- - No wait() statements
- - No preemption
- - State machines
- + Compact code
- + Locking less of a problem
- + Memory efficient
How to combine them?
92Contiki event-based kernel with threads
- Kernel is event-based
- Most programs run directly on top of the kernel
- Multi-threading implemented as a library
- Threads only used if explicitly needed
- Long running computations, ...
- Preemption possible
- Responsive system even while long computations are running
93Responsiveness
94Threads implemented atop an event-based kernel
(Figure: events and threads dispatched by the event-based kernel)
95Implementing preemptive threads 1
(Figure: interaction between a thread and an event handler)
96Implementing preemptive threads 2
(Figure: the thread returns control to the event handler via yield())
97Memory management
- Memory allocated when module is loaded
- Both ROM and RAM
- Fixed block memory allocator
- Code relocation performed by the module loader
- Exercises the flash ROM evenly
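A fixed-block allocator of the kind mentioned above can be sketched as follows. This is a minimal sketch with invented names and sizes, not Contiki's actual allocator API:

```c
#include <assert.h>
#include <stddef.h>

#define BLOCK_SIZE 32    /* assumed block size for illustration */
#define NUM_BLOCKS 4     /* assumed pool size for illustration  */

static unsigned char pool[NUM_BLOCKS][BLOCK_SIZE];
static unsigned char used[NUM_BLOCKS];

/* Hand out the first free fixed-size block, or NULL when exhausted. */
static void *block_alloc(void)
{
  for (int i = 0; i < NUM_BLOCKS; i++)
    if (!used[i]) { used[i] = 1; return pool[i]; }
  return NULL;
}

/* Mark the block containing p as free again. */
static void block_free(void *p)
{
  ptrdiff_t i = ((unsigned char *)p - &pool[0][0]) / BLOCK_SIZE;
  used[i] = 0;
}

static int demo(void)
{
  void *a = block_alloc();
  void *b = block_alloc();
  block_free(a);
  void *c = block_alloc();   /* a freed block is reused, not leaked */
  return a != NULL && b != a && c == a;
}
```

Fixed-size blocks avoid fragmentation on RAM-constrained nodes, which is why this style of allocator fits the memory-management model above.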
98Protothreads light-weight stackless threads
- Protothreads: a mixture of event-driven and threaded programming
- A third concurrency mechanism
- Allows blocked waiting
- Requires no per-thread stack
- Each protothread runs inside a single C function
- 2 bytes of per-protothread state
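The local-continuation trick behind protothreads can be sketched with C macros. This is a simplified reimplementation of the idea for illustration; the real Contiki pt.h API differs:

```c
#include <assert.h>

/* The 2 bytes of per-protothread state are a "local continuation":
   the source line to resume at, used as a switch case label. */
typedef struct { unsigned short lc; } pt_t;

#define PT_BEGIN(pt)         switch ((pt)->lc) { case 0:
#define PT_WAIT_UNTIL(pt, c) (pt)->lc = __LINE__; case __LINE__: \
                             if (!(c)) return 0
#define PT_END(pt)           } (pt)->lc = 0; return 1

static int flag = 0;
static int steps = 0;

/* Runs inside a single C function; "blocks" without owning a stack. */
static int example_thread(pt_t *pt)
{
  PT_BEGIN(pt);
  steps++;
  PT_WAIT_UNTIL(pt, flag);   /* returns 0 (blocked) until flag is set */
  steps++;
  PT_END(pt);
}

static int demo(void)
{
  pt_t pt = { 0 };
  int r1 = example_thread(&pt);   /* flag unset: blocks, returns 0 */
  flag = 1;
  int r2 = example_thread(&pt);   /* resumes after the wait, returns 1 */
  return r1 == 0 && r2 == 1 && steps == 2;
}
```

Because the wait point is just a case label, local variables do not survive across a block, which is the price paid for needing no per-thread stack.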
99Mate A Virtual Machine for Sensor Networks
- Why VM?
- Large number (100s to 1000s) of nodes in a
coverage area - Some nodes will fail during operation
- Change of function during the mission
- Related Work
- PicoJava
- assumes Java bytecode execution hardware
- K Virtual Machine
- requires 160-512 KB of memory
- XML
- too complex and not enough RAM
- Scylla
- VM for mobile embedded system
100Mate features
- Small (16KB instruction memory, 1KB RAM)
- Concise (limited memory bandwidth)
- Resilience (memory protection)
- Efficient (bandwidth)
- Tailorable (user defined instructions)
101Mate in a Nutshell
- Stack architecture
- Three concurrent execution contexts
- Execution triggered by predefined events
- Tiny code capsules self-propagate into network
- Built in communication and sensing instructions
102When is Mate Preferable?
- For small number of executions
- GDI example
- Bytecode version is preferable for a program running less than 5 days
- In energy-constrained domains
- Use a Mate capsule as a general RPC engine
103Mate Architecture
- Stack based architecture
- Single shared variable
- gets/sets
- Three events
- Clock timer
- Message reception
- Message send
- Hides asynchrony
- Simplifies programming
- Less prone to bugs
104Instruction Set
- One byte per instruction
- Three classes: basic, s-type, x-type
- basic: arithmetic, halting, LED operation
- s-type: messaging system
- x-type: pushc, blez
- 8 instructions reserved for users to define
- Instruction polymorphism
- e.g. add(data, message, sensing)
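A one-byte-instruction stack machine of this shape can be sketched as below. The opcode values and names are invented for illustration and are not Mate's actual encoding; the high bit distinguishes an x-type `pushc` (operand embedded in the low 7 bits) from basic instructions:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical opcodes: high bit set = x-type pushc with embedded
   constant; otherwise a basic instruction. */
enum { OP_HALT = 0x00, OP_ADD = 0x01, OP_PUSHC = 0x80 };

static int16_t run(const uint8_t *code)
{
  int16_t stack[16];
  int sp = 0;
  for (;;) {
    uint8_t op = *code++;
    if (op & 0x80) {            /* x-type: push the embedded constant */
      stack[sp++] = op & 0x7f;
    } else if (op == OP_ADD) {  /* basic: pop two operands, push sum */
      int16_t b = stack[--sp], a = stack[--sp];
      stack[sp++] = a + b;
    } else if (op == OP_HALT) {
      return stack[sp - 1];     /* result is the top of the stack */
    }
  }
}

static int16_t demo(void)
{
  /* pushc 2; pushc 3; add; halt */
  static const uint8_t prog[] = { OP_PUSHC | 2, OP_PUSHC | 3, OP_ADD, OP_HALT };
  return run(prog);
}
```

One byte per instruction keeps programs small enough to fit a capsule into a single TOS packet, which is the point of the encoding.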
105Code Example(1)
106Code Capsules
- One capsule = 24 instructions
- Fits into a single TOS packet
- Atomic reception
- Code capsule
- Type and version information
- Types: send, receive, timer, subroutine
107Viral Code
- Capsule transmission: forw
- Forwarding other installed capsules: forwo (use within clock capsule)
- Mate checks the version number on reception of a capsule
- -> if it is newer, install it
- Versioning: 32-bit counter
- Disseminates new code over the network
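The version-check rule can be sketched as follows; the structure layout and the fixed set of four capsule types are assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical capsule header: type plus 32-bit version counter. */
typedef struct {
  uint8_t type;       /* send, receive, timer, or subroutine */
  uint32_t version;
  /* ... up to 24 instructions of bytecode ... */
} capsule_t;

static uint32_t installed_version[4];  /* one slot per capsule type */

/* Install a capsule only if its version is newer; returning 1 signals
   that the node should also re-forward it (viral propagation). */
static int on_capsule_received(const capsule_t *c)
{
  if (c->version > installed_version[c->type]) {
    installed_version[c->type] = c->version;  /* install the newer code */
    return 1;                                 /* propagate to neighbors */
  }
  return 0;                                   /* stale or duplicate: drop */
}

static int demo(void)
{
  capsule_t c = { 0, 2 };                /* type 0, version 2 */
  int first  = on_capsule_received(&c);  /* newer than version 0: install */
  int second = on_capsule_received(&c);  /* same version again: drop */
  return first == 1 && second == 0;
}
```

Dropping duplicates is what keeps the flood from echoing forever once every cell has the newest capsule.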
108Component Breakdown
- Mate runs on mica with 7286 bytes code, 603 bytes
RAM
109Network Infection Rate
- 42-node network in a 3-by-14 grid
- Radio transmission: 3-hop network
- Cell size: 15 to 30 motes
- Every mote runs its clock capsule every 20
seconds - Self-forwarding clock capsule
110Bytecodes vs. Native Code
- Mate IPS: 10,000
- Overhead: every instruction is executed as a separate TOS task
111Installation Costs
- Bytecodes have computational overhead
- But this can be compensated by using small
packets on upload (to some extent)
112Customizing Mate
- Mate is a general architecture; the user can build a customized VM
- User can select bytecodes and execution events
- Issues
- Flexibility vs. efficiency
- Customizing increases efficiency at the cost of changing requirements
- Java's solution
- General computational VM + class libraries
- Mate's approach
- More customizable solution -> let the user decide
113How to
- Select a language
- -> defines VM bytecodes
- Select execution events
- -> execution context, code image
- Select primitives
- -> beyond language functionality
114Constructing a Mate VM
- This generates a set of files, which are used to build the TOS application and to configure the scripter program
115Compiling and Running a Program
Write programs in the scripter
-> VM-specific binary code
-> Send it over the network to a VM
116Bombilla Architecture
- Once context: performs operations that need only a single execution
- 16-word heap shared among the contexts
- setvar, getvar
- Buffer holds up to ten values
- bhead, byank, bsorta
117Bombilla Instruction Set
- basic: arithmetic, halt, sensing
- m-class: access message header
- v-class: 16-word heap access
- j-class: two jump instructions
- x-class: pushc
118Enhanced Features of Bombilla
- Capsule Injector: programming environment
- Synchronization: 16-word shared heap; locking scheme
- Provides a synchronization model: handlers, invocations, resources, scheduling points, sequences
- Resource management: prevents deadlock
- Random and selective capsule forwarding
- Error state
119Discussion
- Compared to the traditional VM concept, is Mate platform-independent? Can we have it run on heterogeneous hardware?
- Security issues
- How can we trust the received capsule? Is there a way to prevent a version-number race with an adversary?
- In viral programming, is there a way to forward messages other than flooding? After a certain number of nodes are infected by a new-version capsule, can we forward based on need?
- Bombilla has some sophisticated OS features. What is the size of the program? Does a sensor node need all those features?