Title: Software-Based Memory Protection for Sensor Nodes
1. Software-Based Memory Protection for Sensor Nodes
- Ram Kumar, Eddie Kohler, Mani Srivastava
- (ram_at_ee.ucla.edu)
- CENS Technical Seminar Series
2. Memory Corruption
- Single address space CPU, shared by applications, drivers, and the OS
- Most bugs in deployed systems come from memory corruption
- Corrupted nodes trigger network-wide failures

(Diagram: sensor node address space, 0x0000 to 0x0200 - run-time stack, globals, and heap shared by apps, drivers, and OS, with no protection between them)

Memory protection is an enabling technology for building robust software for motes
3. Why is Memory Protection Hard?
- No MMU in embedded micro-controllers
- MMU hardware requires a lot of RAM
- Increases area and power consumption

Core       MMU  Cache  Area (mm2), 0.13u tech.  mW per MHz
ARM7-TDMI  No   No     0.25                     0.1
ARM720T    Yes  8 KB   2.40 (10x)               0.2 (2x)
4. Software-Based Approaches
- Software-based Fault Isolation (Sandboxing)
  - Coarse-grained protection
  - Check all memory accesses at run-time
  - Introduce low-overhead inline checks
- Application-Specific Virtual Machine (ASVM)
  - Interpreted code is safe and efficient
  - But ASVM instructions are not type-safe
5. Software-Based Approaches
- Type-safe languages
  - Language semantics prevent illegal memory accesses
  - Fine-grained memory protection
  - Challenge is to interface with non-type-safe software
  - Ignores the large existing code-base
  - Output of a type-safe compiler is harder to verify, especially with performance optimizations
- CCured - type-safe retrofitting of C code
  - Combines static analysis and run-time checks
  - Provides fine-grained memory safety
  - Difficult to interface with pre-compiled libraries (different representation of pointer types)
6. Overview
- Ideally, combine the software-based approaches
  - e.g., sandboxing for ASVM instructions
- Software-based Fault Isolation (SFI)
  - Building block for coarse-grained protection
  - Enhanced using other approaches (e.g., static analysis)
- Memory Map Manager - ensures integrity of memory accesses
- Control Flow Manager - ensures integrity of control flow
7. SOS Operating System
8. Design Goals
- Provide coarse-grained memory protection
  - Protect the OS from applications
  - Protect applications from one another
- Targeted at resource-constrained systems
  - Low RAM usage
  - Acceptable performance overhead
- Memory safety verifiable on the node
9. Outline
- Introduction
- System Components
- Memory Map
- Control Flow Manager
- Binary Re-Writer
- Binary Verifier
- Evaluation
10. System Overview

(Diagram: system components split between the Desktop and the Sensor Node)
11. System Components
- Re-writer - introduces run-time checks
- Verifier - scans for unsafe operations before admission
- Memory Map Manager - tracks fine-grained memory layout and ownership information
- Control Flow Manager - handles context switches within a single address space
12. Classical SFI (Sandboxing)
- Partition the address space of a process into contiguous domains
- Application extensions are loaded into separate domains
- Run-time checks force memory accesses into a domain's own region
- Checks have very low overhead
13. Challenges - SFI on a Mote
- Partitioning the address space is impractical
  - Total available memory is severely limited
  - Static partitioning further reduces usable memory
- Our approach
  - Permit arbitrary memory layout
  - But maintain a fine-grained map of the layout
  - Verify valid accesses through run-time checks
14. Memory Map
- Fine-grained layout and ownership information
- Partition the address space into blocks
- Allocate memory in segments (sets of contiguous blocks)
- Encoded information per block:
  - Ownership - Kernel/Free or User
  - Layout - start-of-segment bit

(Diagram: address space 0x0000 to 0x0200 divided into user-domain and kernel-domain segments)
15. Memmap in Action - User-Kernel Protection
- Block size on Mica2 - 8 bytes
- Efficiently encoded using 2 bits per block:
  - 00 - Free / start of kernel-allocated segment
  - 01 - Later portion of kernel-allocated segment
  - 10 - Start of user-allocated segment
  - 11 - Later portion of user-allocated segment

(Diagram: memory map tracking User, Kernel, and Free blocks)
16. Memmap API
- memmap_set(Blk_ID, Num_blk, Dom_ID) - updates the memory map
  - Blk_ID - ID of the starting block in a segment
  - Num_blk - number of blocks in the segment
  - Dom_ID - domain ID of the owner (e.g. USER / KERN)
- Dom_ID memmap_get(Blk_ID) - returns the domain ID of a memory block's owner
- API accessible only from a trusted domain (e.g. the kernel)
  - Property verified before loading
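The 2-bit-per-block encoding is compact enough to sketch directly. Below is a minimal C model of memmap_set/memmap_get that packs four records per byte; the constants (512 blocks of 8 bytes, giving a 128-byte map) mirror the Mica2 numbers on the slides, but the exact SOS signatures and macro names are assumptions, not the real kernel code.

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical constants: 8-byte blocks over a 4 KB range -> 512
   blocks; at 2 bits per block the map itself is 128 bytes. */
#define NUM_BLOCKS 512
#define DOM_KERN   0   /* records 0x: free / kernel-owned */
#define DOM_USER   1   /* records 1x: user-owned          */

static uint8_t memmap[NUM_BLOCKS / 4];   /* 4 two-bit records per byte */

/* Record layout (from the slides): high bit = owner, low bit = 0 for
   the start of a segment, 1 for a later block of the segment. */
void memmap_set(uint16_t blk_id, uint16_t num_blk, uint8_t dom_id)
{
    for (uint16_t i = 0; i < num_blk; i++) {
        uint16_t blk   = blk_id + i;
        uint8_t  shift = (uint8_t)((blk & 0x3) << 1);      /* bit offset */
        uint8_t  code  = (uint8_t)((dom_id << 1) | (i ? 1 : 0));
        memmap[blk >> 2] = (uint8_t)((memmap[blk >> 2] & ~(0x3 << shift))
                                     | (code << shift));
    }
}

/* Returns the owning domain of one block (start-of-segment bit dropped). */
uint8_t memmap_get(uint16_t blk_id)
{
    uint8_t rec = (memmap[blk_id >> 2] >> ((blk_id & 0x3) << 1)) & 0x3;
    return (uint8_t)(rec >> 1);
}
```

With 2 bits per block this reproduces the 128-byte map size quoted later for the blank SOS kernel on Mica2.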
17. Using the Memory Map for Protection
- Protection model: write access to a block is granted only to its owner
- Systems using the memory map need to ensure:
  - Ownership information in the memory map is current
  - Only the block owner can free or transfer ownership
  - A single trusted domain has access to the memory map API
  - The memory map is stored in protected memory
- Easy to incorporate into existing systems
  - Modify the dynamic memory allocator - malloc, free
  - Track function calls that pass memory from one domain to another
- Changes to the SOS kernel: 103 lines in the SOS memory manager (out of 12720 lines in the kernel)
18. Memmap Checker
- Enforces the protection model: write access to a block is granted only to its owner
- Checker invoked before EVERY write access
- Checker operations:
  - Look up the memory map entry for the write address
  - Verify the currently executing domain owns the block
19. Address → Memory Map Lookup
- 1 byte holds 4 memmap records
20. Optimizing the Memmap Checker
- Minimize the performance overhead of checks
- Address → memory map lookup requires multiple multi-bit shift operations
  - Micro-controllers support only single-bit shift operations
- Use a FLASH-based look-up table
  - 4x speed-up - from 32 to 8 clock cycles
- Overall overhead of a check - 66 cycles
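Why a flash table helps: extracting the 2-bit record for block (blk mod 4) needs a variable-distance shift, and the AVR shifts only one bit per instruction. A host-side sketch contrasting the two extraction paths (on a real mote the table would sit in FLASH via PROGMEM; building it in RAM here is purely for illustration):

```c
#include <stdint.h>
#include <assert.h>

/* Shift-based extraction of record idx (0..3) from a memmap byte. */
static uint8_t extract_shift(uint8_t map_byte, uint8_t idx)
{
    return (uint8_t)((map_byte >> (idx << 1)) & 0x3);
}

/* Table-based extraction: one 256 x 4 table indexed by the whole
   memmap byte, replacing the variable shift with a single load. */
static uint8_t lut[256][4];

static void build_lut(void)
{
    for (int b = 0; b < 256; b++)
        for (int i = 0; i < 4; i++)
            lut[b][i] = (uint8_t)((b >> (i << 1)) & 0x3);
}

static uint8_t extract_lut(uint8_t map_byte, uint8_t idx)
{
    return lut[map_byte][idx];
}
```

The slides report this trade (one table lookup versus a loop of single-bit shifts) cutting the lookup from 32 to 8 cycles on the AVR.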
21. Memory Map is Tunable
- Number of memmap bits per block
  - More bits → multiple protection domains
- Address range of protected memory
  - Protect only a small portion of total memory
- Block size
  - Match block size to the size of memory objects
  - Mica2 - 8 bytes, Cyclops - 128 bytes

Memory Map Overhead - 8-Byte Blocks
Bits/Block  0B-4096B    256B-3072B
2           128 B (3%)  88 B (2%)
4           256 B (6%)  176 B (4%)
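The table entries follow from simple arithmetic: (protected range / block size) blocks, times bits per block, divided by 8 bits per byte. A one-line helper (the function name is illustrative) reproduces all four entries:

```c
#include <stdint.h>
#include <assert.h>

/* Memory map size in bytes for a protected address range of `range`
   bytes, split into `block_size`-byte blocks with `bits_per_block`
   map bits each. */
static uint32_t memmap_overhead(uint32_t range, uint32_t block_size,
                                uint32_t bits_per_block)
{
    return (range / block_size) * bits_per_block / 8;
}
```

For example, the 2-bit, 0B-4096B configuration gives 4096/8 = 512 blocks, i.e. 1024 bits = 128 bytes, matching the table; the 256B-3072B column covers a 2816-byte range.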
22. Outline
- Introduction
- System Components
- Memory Map
- Control Flow Manager
- Binary Re-Writer
- Binary Verifier
- Evaluation
23. What about Control Flow?
- State within a domain can still become corrupt
  - The memory map only protects one domain from another
- Function pointers in data memory
  - Calls to arbitrary locations in code memory
- Return address on the stack
  - Single stack for the entire system
  - Returns to arbitrary locations in code memory
24. Control Flow Manager
- Ensure control-flow integrity
  - Control flow enters a domain only at designated entry points
  - Control flow leaves a domain only to the correct return address
- Track the currently active domain
  - Required for the memmap checker
- Require binary modularity
  - Program memory is partitioned
  - Only one domain per partition

(Diagram: program memory with Domain A executing `call foo` into Domain B, where foo calls a local function and returns)
25. Ensuring Control Flow Integrity
- Check all CALL and RETURN instructions
- CALL check
  - If the target address is within the bounds of the current domain, then CALL
  - Else transfer to the Cross-Domain Call Handler
- RETURN check
  - If the address on the stack is within the bounds of the current domain, then RETURN
  - Else transfer to the Cross-Domain Return Handler
- Checks are optimized for performance
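Both checks reduce to a bounds test against the current domain's program-memory partition. A minimal sketch, assuming a [lo, hi) bounds pair per domain (the struct and function names are mine, not the SOS implementation):

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* Hypothetical bounds of the currently active domain's partition. */
typedef struct {
    uint16_t lo;   /* first address of the partition        */
    uint16_t hi;   /* one past the last address              */
} domain_bounds_t;

/* CALL check: an in-domain target proceeds directly; anything else
   is routed to the cross-domain call handler, which verifies the
   entry point and records the return address. */
bool call_is_local(const domain_bounds_t *dom, uint16_t target)
{
    return target >= dom->lo && target < dom->hi;
}

/* RETURN check: the same test on the return address popped from the
   stack; out-of-domain addresses go to the cross-domain return
   handler instead of being returned to blindly. */
bool ret_is_local(const domain_bounds_t *dom, uint16_t ret_addr)
{
    return ret_addr >= dom->lo && ret_addr < dom->hi;
}
```

The fast in-domain path is the common case, which is why the slides can afford a heavier (38-cycle) handler for the rare cross-domain transfers.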
26. Cross-Domain Control Flow
- Function call from one domain to another:
  - Determine the callee domain's identity
  - Verify a valid entry point in the callee domain
  - Save the current return address
27. Cross-Domain Call

(Diagram: Domain A executes `call fooJT`; the jump table entry fooJT holds `jmp foo`, which lands at foo in Domain B, ending in `ret`)
28. Cross-Domain Return
- Cross-Domain Return Stub:
  - Verify the return address
  - Restore the caller's domain ID
  - Restore the previous return address
  - Return

(Diagram: `call foo` in the caller domain; foo's `ret` passes through the return stub back to the caller)
29. Stack Protection
- Single stack shared by all domains
- Stack bound set at cross-domain calls and returns
- Protection model: no writes beyond the latest stack bound
  - Limits corruption to the current stack frame
- Enforced by the memmap checker, which checks all write addresses

(Diagram: data memory with the downward-growing stack, shared by user and kernel, marked by the stack bound)
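The stack rule can be modeled the same way as the memmap check. Assuming the bound is the stack pointer captured at the last cross-domain transfer, and that the stack grows downward so addresses at or above the bound belong to the caller's frames, the test is one comparison (the direction of the comparison is my assumption about "beyond"):

```c
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

/* A write into the stack region is legal only below the latest
   cross-domain stack bound; everything at or above it is caller
   state and must stay untouched. */
bool stack_write_ok(uint16_t addr, uint16_t stack_bound)
{
    return addr < stack_bound;
}
```

Because the memmap checker already intercepts every write, adding this comparison confines any stack corruption to the current frame, as the slide states.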
30. Outline
- Introduction
- System Components
- Memory Map
- Control Flow Manager
- Binary Re-Writer
- Binary Verifier
- Evaluation
31. Binary Re-Writer
- Re-writer is a C program running on a PC
- Input is the raw binary output by the cross-compiler
- Performs basic-block analysis
- Inserts inline checks, e.g. on memory accesses
- Preserves the original control flow, e.g. branch targets
32. Memory Write Checks

Original instruction:
  st Z, Rsrc
Rewritten sequence:
  push X
  push R0
  movw X, Z
  mov R0, Rsrc
  call memmap_checker
  pop R0
  pop X

- Actual sequence depends upon the addressing mode
- Sequence is re-entrant and works in the presence of interrupts
- Can be improved by using dedicated registers
33. Control Flow Checks

Return instruction:
  ret       →  jmp ret_checker
Direct call instruction:
  call foo  →  ldi Z, foo
               call call_checker
Indirect call instruction:
  icall     →  call call_checker
34. Outline
- Introduction
- System Components
- Memory Map
- Control Flow Manager
- Binary Re-Writer
- Binary Verifier
- Evaluation
35. Binary Verifier
- Verification done at every node
- Correctness of the scheme depends upon the correctness of the verifier
- Verifier is very simple to implement
  - Single in-order pass over the instruction sequence
  - No state maintained by the verifier
- Verifier line count - 205 lines
- Re-writer line count - 3037 lines
36. Verified Properties
- All store instructions to data memory are sandboxed
- Store instructions to program memory are not permitted
- Static jump/call/branch targets lie within domain bounds
- Indirect jumps and calls are sandboxed
- All return instructions are sandboxed
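The single-pass, stateless structure the slides describe can be shown on a toy, pre-decoded instruction stream. Real verification decodes raw AVR opcodes; every type and name below is illustrative, not the 205-line SOS verifier:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

/* Pre-decoded instruction kinds mirroring the verified properties. */
typedef enum {
    I_STORE_SANDBOXED,  /* store rewritten into the checked sequence */
    I_STORE_RAW,        /* unsandboxed store to data memory          */
    I_STORE_PROGMEM,    /* store to program memory (e.g. AVR spm)    */
    I_STATIC_CALL,      /* call/jmp/branch with a static target      */
    I_ICALL_SANDBOXED,  /* indirect call routed through call_checker */
    I_ICALL_RAW,
    I_RET_SANDBOXED,    /* ret rewritten to jmp ret_checker          */
    I_RET_RAW,
    I_OTHER
} ikind_t;

typedef struct {
    ikind_t  kind;
    uint16_t target;    /* only meaningful for I_STATIC_CALL */
} insn_t;

/* One in-order pass, no state carried between instructions: each
   instruction is accepted or rejected on its own. */
bool verify(const insn_t *code, size_t n, uint16_t dom_lo, uint16_t dom_hi)
{
    for (size_t i = 0; i < n; i++) {
        switch (code[i].kind) {
        case I_STORE_RAW:
        case I_STORE_PROGMEM:
        case I_ICALL_RAW:
        case I_RET_RAW:
            return false;                        /* unsafe: reject module */
        case I_STATIC_CALL:
            if (code[i].target < dom_lo || code[i].target >= dom_hi)
                return false;                    /* leaves domain bounds  */
            break;
        default:
            break;                               /* sandboxed or benign   */
        }
    }
    return true;
}
```

The statelessness is what keeps the verifier small enough to trust: it never tracks register contents or control flow, only whether each instruction matches an allowed pattern.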
37. Outline
- Introduction
- System Components
- Memory Map
- Cross Domain Calls
- Binary Re-Writer
- Binary Verifier
- Evaluation
38. Resource Utilization

Type       Normal   Protected  Overhead
Prog. Mem  41690 B  47232 B    13%
Data Mem   2892 B   3040 B     5%

- Implemented the scheme in the SOS operating system
- Compiled a blank SOS kernel for the Mica2 sensor platform
- Size of the memory map - 128 bytes
- Additional memory used for storing parameters (stack bound, return address, etc.)
39. Memory Map Overhead
- API modification overhead (CPU cycles) - the cost of setting and clearing memory map bits:

API         Normal  Protected  Increase
ker_malloc  363     622        82%
ker_free    138     446        238%
change_own  55      270        418%
40. Control Flow Checks and Transfers

Operation          Cycles
cross_domain_call  38 (9x)
cross_domain_ret   38 (9x)
ker_ret_check      14
ker_icall_check    8

- Inline checks occur most frequently
- ker_ret_check - push and pop of the return address
- Module verification - 175 ms for a 2600-byte module
41. Impact on Module Size

Name          Normal  Protected  Inc.  Instr.
Blink         246 B   342 B      39%   10
Surge         688 B   1170 B     70%   41
Tree Routing  2616 B  4584 B     75%   136
Time Sync     1070 B  1992 B     86%   53

- Code size increase is due to the inline checks
- Can be reduced if performance is not critical (true for most sensor network apps)
- Increased cost for module distribution
- No change in data memory used
42. Performance Impact
- Experiment setup:
  - 3-hop linear network simulated in Avrora
  - Simulation executed for 30 minutes
  - Tree Routing and Surge modules inserted into the network
  - Data packets transmitted every 4 seconds
  - Control packets transmitted every 20 seconds
- 1.7% increase in relative CPU utilization
  - Absolute increase in CPU utilization - 8.41% to 8.56%
- 164 run-time checks introduced; checks executed 20000 times
  - Can be reduced by introducing fewer checks
43. Deployment Experience
- Run-time checker signaled a violation in Surge
- Offending source code in Surge:

  hdr_size = SOS_CALL(s->get_hdr_size, proto);
  s->smsg = (SurgeMsg*)(pkt + hdr_size);
  s->smsg->type = SURGE_TYPE_SENSORREADING;

- SOS_CALL fails under some conditions and returns -1
- The unchecked return value was used as a buffer offset
- The protection mechanism prevents such corruption
44. Conclusion
- Software-based memory protection
  - Enabling technology for reliable software systems
- Memory map and cross-domain calls
  - Building blocks for software-based fault isolation
  - Low resource utilization
  - Minimal performance overhead
- Widely applicable:
  - SOS kernel with dynamic modules
  - TinyOS components using dynamic memory
  - Natively implemented ASVM instructions
45. Future Work
- Explore CPU architecture extensions
  - Prototype AVR implementation in progress
- Static analysis of the binary
  - Reduce the number of inline checks
  - Improve overall system performance
  - Increases the complexity of the verifier
46. Thank You!
http://nesl.ee.ucla.edu/projects/sos-1.x
- Ram Kumar
- CENS Seminar
- October 20, 2006
47. SOS Memory Layout
- Static kernel state - accessed only by the kernel
- Heap - dynamically allocated, shared by kernel and applications
- Stack - shared by kernel and applications

(Diagram: address space 0x0000 to 0x0200 - static kernel state, then the dynamically allocated heap, then the run-time stack)
48. Reliable Sensor Networks
- Reliability is a broad and challenging goal
- Data integrity - how do we trust data from our sensors?
- Network integrity - how do we make the network resilient to failures?
- System integrity - how do we develop robust software for sensors?