Title: Flush: A Reliable Bulk Transport Protocol for Multihop Wireless Networks
1. Flush: A Reliable Bulk Transport Protocol for Multihop Wireless Networks
- Sukun Kim, Rodrigo Fonseca, Prabal Dutta, Arsalan Tavakoli, David Culler, Philip Levis, Scott Shenker, and Ion Stoica
- University of California at Berkeley, International Computer Science Institute, Samsung Electronics, Stanford University
- SenSys 2007
2Motivating Example
Sausalito (north)
SF (south)
500 ft
1125 ft
4200 ft
246 ft
56 nodes
8 nodes
Structural Health Monitoring of the Golden Gate
Bridge
- All data from all nodes are needed
- As quickly as possible
- Collecting data from one node at a time is
completely acceptable - Over 46 hop network !
3. Introduction
- Target applications
- Structural health monitoring, volcanic activity monitoring, bulk data collection
- One flow at a time
- Removes inter-path interference
- Easier to optimize for intra-path interference
- Built on top of the MAC layer
- No merging with the MAC layer, for easy porting
4. Table of Contents
- Introduction
- Algorithm
- Implementation
- Evaluation
- Discussion
- Related Work
- Conclusion
5. Flush Algorithm Overview
- Receiver-initiated
- Selective NACK
- Hop-by-hop rate control
- Empirically measured interference range
6. Rate Control
[Figure: pipeline of nodes 8 → 0 with an interferer; snapshots show which nodes can transmit concurrently]
1/Rate = Packet Interval = d8 + d7 + d6 + d5
dX = packet transmission time at node X
7. Interference Range > Reception Range
[Figure: received signal strength against three levels — Noise Floor, Noise Floor + SNR Threshold, and Noise Floor + 2 × SNR Threshold; a signal just above Noise Floor + SNR Threshold is vulnerable to a jammer, while a signal above Noise Floor + 2 × SNR Threshold is not]
- SNR Threshold: the minimum SNR needed to decode a packet
- Jammer: a node that can conflict with the transmission, but cannot be heard
8. Identifying the Interference Set
[Figure: CDF (fraction of links) of the difference between the received signal strength from a predecessor and the local noise floor]
A large fraction of interferers are detectable and avoidable
9. Implementation: Control Information
- Control information is snooped
- d: packet transmission time (1 byte)
- f: sum of the d values of interfering nodes (1 byte)
- D: packet interval = 1/Rate (1 byte)
- d, f, and D are put into the packet header, and exchanged through snooping
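As a sketch, the three snooped control bytes could be packed like this (the wire format and field order are assumptions; only the three fields and their 1-byte sizes come from the slide):

```python
import struct

# d: packet transmission time, f: sum of interferers' d values,
# D: packet interval (1/rate) -- one byte each, per the slide.
def pack_control(d: int, f: int, D: int) -> bytes:
    # "BBB" = three unsigned bytes; the field order is an assumption.
    return struct.pack("BBB", d, f, D)

def unpack_control(buf: bytes) -> tuple:
    # A snooping neighbor reads the same three bytes back out.
    return struct.unpack("BBB", buf[:3])

header = pack_control(d=8, f=13, D=21)
assert unpack_control(header) == (8, 13, 21)
```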
10. Implementation: Rate-limited Queue
- 16-deep rate-limited queue
- Enforces departure delay D(i)
- When a node becomes congested (depth 5), it doubles the delay advertised to its descendants
- But it continues to drain its own queue with the smaller delay until it is no longer congested
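A minimal sketch of this congestion rule (the depth of 16, the threshold of 5, and the doubling come from the slide; the class shape and names are illustrative):

```python
from collections import deque

class RateLimitedQueue:
    DEPTH = 16            # 16-deep queue, per the slide
    CONGESTION_DEPTH = 5  # congestion threshold, per the slide

    def __init__(self, delay: float):
        self.queue = deque()
        self.delay = delay  # delay the node itself keeps draining with

    def push(self, pkt) -> bool:
        if len(self.queue) >= self.DEPTH:
            return False  # queue full: drop
        self.queue.append(pkt)
        return True

    def advertised_delay(self) -> float:
        # Congested: advertise a doubled delay to descendants,
        # while still draining locally with the smaller delay.
        if len(self.queue) >= self.CONGESTION_DEPTH:
            return 2 * self.delay
        return self.delay

    def pop(self):
        return self.queue.popleft() if self.queue else None

q = RateLimitedQueue(delay=0.05)
for seq in range(5):
    q.push(seq)
assert q.advertised_delay() == 0.10  # congested: descendants slow down
q.pop(); q.pop()
assert q.advertised_delay() == 0.05  # drained below the threshold
```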
[Figure: Mirage testbed in Intel Research Berkeley — 100 MicaZ nodes (purple nodes shown), network diameter of 6–7 hops; sink marked]
12. Packet Throughput of Different Fixed Rates
[Figure: effective throughput (pkt/s) of fixed-rate streams over different hop counts]
No fixed rate is always optimal
13. Packet Throughput of Flush
[Figure: overall throughput (pkt/s) — effective packet throughput of Flush compared to the best fixed rate at each hop]
Flush tracks the best fixed packet rate
14. Bandwidth of Flush
[Figure: overall bandwidth (B/s) — effective bandwidth of Flush compared to the best fixed rate at each hop]
Flush's protocol overhead reduces the effective data rate
15. Fraction of Data Transferred in Different Phases
- Fraction of data transferred from the 6th hop during the transfer phase and the acknowledgment phase
- Greedy best-effort routing is unreliable, and exhibits a loss rate of 43.5%; a higher-than-sustainable rate leads to a high loss rate
16. Amount of Time Spent in Different Phases
- Fraction of time spent in different stages
- A retransmission during the acknowledgment phase is expensive, and leads to poor throughput
17. Packet Throughput in the Transfer Phase
[Figure: transfer-phase throughput (pkt/s) — effective goodput during the transfer phase]
Flush provides comparable goodput at a lower loss rate, which reduces the time spent in the expensive acknowledgment phase and thus increases the effective bandwidth
18. Packet Rate over Time for a Source
- Flush-e2e has no in-network rate control
- The source is 7 hops away; data is smoothed by averaging 16 values
- Flush approximates the best fixed rate with the least variance
19. Maximum Queue Occupancy across All Nodes for Each Packet
- Flush exhibits more stable queue occupancies than Flush-e2e
20. Sending Rate at a Lossy Link
[Figure: sending rate over time on a chain of nodes 6 → 0]
- Packets were dropped from hop 3 to hop 2 with 50% probability between 7 and 17 seconds
- Both Flush and Flush-e2e adapt, while the fixed rate overflows its queue
21. Queue Length at a Lossy Link
Flush and Flush-e2e adapt, while the fixed rate overflows its queue
22. Route Change Experiment
[Figure: 5-hop path before and after the route change — nodes 5 and 4, with the subpath 3a/2a/1a switching to 3b/2b/1b, down to sink 0]
- We started a transfer over a 5-hop path
- Approximately 21 seconds into the experiment, we forced the node 4 hops from the sink to switch its next hop
- Node 4's next hop changed, changing all nodes in the subpath to the root
- No packets were lost, and Flush adapted quickly to the change
23. Scalability Test
[Figure: overall throughput (B/s) from the real-world outdoor scalability test, where 79 nodes formed a 48-hop network with a 35 B payload size]
Flush closely tracks or exceeds the best possible fixed rate at all hop distances that we tested
25. Discussion
- High-power node
- Reduces hop count and interference
- Not an option for many structural health monitoring deployments, due to power and space constraints
- Interactions with routing
- The link estimator of a routing layer breaks down under heavy traffic
26. Related Work
- Li et al.: capacity of a chain of nodes limited by interference, using 802.11
- ATP, W-TCP: rate-based transmission in the Internet
- Wisden: concurrent transmission without a mechanism for congestion control
- Fetch: single flow, selective NACK, no mention of rate control
27. Conclusion
- Rate-based flow control
- Directly measures intra-path interference at each hop
- Controls the rate based on interference information
- Lightweight solution reduces complexity
- Overhearing is used to measure interference and to exchange information
- Two rules to determine a rate
- At each node, Flush attempts to send as fast as possible without causing interference at the next hop along the flow
- A node's sending rate cannot exceed the sending rate of its successor
- In combination, Flush provides bandwidth as good as the best fixed rate, and also adapts better
28. Questions
30. Reliability
[Figure: selective-NACK example between Source and Sink — the sink requests missing packets 2, 4, 5, and later 4, 9; each request is forwarded hop by hop toward the source and answered with retransmissions]
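The retransmission scheme in the figure can be sketched as a toy model of receiver-initiated selective NACK (the packet numbers follow the figure; the function shape is an assumption):

```python
def missing_packets(received: set, total: int) -> list:
    """Sink-side: scan the receive buffer and NACK only the gaps."""
    return [seq for seq in range(total) if seq not in received]

# As in the figure: the sink is missing packets 2, 4, and 5,
# so a single request for (2, 4, 5) travels back to the source.
received = set(range(10)) - {2, 4, 5}
assert missing_packets(received, 10) == [2, 4, 5]
```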
31. Rate Control: Conceptual Model
Assuming a disk model, with N = number of nodes and I = interference range
[Figure: achievable rate as a function of N and I]
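One plausible reading of the disk-model bound, written out (an assumption — the slide's exact formula is not recoverable): if each transmission occupies one packet time and interferes over $I$ hops, a new packet can enter the pipeline only every $\min(N, I+2)$ packet times, giving

```latex
r_{\max} \;=\; \frac{1}{\min(N,\; I+2)} \quad \text{packets per packet time.}
```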
32. Rate Control
1. At each node, Flush attempts to send as fast as possible without causing interference at the next hop along the flow
2. A node's sending rate cannot exceed the sending rate of its successor
[Figure: pipeline of nodes 8 → 3; successive packet intervals along the path: d8, then d8 + H7 = d8 + d7 + f7, then d8 + d7 + d6 + d5]
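The two rules above can be combined into a single per-node interval computation (a sketch; the max-composition of the two rules is an assumption, and d_i, f_i, D follow the slides' symbols):

```python
def packet_interval(d_i: float, f_i: float, D_succ: float) -> float:
    """Rule 1: leave room for our own transmission (d_i) plus the
    transmissions we would interfere with (f_i).
    Rule 2: never send faster than the successor, i.e. our interval
    can be no smaller than the successor's interval D_succ."""
    return max(d_i + f_i, D_succ)

# Toy 4-hop pipeline, propagating intervals along the path:
D = 0.0
for d_i, f_i in [(1.0, 3.0), (1.0, 2.0), (1.0, 1.0)]:
    D = packet_interval(d_i, f_i, D)
assert D == 4.0  # the widest hop (1 + 3) dominates the whole path
```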
33. Implementation
- RSSI is measured by snooping
- Information is also snooped
- d, f, and D are put into the packet header, and exchanged through snooping
- d, f, and D take 1 byte each, 3 bytes total
- Cutoff
- A node i considers a successor node (i-j) an interferer of node i+1 if, for any j > 1, rssi(i+1) - rssi(i-j) < 10 dBm
- The threshold of 10 dBm was chosen after empirically evaluating a range of values
34. Implementation
- 16-deep rate-limited queue
- Enforces departure delay D(i)
- When a node becomes congested (depth 5), it doubles the delay advertised to its descendants
- But it continues to drain its own queue with the smaller delay until it is no longer congested
- Protocol overhead
- Of the 22 B provided by the routing layer: 2 B sequence number + 3 B control fields + 17 B payload
35. Test Methodology
- Mirage testbed in Intel Research Berkeley, consisting of 100 MicaZ nodes
- Transmit power of -11 dBm
- Diameter of 6–7 hops
- Average of 4 runs
36. Bottom-line Performance
- High-power node
- Reduces hop count and interference
- Not an option on the Golden Gate Bridge, due to power and space constraints
- Interactions with routing
- The link estimator of a routing layer breaks down under heavy traffic
- Bottom-line performance???
37. Average Number of Transmissions per Node for Sending 1,000 Packets
38. Bandwidth in the Transfer Phase
[Figure: transfer-phase throughput (B/s) — effective goodput during the transfer phase]
Effective goodput is computed as the number of unique packets received over the duration of the transfer phase
39. Details of Queue Length for Flush-e2e
40. Memory and Code Footprint
43. [Figure: node chain 6 → 0 from the lossy-link experiment]
44. [Figure: route-change topology — nodes 5, 4, 3a/3b, 2a/2b, 1a/1b, sink 0]
45. Motivating Example
- All data from every node is needed
- Partial data has little or no value
- Must work over 46 hops