Title: Multiple Controlled Mobile Elements (Data Mules) for Data Collection in Sensor Networks
1. Multiple Controlled Mobile Elements (Data Mules) for Data Collection in Sensor Networks
- David Jea
- Arun Somasundara
- Mani Srivastava
2. Sensor Networks
- Event Tracking
- Habitat Monitoring
3. Data Collection in Sensor Networks: Static Multihop Routing
- More burden on certain nodes
- Need to form a connected network
4. Data Collection in Sensor Networks: Mobile Base Station
- Solves both problems of static multihop routing
- No over-burdened nodes (Increased Lifetime)
- Need not form a connected network
5. Mobility Alternatives for Base Station
6. Controlled Mobility
- Control in Time Domain
  - Fixed trail
  - Adaptively decide the speed with which the mobile moves to maximize data collection (MobiSys 2004)
- Control in Space Domain
  - Decide where the mobile goes (RTSS 2004)
7. This Paper
- Multiple Controlled Mobile Elements
- Decide the number of mobiles
- Load balancing (given the number of mobiles)
(Controlled Mobile Element = Data Mule)
8. Recap: Single Data Mule
- Given
  - Single Data Mule
  - Fixed path
- Goal
  - Schedule of the Data Mule to maximize data collection
- Design
  - Network algorithms
  - Adaptive motion control
9-11. Recap: Single Data Mule (Network Algorithms)
- Initialization
  - Set up routing trees
- Local multihops
  - Nodes not on the path send data to on-path nodes
- Data collection by the Data Mule
12. Recap: Single Data Mule (Motion Control Algorithms)
- Given RTT (round-trip time)
- Move at constant speed s = trail_length / RTT
- Change the speed adaptively to maximize data collection
- Stop at all nodes to clear their buffers
- RTT depends on the number of nodes
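The constant-speed baseline above is simply the trail length divided by the RTT; a minimal sketch (function and parameter names are illustrative, not from the paper):

```python
def baseline_speed(trail_length, rtt):
    """Constant speed at which one loop of the fixed trail takes exactly RTT.

    The adaptive scheme deviates from this baseline, slowing down near
    nodes with more buffered data and speeding up elsewhere.
    """
    return trail_length / rtt
```

For example, a 500 m trail and a 250 s RTT give a baseline speed of 2 m/s.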
13. Scalability
- More nodes
  - For a given RTT, data must be collected from more nodes in the same time
- Stopping at all nodes
  - Longer time to complete a round
  - Buffers overflow before the next visit
14. Solution: Multiple Data Mules
- Divide the area into equal parts, with one Data Mule in each
- Each Data Mule and its corresponding nodes run the single-Data-Mule algorithm
- Works if nodes are uniformly randomly distributed
  - Each Data Mule then services approximately the same number of nodes
- Two issues remain
  - Number of Data Mules
  - Handling of nodes shared by Data Mules
15. Multiple Data Mules (a): Number of Data Mules
- Use the second form of motion control: the Data Mule stops at each node
- buffer_fill_time: time to fill a node's buffer
- service_time: time for the Data Mule to empty a node's buffer
- RTT: round-trip time for the Data Mule
  - RTT = mule_travel_time + (num_nodes × service_time)
- If RTT ≤ buffer_fill_time: 1 Data Mule
- Otherwise: ceil(RTT / buffer_fill_time) Data Mules
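The rule above can be sketched directly; this is a hedged reading of the slide (the additive form of RTT is inferred from the bullet combining mule_travel_time with num_nodes × service_time, and the names are illustrative):

```python
import math

def num_data_mules(mule_travel_time, num_nodes, service_time, buffer_fill_time):
    """Mules needed so every node is visited before its buffer refills.

    RTT is the full round trip: travel time along the path plus the
    time spent stopped at each node emptying its buffer.
    """
    rtt = mule_travel_time + num_nodes * service_time
    if rtt <= buffer_fill_time:
        return 1
    return math.ceil(rtt / buffer_fill_time)
```

With a 100 s travel time, 30 nodes, a 5 s service time, and a 100 s buffer_fill_time, RTT is 250 s, so three mules are needed.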
16. Multiple Data Mules (b): Common Nodes
- Nodes are serviced by the closer Data Mule
  - N1, N2 by M1
  - N4, N5 by M2
- Ties broken randomly
  - N3 can be serviced by either
17. Necessity of Load Balancing
- The multiple-Data-Mule solution works if nodes are uniformly randomly distributed
- In practice
  - Nodes will be placed by field experts, leading to a non-uniform distribution
  - It may not be feasible to run Data Mule paths anywhere we want
- Problem
  - Each Data Mule will serve a different number of nodes
18. Problem Statement
- Assumption: each node is within one hop of
  - at least 1 Data Mule
  - at most 2 Data Mules
- non_shareable nodes can be served by only a single Data Mule
- shareable nodes can be attached to either of two mules
- Find a Data Mule assignment for the shareable nodes so that each mule services roughly the same number of nodes
19. Example 1
- 50 nodes, 2 Data Mules M1 and M2
- M1 has 25 non_shareable nodes
- M2 has 5 non_shareable nodes
- 20 shareable nodes
- Average load: 25 nodes per mule
[Figure: node counts 25 (M1), 20 (shared), 5 (M2)]
20. Example 2
- 50 nodes, 2 Data Mules M1 and M2
- M1 has 35 non_shareable nodes
- M2 has 5 non_shareable nodes
- 10 shareable nodes
- Average load: 25 nodes per mule
[Figure: node counts 35 (M1), 10 (shared), 5 (M2)]
21. Load Balancing Algorithm
- Initially, all Data Mules are in one group
- Try to make the load of each mule equal to the average load of that group
- Split into two groups when some mule must take less or more than the group average
- Recalculate the average loads of the two groups
- Balance the load of each group recursively
- Recursion terminates on reaching the last mule of a group
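A much-simplified sketch of the idea: a single left-to-right pass assigning shareable nodes toward the global average. The paper's algorithm additionally applies the recursive group splitting described below; all names here are illustrative.

```python
def balance(fixed, shared):
    """Assign shareable nodes between adjacent mules, aiming at the average.

    fixed[i]  -- non_shareable nodes of mule i
    shared[i] -- shareable nodes between mule i and mule i+1
    Returns the resulting load of each mule.
    """
    n = len(fixed)
    avg = (sum(fixed) + sum(shared)) / n
    loads = []
    carry = 0  # shareable nodes the previous mule left to this one
    for i in range(n):
        base = fixed[i] + carry
        take = 0
        if i < n - 1:
            # take just enough of the right-hand shared pool to reach avg
            take = min(shared[i], max(0, round(avg - base)))
            carry = shared[i] - take
        loads.append(base + take)
    return loads
```

On Example 1 (fixed loads 25 and 5, 20 shared) this yields loads of 25 and 25; on Example 2 (35 and 5, 10 shared) it yields 35 and 15, showing that no assignment can balance that instance.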
22. Group Split Condition
- Condition 1: the minimum load that must be assigned to the mule under consideration is more than the group average
- Condition 2: the maximum load that can be assigned to the mule is less than the group average
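The two conditions can be expressed as one predicate (a sketch; names are illustrative):

```python
def must_split(min_load, max_load, group_avg):
    """True when the current mule cannot be brought to the group average.

    min_load -- load if the mule takes none of its shareable nodes
    max_load -- load if the mule takes all of them
    """
    return min_load > group_avg or max_load < group_avg
```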
23. Illustration A
- Condition 1 triggers a group split
[Figure: six mules, numbered 1-6]
24. Illustration A (continued)
- Group 1: the average load each mule should carry increases
- Group 2: the average load each mule should carry decreases
25. Illustration B
- Condition 2 triggers a group split
[Figure: six mules, numbered 1-6]
26. Illustration B (continued)
- Condition 1 triggers a further group split
- Group 2: the average load each mule should carry increases
- Group 1: the average load each mule should carry decreases
27. Illustration B (continued)
- Final grouping: Group 1, plus the two halves of Group 2 (one with increased, one with decreased average load)
[Figure: six mules, numbered 1-6]
28. Multiple Data Mules: System
- Initialization: Data Mules collect a list of the nodes on their paths
- Leader election: mules select one leader and transmit their own information to it
- Load balancing: decide the number of shareable nodes each mule should serve
- Assignment: assign a Data Mule to each node based on the above result
- Data collection: mules collect data from their designated nodes
29. Simulation Methodology
- Evaluated on TinyOS/TOSSIM
- Tython used to simulate mobility
- 3 schemes for sharing the load
  - First Come First Serve (FCFS)
  - Equal sharing
  - Load balancing algorithm
30. Simulation Topology
31. Simulation Results
[Chart: average packets per node per round vs. Data Mule ID, for FCFS, Equal sharing, and Load Balance]
32. Conclusion and Future Work
- Controlled mobile elements to collect data in wireless sensor networks
- A load balancing algorithm to determine the number of nodes each Data Mule should serve
- Future work
  - Remove the assumption that each node can talk to at least one mule and at most two mules
  - Consider the costs of multihop forwarding
  - Allow mobile elements to be added or removed at system runtime