Transcript and Presenter's Notes

Title: Wireless Mesh with Mobility


1
Wireless Mesh with Mobility
  • Thomas F. La Porta (tlp@cse.psu.edu), Guohong
    Cao (gcao@cse.psu.edu)
  • The Pennsylvania State University
  • Students: Hosam Rowaihy, Mike Lin, Tim Bolbrock,
    Qinghua Li
  • Wireless Mesh with Mobility
  • Executive Summary
  • Schedule
  • Centralized
  • Distributed
  • Status

2
Wireless Mesh with Mobility Executive Summary
  • Network example 1: large retail back-room
  • central server acts as the database
  • mobile readers (automated and with personnel)
    keep data fresh and respond in real time
  • generalize to large warehouses with WiFi
  • Network example 2: make-shift large warehouse
  • no central server; use a distributed cache
  • multi-hop communication optimized for the inventory
    system
  • Problems
  • scheduling robot movement to meet delay
    constraints
  • locating inventory with no central controller
  • Benefits to Vendors
  • faster customer response: inventory aggressively
    updated
  • less expensive infrastructure: mobile readers
    cover large areas

3
Schedule
  • Milestones
  • Q1: Querying algorithms for multiple robots defined,
    centralized cache implemented
  • Q2: Mobile mesh implemented, CacheData
    implemented
  • Q3: Querying algorithms implemented, simulation results,
    CachePath implemented, caching policies
  • Q4: Measurements
  • Cost Share
  • Cisco: consulting
  • Vocollect: equipment and consulting
  • Accipiter: engineering and consulting
  • Platform
  • Custom (small) robot
  • Gumstix Linux processors
  • RFID readers from Vocollect

4
Centralized Architecture
  • Query algorithms (the naïve baseline is sketched after this list)
  • Naïve
  • Return to center
  • Area of Responsibility
  • Flexible Grid
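The transcript gives only the names of the query algorithms, so the following short Python sketch illustrates just the naïve baseline under a simple assumption: the mobile reader closest to the queried tag drives straight to it. The function and variable names are illustrative, not taken from the project code.

```python
import math

def naive_dispatch(reader_positions, tag_position):
    """Naive policy sketch: pick the mobile reader currently closest to the
    queried tag and move it straight to the tag.
    Returns (index of chosen reader, distance it travels)."""
    idx = min(range(len(reader_positions)),
              key=lambda i: math.dist(reader_positions[i], tag_position))
    travel = math.dist(reader_positions[idx], tag_position)
    reader_positions[idx] = tag_position  # the reader ends up at the tag
    return idx, travel

# Example: three readers on a warehouse floor, one query
readers = [(100.0, 100.0), (500.0, 500.0), (900.0, 100.0)]
print(naive_dispatch(readers, (450.0, 480.0)))  # -> (1, ~53.9 ft)
```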

5
Area of Responsibility
[Figure: area of responsibility under heavy load (small area), with resting circle]
  • Areas of responsibility
  • Change dynamically according to the queries served
    (weighted moving average; see the sketch after this list)
  • If no reader covers a crate, the closest one serves it
  • Resting circle
  • Mobile reader can reach any location within its area
    of responsibility in less than t seconds
  • Other basic scheme: return to center
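The bullets above only name the mechanisms, so here is a minimal Python sketch of one plausible reading: each reader's area of responsibility is re-centered with a weighted moving average of the queries it serves, the resting circle is sized so the reader can still reach the edge of its area within t seconds, and a tag covered by no area is served by the closest reader. The update rule, the alpha weight, and the speed parameter are assumptions for illustration only.

```python
import math

class AreaOfResponsibility:
    """Sketch of a dynamically adapting area of responsibility (AR)."""

    def __init__(self, center, radius, speed, t_max, alpha=0.2):
        self.center = center    # AR center (x, y), in ft
        self.radius = radius    # AR radius, in ft
        self.speed = speed      # reader speed, ft/s
        self.t_max = t_max      # delay bound: reach any AR point in < t_max s
        self.alpha = alpha      # weight of the newest query (moving average)

    def serve_query(self, tag_pos):
        """Pull the AR center toward served queries (weighted moving average)."""
        cx, cy = self.center
        self.center = (cx + self.alpha * (tag_pos[0] - cx),
                       cy + self.alpha * (tag_pos[1] - cy))

    def resting_radius(self):
        """Resting circle: from anywhere inside it the reader can still reach
        the far edge of its AR within t_max seconds."""
        return max(0.0, self.speed * self.t_max - self.radius)

def reader_for(areas, tag_pos):
    """If no reader's AR covers the tag, the closest reader serves it."""
    def d(a): return math.dist(a.center, tag_pos)
    covering = [a for a in areas if d(a) <= a.radius]
    return min(covering or areas, key=d)
```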

6
Area of Responsibility
  • Scenario: a query arrives for a tag located outside
    all areas of responsibility
  • Mobile RFID reader 1 calculates that it should
    move
  • Mobile RFID reader 1 moves
  • New AR is calculated

7
Rest point
  • Readers must reside on or within circumference of
    rest circle
  • Center will reposition based on movement

8
Flexible Grid
  • The center of the area of responsibility remains constant
  • Circumference changes based on movement
  • Leads to stable data distribution

9
Centralized Architecture Evaluation
  • Consider both skewed and uniform queries
  • Skewed queries are distributed using a
    burstiness algorithm to model temporal locality
    of queries and the Zipf distribution to model
    popular items (a generation sketch follows this list)
  • 1,000,000 sq. ft. warehouse with 10,000
    uniformly distributed RFID tags
  • 1000 queries to 4 and 16 mobile readers
  • Skewed and uniform results are similar
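As context for the evaluation, the sketch below shows one way a skewed query stream could be produced in Python: item popularity drawn from a Zipf distribution and temporal locality approximated by repeating the chosen item in a short burst. The Zipf exponent and burst length are assumed values, not the parameters used in the actual simulations.

```python
import random

def zipf_choice(n_items, s=1.0):
    """Pick an item index with Zipf-like popularity (rank 0 is most popular)."""
    weights = [1.0 / (rank + 1) ** s for rank in range(n_items)]
    return random.choices(range(n_items), weights=weights, k=1)[0]

def skewed_query_stream(n_queries, n_tags, burst_len=5, s=1.0):
    """Skewed workload sketch: Zipf popularity plus bursty repeats of each
    chosen tag to mimic temporal locality."""
    queries = []
    while len(queries) < n_queries:
        tag = zipf_choice(n_tags, s)
        queries.extend([tag] * min(burst_len, n_queries - len(queries)))
    return queries

print(skewed_query_stream(20, 10_000)[:10])
```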

10
Centralized Algorithm Delay Results
[Plots: query delay for 4 robots and for 16 robots]
  • Naïve solution is the best

11
Centralized Algorithm Distance Results
  • Naïve results are the best

12
Distributed Architecture
[Figure: warehouse floor of crates with mobile RFID readers]
Readers: 1. Receive queries, 2. Locate server, 3. Return answer, 4. Local cache
  • Multi-hop network may become disconnected due to
    mobility
  • Algorithm updates required
  • Connected readers run the algorithm and search for
    others while moving (a forwarding sketch follows this list)
  • Results are returned to the query point (similar
    process)
  • Implications
  • Pre-positioning may help maintain connectivity
  • Limiting movement may help maintain connectivity
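To make the multi-hop flow concrete, the Python sketch below forwards a query over whichever readers are currently connected (within radio range of one another) and hands it to the connected reader closest to the tag; when the mesh is partitioned, only readers in the query point's partition are candidates, which is why pre-positioning and limited movement matter. The flat graph model and the 300 ft range are simplifications of the real mesh behaviour.

```python
import math
from collections import deque

RANGE_FT = 300.0  # wireless transmission range assumed in the evaluation

def neighbors(positions, i):
    """Readers within radio range of reader i."""
    return [j for j in range(len(positions))
            if j != i and math.dist(positions[i], positions[j]) <= RANGE_FT]

def connected_readers(positions, source):
    """BFS over the reader mesh: all readers reachable over multiple hops."""
    seen, frontier = {source}, deque([source])
    while frontier:
        cur = frontier.popleft()
        for nxt in neighbors(positions, cur):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def route_query(positions, source, tag_pos):
    """Hand the query to the connected reader closest to the queried tag.
    In a partitioned network a farther (but connected) reader may have to move."""
    candidates = connected_readers(positions, source)
    return min(candidates, key=lambda i: math.dist(positions[i], tag_pos))
```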

13
Distributed Architecture Example and Analysis
Total delay T: expressions given for the centralized case and for a fully connected network (not reproduced in this transcript)
Definitions: Alg = RP (rest point) or Naïve (N); Net = multi-hop (MH) or centralized (C); Type = non-reader (nr) or reader (r)
14
Distributed Architecture vs. Centralized
  • More realistic case: the network has some partitions
  • Naïve algorithm
  • AR algorithms

(we may or may not pick the optimal reader)
(based on empirical data)
15
Multihop Evaluation
  • 1,000,000 sq. ft. warehouse with 10,000 RFID
    tags
  • Skewed and uniform queries (results are similar)
  • Queries now originate from query sources on the
    edge of the warehouse
  • Wireless transmission range of 300 ft.

16
Multi-hop Results
  • Flexible grid performs the best; the naïve
    algorithm is one of the worst

17
Multi-hop Results
  • Flexible grid outperforms the other algorithms by a
    significant margin

18
Analysis
  • Flexible grid outperforms the other algorithms by
    a wide margin
  • Performance can be characterized by looking at
    the secondary distance travelled
  • Secondary distance is the total distance
    travelled to respond to a query by readers that
    were not the first reader to receive the query
    (accumulated as in the sketch below)
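Following the definition above, secondary distance can be accumulated per query as in the Python sketch below; the log record format is a hypothetical one chosen only for illustration.

```python
def secondary_distance(query_log):
    """Total distance travelled, over all queries, by readers other than the
    first reader to receive each query.

    query_log entries are assumed to look like
      {"first_reader": 3, "moves": [(reader_id, distance_ft), ...]}
    """
    total = 0.0
    for query in query_log:
        for reader_id, distance in query["moves"]:
            if reader_id != query["first_reader"]:
                total += distance
    return total

log = [{"first_reader": 0, "moves": [(0, 40.0), (2, 120.0)]},
       {"first_reader": 1, "moves": [(1, 75.0)]}]
print(secondary_distance(log))  # 120.0
```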

19
Secondary Distance
  • Flexible grid has a very small secondary distance
    compared to other algorithms

20
Analysis
  • The forced structure of the flexible grid
    algorithm reduces the secondary distance
  • d_f: the distance saved by forwarding a query
  • d_f for the Flexible Grid is much higher relative to the
    overall distance travelled

21
Analysis
  • Although all algorithms begin on a grid, only the
    flexible grid algorithm retains the structure,
    which increases the efficiency of forwarding
    queries and reduces the average distance the
    reader must travel

22
Discussion
  • Centralized scheme will always be the best
  • Always choose optimal reader
  • No extra movement
  • BUT not always feasible
  • Flexible Grid scheme is best in a disconnected
    network
  • It also keeps the network more connected

23
Caching
  • Cache Path: keep a record of how to reach data
  • This is done in all mobile robots
  • Used to determine the nearest robot
  • Cache Data: keep copies of data that have been
    gathered or forwarded (both tables are sketched
    after this list)
  • Will greatly reduce query time
  • Improvement depends on
  • Cache hit/miss ratio
  • Cache time-out
  • Important factors
  • How much information is learned
  • The shortest path is not always the best for
    learning
  • Moving more robots may be better
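The Python sketch below illustrates the distinction drawn above between Cache Path (remember which robot can reach a data item) and Cache Data (keep a copy of the item itself); the class layout and field names are assumptions, not the project's implementation.

```python
import time

class RobotCache:
    """Minimal sketch of per-robot CachePath and CacheData tables."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.cache_path = {}   # tag_id -> robot believed able to reach the data
        self.cache_data = {}   # tag_id -> (value, time of last refresh)

    def learn_path(self, tag_id, robot_id):
        """CachePath: record how to reach the data."""
        self.cache_path[tag_id] = robot_id

    def store(self, tag_id, value):
        """CacheData: keep a copy of data gathered or forwarded."""
        self.cache_data[tag_id] = (value, time.time())

    def lookup(self, tag_id):
        """Answer from CacheData if still fresh; otherwise fall back to
        CachePath to pick the robot to forward the query to."""
        if tag_id in self.cache_data:
            value, stamp = self.cache_data[tag_id]
            if time.time() - stamp < self.ttl:
                return ("hit", value)
        return ("forward", self.cache_path.get(tag_id))
```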

24
Centralized Cache Policy
  • Basic: time-to-live
  • Data is considered useful if it has been
    refreshed within time T
  • Advanced: item-specific time-to-live
  • Hot items have a lower T
  • Inventory changes more frequently
  • Currently set by the manager (both policies are
    sketched below)
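A small Python sketch of the two freshness policies above: one global time-to-live for every item, and an item-specific variant in which hot items get a shorter one. The numeric TTL values are hypothetical (per the slide, the per-item values are currently set by a manager).

```python
import time

BASE_TTL = 3600.0   # seconds; the TTL used in the basic simulations
HOT_TTL = 600.0     # shorter TTL for hot items (hypothetical value)

def is_fresh_basic(last_refresh, now=None):
    """Basic policy: data is useful if refreshed within the last BASE_TTL seconds."""
    now = time.time() if now is None else now
    return now - last_refresh < BASE_TTL

def is_fresh_item_specific(last_refresh, is_hot_item, now=None):
    """Advanced policy: hot items change more often, so they get a lower TTL."""
    now = time.time() if now is None else now
    ttl = HOT_TTL if is_hot_item else BASE_TTL
    return now - last_refresh < ttl
```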

25
Centralized Basic Simulation Results (Naïve
Mobility)
  • Query latency is reduced from the non-caching case by up
    to 25% with a 3600-second TTL

Random queries, 4 robots
Random queries, 16 robots
26
Centralized Advanced Policy Simulation Results
(Naïve mobility)
  • Query latency is reduced from the non-caching case by up
    to 35% when hotspots are present
  • Hot items have a lower TTL (POP_TTL on the x-axis),
    but are queried more often, resulting in updated data
    and cache hits
  • Cold items have a long TTL, so they also experience
    cache hits

Skewed queries, cold item TTL = 3600 seconds
27
Distributed Cache Policies
  • Active vs. Passive Caching
  • Passive: cache only what is queried (typically a
    single data item)
  • Active: cache everything read between the starting
    point and the query point
  • Asymmetric Caching
  • Use different paths for traveling to and from the
    query point
  • Trade-off: learn more with active caching vs.
    traveling a longer distance
  • Cache Exchange vs. Queried Item
  • Queried Item: only the item queried is cached in
    other readers
  • Cache Exchange: readers that come into contact
    exchange all data
  • Overhead of transfer vs. information learned
    (both options are sketched after this list)
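As a concrete reading of the last trade-off, the Python sketch below contrasts caching only the queried item at readers on the reply path with a full cache exchange when two readers meet; the merge rule (keep the copy with the fresher timestamp) is an assumption.

```python
def cache_queried_item(reply_path_caches, tag_id, value, timestamp):
    """Queried Item: only the answered item is cached at readers on the reply path."""
    for cache in reply_path_caches:
        cache[tag_id] = (value, timestamp)

def cache_exchange(cache_a, cache_b):
    """Cache Exchange: readers in contact exchange all data, keeping the
    fresher copy of each item (more transfer overhead, more learned)."""
    for tag_id in set(cache_a) | set(cache_b):
        entries = [c[tag_id] for c in (cache_a, cache_b) if tag_id in c]
        freshest = max(entries, key=lambda e: e[1])   # e = (value, timestamp)
        cache_a[tag_id] = freshest
        cache_b[tag_id] = freshest
```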

28
Cache Performance Analysis Warehouse Model
29
Cache Performance Analysis
  • In general, the response time is a function of f(T),
    where f(T) is the probability of a cache hit, T is
    the cache TTL, and N is the number of tags
  • f(T) depends on λ, the rate at which data enters
    the cache
  • Separate expressions are given for the centralized
    and the distributed case (an illustrative sketch
    follows)
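The slide's expressions are not reproduced in the transcript, so the Python sketch below only illustrates the kind of relationship described: expected response time as a mix of hit and miss latencies, with a hit probability f(T) that grows with the cache fill rate λ and the TTL T. The exponential form of f(T) and all parameter values are assumptions for illustration, not the analysis from the slide.

```python
import math

def hit_probability(fill_rate, ttl, n_tags):
    """Illustrative assumed form: if data enters the cache at `fill_rate`
    items/s spread over n_tags items, the chance a given tag was refreshed
    within the last `ttl` seconds is roughly 1 - exp(-fill_rate * ttl / n_tags)."""
    return 1.0 - math.exp(-fill_rate * ttl / n_tags)

def expected_response_time(fill_rate, ttl, n_tags, t_hit, t_miss):
    """Hit and miss latencies weighted by the hit probability f(T)."""
    f = hit_probability(fill_rate, ttl, n_tags)
    return f * t_hit + (1.0 - f) * t_miss

# A higher fill rate (more robots, faster movement) lowers the expected latency
for rate in (1.0, 4.0, 16.0):
    print(rate, round(expected_response_time(rate, 3600, 10_000, 1.0, 60.0), 1))
```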

30
Analytical Results

[Plots: single robot moving vs. all robots moving]
Shows the impact of the learning rate: more robots and
higher speed lead to lower latency
31
Multihop Cache Simulation Results
  • Flexible Grid algorithm used
  • No concurrent queries (worst case)
  • Only a single robot moves at any instant
  • Query latency reduced by up to 25%

4 robots, random queries
16 robots, random queries
32
Multihop Cache Simulation Results
  • Flexible Grid algorithm used
  • Skewed queries
  • No concurrent movement (worst case)
  • Query latency reduced by up to 35%

16 robots, skewed queries
33
Multihop Cache Simulation Results
  • Concurrent queries allowed
  • Multiple robots move at once
  • More information is learned per unit time
  • Most realistic case
  • Query latency reduced by up to 70% over the case with
    only a single query at a time

16 robots, skewed queries
34
Comparison with Goals
  • Scale to networks with 10s of robots and 1,000s
    of nodes
  • Simulations cover up to 16 robots and 10,000
    tags
  • Show response times in the 15-second range
  • Extend RFID network lifetimes over an active-tag
    hierarchy by a factor of 2
  • No active tags are used, so the RFID components have no
    lifetime constraints
  • Reduce search times by a factor of 2 over a pure RFID
    solution
  • Pure RFID is equivalent to the single-robot case
    (a person with a reader)
  • We show a greater than factor-of-2 reduction when
    we go from 4 to 16 robots without caching
  • We show an additional factor-of-3 reduction with
    CacheData and concurrent queries

35
Testbed
[Photos: RFID tags and robots]
36
Measurements
  • Tim fill in

37
Status
  • Centralized Architecture
  • Architecture defined
  • Querying algorithms in place and simulated
  • Integration with robots and centralized cache
    complete
  • Distributed Architecture
  • Architecture defined
  • Mesh formation algorithms designed
  • Simulation complete
  • Integration with robots and distributed cache
    complete
  • Caching
  • CachePath in the system
  • CacheData analyzed and implemented in the simulator
  • Simulation complete
  • Porting to robots complete
  • Robots
  • Design and implementation complete
  • RFID equipment from Vocollect integrated
  • Integration with Querying and Caching complete