1
Distributed Dynamic Scheduling for End-to-End
Rate Guarantees in Wireless Ad Hoc Networks
(Using Asynchronous TDMA)
  • T. Salonidis, L. Tassiulas
  • Presented by Jon Leighton
  • jtleight@udel.edu

2
What exactly does that mean?
  • No central algorithm is used, and no global
    network knowledge is required.
  • Changes in network topology (mobility) and in
    session demand are supported.
  • Each flow over the network is assigned a
    guaranteed max-min fair (MMF) rate.
  • Nodes communicate using TDMA, but without
    synchronizing their clocks.

3
Outline of Presentation
  • Asynchronous TDMA architecture
  • Dynamic Link Scheduling
  • Local feasibility conditions
  • Distributed coordination system
  • Distributed link scheduling algorithm
  • End-to-end Fairness and Rate Guarantees
  • Experiments and Results
  • Conclusions

4
Asynchronous TDMA
  • Each node communicates on at most one link at a
    time.
  • No hidden terminal problem, thanks to the use of
    orthogonal channels and/or directional antennae.
  • Each node uses its own clock to divide time into
    fixed-size full-duplex slots.
  • Nodes transmit and receive on their adjacent
    links according to their own local schedule of
    period T_system slots.
  • On each link, one node is the master and the
    other is the slave.
  • The master node controls the reference time on a
    link.

5
Asynchronous TDMA
  • The network can be modeled as a directed graph
    G(N,E), with master nodes pointing to slaves.
  • Any set of links that share no common node may
    be active simultaneously.
  • A node may be the master on some of its links,
    and the slave on its other links.
  • Time slots on master and slave nodes will not, in
    general, be aligned.

6
Asynchronous TDMA
  • If the master node assigns t_l consecutive time
    slots to link l, then the slave must assign
    t_l + 1 time slots to accommodate the lack of
    synchronicity. (With misaligned clocks, a window
    of t_l master slots can overlap t_l + 1 slave
    slots.)

7
Asynchronous TDMA
  • Questions
  • What happens if the number of slots assigned to a
    particular link needs to change?
  • How do the endpoint nodes on a link find and
    agree on what slots to use for that link?
  • How can we be sure that no node is asked to
    create a schedule that is not feasible?

8
Dynamic Link Scheduling
  • Local feasibility conditions.
  • Even with global knowledge, deciding whether a
    particular set of slot demands for each link is
    feasible, in a period of T_system slots, is
    NP-complete.
  • To ensure feasibility solely on the basis of
    local knowledge, you must either control the
    topology or underutilize the network.
  • For uncontrolled topologies, feasibility can be
    guaranteed if every node offers no more than
    ⌊T_system/3⌋ slots to all its adjacent links.
  • For bipartite topologies, feasibility can be
    guaranteed if every node offers no more than
    ⌊T_system/2⌋ slots to all its adjacent links.
  • For either topology there are many cases where
    many more slots could safely be offered; in
    general, however, only offers meeting these
    bounds can be guaranteed. (A local feasibility
    check is sketched below.)
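A minimal sketch, in Python, of the local feasibility check these
bounds allow. The function names and data layout are illustrative
assumptions, not the paper's interface.

    from math import floor

    def local_budget(t_system, topology="uncontrolled"):
        """Slots a node may safely offer across all its adjacent links."""
        if topology == "bipartite":
            return floor(t_system / 2)
        return floor(t_system / 3)    # uncontrolled (arbitrary) topology

    def locally_feasible(adjacent_demands, t_system, topology="uncontrolled"):
        """True if the summed adjacent link demands fit the local budget."""
        return sum(adjacent_demands) <= local_budget(t_system, topology)

    # Example: T_system = 50; three adjacent links demand 5, 6, and 4 slots.
    print(locally_feasible([5, 6, 4], 50))   # True:  15 <= floor(50/3) = 16
    print(locally_feasible([5, 6, 6], 50))   # False: 17 > 16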

9
Dynamic Link Scheduling
  • Local feasibility conditions (cont.)
  • What is the best we can hope to do? That is, how
    many of the T_system slots can we hope to offer
    in the best case?
  • We can offer enough slots that the busiest node
    will utilize all T_system slots.
  • In general, this cannot be accomplished.
  • It turns out that it can be accomplished for tree
    topologies.

10
Dynamic Link Scheduling
  • Summary of topology-based conditions for local
    determination of feasibility: each node offers at
    most ⌊T_system/3⌋ slots for uncontrolled
    topologies and at most ⌊T_system/2⌋ slots for
    bipartite topologies, while tree topologies allow
    full utilization of the busiest node.
11
Dynamic Link Scheduling
  • Distributed coordination mechanism
  • At any given time, several links may be
    asynchronously triggered for rate adjustment in
    parallel.
  • Link rate adjustment is triggered when an
    endpoint node is notified of a change in the link
    demand, or is notified of a schedule change by an
    adjacent node.
  • Each node can only be involved in one link rate
    adjustment at a time. Upon initiation, both
    nodes set their "busy" bit, exchange their
    current schedules, and one node determines the
    new schedule.
  • The updated schedule is stored but not
    implemented until notification has been sent to
    all affected neighbors and they have acknowledged
    the modification.
  • The "busy" bit is then cleared on both nodes.
  • During the rate adjustment process, the current
    TDMA schedule is maintained, and control packets
    are sent using its slots. (A sketch of this
    handshake follows.)
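A minimal Python sketch of the handshake described above. All names
are illustrative, and the schedule computation is a placeholder; the
real protocol exchanges its messages over the link's existing TDMA
slots.

    class Node:
        def __init__(self, name):
            self.name = name
            self.busy = False        # the "busy" bit
            self.schedule = {}       # link name -> (start_slot, n_slots)
            self.neighbors = []      # nodes that must acknowledge a change

        def acknowledge(self, new_schedule):
            return True              # a real node would check for conflicts

    def adjust_link(a, b, link, new_slots):
        """One link rate adjustment; returns True if it was committed."""
        if a.busy or b.busy:
            return False             # one adjustment per node at a time
        a.busy = b.busy = True
        # Endpoints exchange schedules and one of them computes the
        # update; here we just append the new window (placeholder).
        start = sum(n for _, n in a.schedule.values())
        updated = dict(a.schedule, **{link: (start, new_slots)})
        # Store the update, but commit only after every affected neighbor
        # acknowledges; the old schedule keeps carrying traffic meanwhile.
        if all(n.acknowledge(updated) for n in a.neighbors + b.neighbors):
            a.schedule = dict(updated)
            b.schedule = dict(updated)
        a.busy = b.busy = False
        return True

    a, b = Node("A"), Node("B")
    adjust_link(a, b, "A-B", new_slots=3)
    print(a.schedule)                # -> {'A-B': (0, 3)}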

12
Dynamic Link Scheduling
  • Link Scheduling Algorithm (tree topologies)
  • The link scheduling algorithm is invoked in
    response to a need to adjust link rates.
  • Each node knows only its own parent/child
    relationships, and the current demand on each of
    its adjacent links.
  • The algorithm may start from any TDMA schedule,
    and will converge to a new schedule satisfying
    the current demand requirements.
  • Each node tries to satisfy all its child links.
  • Satisfaction means
  • Each child link is scheduled in a single window
    of time slots.
  • The size of the window exactly meets the demand.

13
Dynamic Link Scheduling
14
Dynamic Link Scheduling
  • Consider a node u.
  • The window of slots required for the parent link
    is chosen by the parent node and enforced on node
    u.
  • Node u then selects and enforces a slot window
    for each of its unsatisfied child links.
  • Each child link is assigned a priority based on
    the order its slots occur in node u's current
    schedule.
  • Node u checks its children for satisfaction in
    order of decreasing priority.
  • If all children are satisfied, the link
    scheduling algorithm terminates.
  • If a child is found not to be satisfied, it and
    all lower-priority children are rescheduled.

(Figure: example schedule with demand vector t = (2, 2, 2, 3, 3))
15
Dynamic Link Scheduling
  • Rescheduling a child of node u.
  • The schedules for u's parent, and for all
    children with higher priority than the current
    child, are considered fixed.
  • The minimum allocation needed to satisfy all
    lower-priority children is determined based on
    the current demands, and is reserved at the end
    of the schedule.
  • The available window for scheduling this child is
    all remaining slots.
  • The child is scheduled in the window so that the
    fewest links are affected on node u and the
    child. (A sketch of this pass follows the
    figure.)

(Figure: example schedule after rescheduling, with demand vector
t = (2, 2, 2, 6, 3))
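A simplified Python sketch of this rescheduling pass. It captures
only the priority order and the tail reservation; the real algorithm
also places each window so as to disturb the fewest links at node u
and the child, and must pad slave-side windows by one slot.

    def reschedule(t_system, parent_window, children):
        """children: list of (name, demand) in decreasing priority.
        parent_window: (start, length), fixed by u's parent.
        Returns {name: (start, length)} packed after the parent's window."""
        windows = {}
        cursor = parent_window[0] + parent_window[1]   # first free slot
        remaining = sum(d for _, d in children)        # slots still owed
        for name, demand in children:
            remaining -= demand
            # The tail [t_system - remaining, t_system) stays reserved
            # for the lower-priority children not yet scheduled.
            if cursor + demand > t_system - remaining:
                raise ValueError("demands are not locally feasible")
            windows[name] = (cursor, demand)   # window exactly meets demand
            cursor += demand
        return windows

    # Example: T_system = 20, parent holds slots 0-4, three children.
    print(reschedule(20, (0, 5), [("c1", 2), ("c2", 6), ("c3", 3)]))
    # -> {'c1': (5, 2), 'c2': (7, 6), 'c3': (13, 3)}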
16
End-to-End Fairness and Rate Guarantees
  • Finding Feasible Session Rates
  • The maximum radio transmission rate is 2R bps.
  • The rate for session i is r_i = ρ_i R (half
    duplex).
  • The network must be able to allocate
    t_i = ⌈(r_i / R) · T_system⌉ = ⌈ρ_i T_system⌉
    slots at every link on the session's path.
  • On intermediate nodes we must include slots for
    both the incoming and outgoing links. Thus, if
    F(u) is the set of all sessions passing through
    node u, the local feasibility condition for the
    session rates at node u is
    Σ_{i ∈ F(u)} c_i(u) · t_i ≤ B(u),
    where c_i(u) = 2 if u is an intermediate node on
    session i's path and 1 if u is an endpoint, and
    B(u) is node u's slot budget from the local
    feasibility conditions on slide 8.
17
End-to-End Fairness and Rate Guarantees
  • The link demands are found from the session
    demands by summing, on each link l, the slot
    demands t_i of all sessions whose paths cross
    link l (a small computation sketch follows).
  • Link demands which meet these conditions are
    enforceable by the link scheduling algorithm.
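A small Python sketch of the slot arithmetic on the last two slides,
using normalized rates in the spirit of the example computed later in
the deck (ρ = 0.125 and 0.208); the names and data structures are
illustrative.

    from math import ceil

    def session_slots(rho, t_system):
        # t_i = ceil(rho_i * T_system), since r_i / R = rho_i
        return ceil(rho * t_system)

    def link_demands(sessions, t_system):
        """sessions: {name: (rho, [links on path])} -> {link: total slots}."""
        demand = {}
        for rho, path in sessions.values():
            t_i = session_slots(rho, t_system)
            for link in path:
                demand[link] = demand.get(link, 0) + t_i
        return demand

    print(link_demands({"S1": (0.125, ["AB", "BC"]),
                        "S2": (0.208, ["BC"])}, 50))
    # -> {'AB': 7, 'BC': 18}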

18
End-to-End Fairness and Rate Guarantees
  • How are the session rates ρ_i chosen?
  • There are many ways to choose which flow gets
    what portion of the available bandwidth.
  • In this case we use the max-min fairness (MMF)
    objective.
  • MMF in simple terms (a water-filling sketch
    follows this list):
  • Locate the node(s) with the most flows passing
    through it and evenly distribute the bandwidth
    among the flows. This is the enforced session
    rate for these flows.
  • Remove the node and its flows from the network,
    and reduce the bandwidth of all other nodes that
    these flows passed through by their session
    rates.
  • Repeat until all flows have been removed.
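The centralized view of this procedure is standard water-filling; a
minimal Python sketch follows. Picking the node with the smallest
even share per flow coincides with the "most flows" rule above when
all node bandwidths are equal, and also handles nodes (such as
slaves) with reduced bandwidth. The data structures are illustrative.

    def mmf_rates(node_bw, paths):
        """node_bw: node -> available bandwidth (fraction of full rate).
        paths: flow -> set of nodes the flow traverses."""
        rates = {}
        node_bw = dict(node_bw)
        paths = {f: set(p) for f, p in paths.items()}
        while paths:
            loads = {u: [f for f in paths if u in paths[f]] for u in node_bw}
            # Bottleneck: the node whose even share per flow is smallest.
            share, bottleneck = min(
                (node_bw[u] / len(fs), u) for u, fs in loads.items() if fs)
            for f in loads[bottleneck]:
                rates[f] = share
                for u in paths[f]:
                    node_bw[u] -= share   # drain the fixed flow everywhere
                del paths[f]
            del node_bw[bottleneck]
        return rates

    # Tiny hypothetical example: three flows over two nodes.
    print(mmf_rates({"A": 1.0, "B": 1.0},
                    {"f1": {"A"}, "f2": {"A", "B"}, "f3": {"B"}}))
    # -> {'f1': 0.5, 'f2': 0.5, 'f3': 0.5}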

19
End-to-End Fairness and Rate Guarantees
(Figure: example network for the MMF computation; T_system = 50)
20
End-to-End Fairness and Rate Guarantees
  • Finding the MMF rates
  • Note that some nodes cannot allocate all 50 slots
    due to their role as slaves on some links. Their
    bandwidth is correspondingly reduced from the
    full link bandwidth. For example, while node A
    has the full bandwidth available, node G has only
    47/50 = 94% of the full bandwidth available.
  • Node A has the greatest number of sessions, at 8.
    Evenly sharing its bandwidth gives each flow 1/8
    of the full bandwidth.
  • Removing node A and all the flows through it
    leaves node B with the greatest number of
    sessions, at 3. The full bandwidth of B is
    reduced by the bandwidth allocated to the 3 flows
    it had in common with node A, leaving
    1 − 3/8 = 5/8 of the full bandwidth. Evenly
    sharing this bandwidth gives the three remaining
    flows 5/24 ≈ 0.208 of the full bandwidth each.
  • Removing node B and the three flows through it
    leaves node G with the only remaining flow. The
    three flows removed with node B do not pass
    through node G, so node G's bandwidth is reduced
    only by the five flows it shared with node A,
    giving an available bandwidth of
    0.94 − 5/8 = 0.315 of the full bandwidth. This is
    allocated to the one flow.
  • There are no flows remaining, so the MMF rates
    have been determined (as fractions of the full
    link rate) to be
  • S1 = S2 = S3 = S7 = 0.125, S4 = S6 ≈ 0.208,
    S5 ≈ 0.315

21
End-to-End Fairness and Rate Guarantees
  • Some details of implementing distributed MMF
  • Algorithm is based on similar work done in ATM
    networks.
  • Control packets are injected and circulated along
    each session path. They carry the current
    estimate of the MMF rate for that flow.
  • Upon receiving a control packet, a node adjusts
    its estimate of the MMF rate for that flow based
    on an update algorithm, updates the packet
    information, and forwards the packet (see the
    sketch after this list).
  • As the estimated MMF rates change for each flow,
    the corresponding link demands also change, and
    these changes are conveyed to the nodes adjacent
    to these links.
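A Python sketch of the per-node packet-handling step. The slides say
only that the estimate is adjusted "based on an update algorithm";
the minimum-of-local-shares rule below is a plausible stand-in in
the spirit of the ATM work cited, not the paper's actual rule, and
the hook name is hypothetical.

    def request_link_demand_update(node, flow):
        # Placeholder hook: a real node would trigger the link rate
        # adjustment handshake on this flow's adjacent links.
        pass

    def on_control_packet(node, packet):
        """node: {'bandwidth': float, 'flows': {flow: rate estimate}}.
        packet: {'flow': name, 'rate': current MMF rate estimate}."""
        local_share = node["bandwidth"] / max(len(node["flows"]), 1)
        new_rate = min(packet["rate"], local_share)   # constrain by this hop
        if node["flows"][packet["flow"]] != new_rate:
            node["flows"][packet["flow"]] = new_rate
            request_link_demand_update(node, packet["flow"])
        packet["rate"] = new_rate                     # update, then forward
        return packet

    # Each circulation is re-seeded at the source, so estimates can
    # also rise as other flows release bandwidth. Values echo node G
    # and session S5 from the earlier example.
    node = {"bandwidth": 0.94, "flows": {"S5": 1.0}}
    print(on_control_packet(node, {"flow": "S5", "rate": 1.0}))
    # -> {'flow': 'S5', 'rate': 0.94}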

22
End-to-End Fairness and Rate Guarantees
  • It can be shown that this process converges to
    the MMF rates in a finite number of iterations,
    for any topology.
  • While this process is converging towards the MMF
    rates, the link scheduling algorithm is trying to
    meet the changing link demands.
  • Once the MMF rates have stabilized, the link
    demands have also stabilized, and the link
    scheduling algorithm will then converge.

23
Experiments
  • Simulated a Bluetooth network using the ns
    simulator.
  • Bluetooth
  • Is a multi-channel asynchronous TDMA wireless
    network.
  • Uses orthogonal channels based on frequency
    hopping.
  • For each link it assigns master and slave roles
    to the adjacent nodes.
  • Supports using two half-duplex slots to simulate
    full-duplex communication.
  • Supports a transmission rate of 172.8 kbps in
    each direction.
  • Handles the issues of node discovery, frequency
    assignment, and time slot reference.

24
Experiments
  • Ran the simulation once with each node as the
    root of the tree.
  • Simulation duration was 25 sec.
  • Reported the time for link demands to stabilize,
    D_S, and the time for link scheduling to converge
    once the link demands were stable, D_L.

25
Results
26
Results (cont.)
27
Conclusions
  • Proposed an asynchronous distributed MMF
    algorithm applicable to any network topology.
  • Showed that tree topologies can achieve the
    maximum utilization possible given the underlying
    asynchronous TDMA architecture.
  • Introduced a link scheduling algorithm that
    enforces the MMF end-to-end rates for tree
    topologies.
  • Presented an implementation of this framework
    over Bluetooth.
  • A natural extension of this work is to
    investigate link scheduling algorithms for more
    general topologies than trees.

28
Questions?
29
Homework
(Figure: homework network with T_system = 20; nodes A-J, each
labeled with its number of available slots, bold link numbers
1-11, and sessions S1-S6 marked on their paths.)
  • Find the MMF session rates for sessions S1-S6
    in the network shown above, as a fraction of the
    maximum one-way link bandwidth, R. The bold
    numbers are the link numbers, and the numbers
    next to each node indicate the number of time
    slots available for scheduling in a period of
    length T_system.
  • Briefly answer the three questions on slide 7
    (in the context of distributed MMF rate
    guarantees and tree topologies).