Dynamic Scheduling Algorithms for Output-Buffered Switches: Analysis, Design and Implementation

Transcript and Presenter's Notes

1
Dynamic Scheduling Algorithms for Output-Buffered
Switches: Analysis, Design and Implementation
  • Presented by
  • Shamala Subramaniam
  • Dept. of Communication Technology & Networks
  • Faculty of Computer Science & Information
    Technology
  • Universiti Putra Malaysia

2
Contents
  • Objective of Presentation
  • Environment of Discourse (EoD)
  • Scheduler Criteria
  • Related Work
  • Network Model
  • OCcuPancy_Adjusting (OCP_A)
  • ACcElerated (ACE) Scheduling Policies
  • Analytical Modeling
  • Semi-Markov Process (SMP)
  • Discrete Event Simulation (DES)
  • Results & Discussions

3
Presentation Objective
Quality of Service (QoS)
  • Conventional Internet: offered a single class of
    best-effort service; congestion control but no
    admission control; high tolerance of packet delay
    and packet losses; suited to traditional data
    (e.g. Telnet).
  • Today's Internet (2002): no assurance about when,
    or even if, packets will be delivered; carries
    both real-time and non-real-time traffic.
4
What is Congestion?
Environment of Discourse
  • A network is said to be congested from the
    perspective of user i if the utility of i
    decreases due to an increase in network load.
  • Srinivasan Keshav

5
Environment of Discourse (EoD)
Survey of congestion control techniques
  • One approach to congestion control is to make
    reservations of network resources so that
    resource availability is deterministically
    guaranteed.
  • The alternative allows much more flexibility in
    the allocation of resources: resources can be
    statistically multiplexed, as users are not
    guaranteed a level of utility.

6
Environment of Discourse (EoD)
Time scales of Control
7
EoD: Congestion Control
  • Real-time applications are inadequately treated
    by the traditional Internet platform: variations
    in delay are too extreme, too many packets are
    dropped, and such applications do not back off in
    the presence of congestion.
  • Possible responses: re-design of the Internet
    architecture, or improving implementation aspects
    (e.g. scheduling algorithms).
8
Packet Switch Overview
9
Packet Switch Overview
First Generation Switches
2nd Generation Switches
10
Packet Switch Overview (cont.)
3rd Generation Switches
Parallel Forwarding Engines
11
Packet Switch Overview (cont.)
Input-versus-Output Queueing
12
Scheduler Criterion
13
Scheduler Design Criterion
The Conservation Law, by Leonard Kleinrock
  • Consider a set of N connections at a scheduler,
    such that traffic arrives from connection i at a
    mean rate λi and the mean service time for a
    packet from connection i is xi.
  • Let ρi = λi xi be the mean utilization of the
    link due to connection i.
  • Let connection i's mean waiting time at the
    scheduler be qi.
  • The law states that the load-weighted sum of the
    mean waiting times is independent of the
    scheduling discipline, as stated compactly below.

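A compact statement of the law, as a sketch using the symbols defined above (ρi is the utilization due to connection i and qi its mean waiting time; the exact form on the original slide was not transcribed):

```latex
% Kleinrock's conservation law: for any work-conserving scheduler,
% the load-weighted sum of mean waiting times is invariant.
\rho_i = \lambda_i \, \bar{x}_i, \qquad
\sum_{i=1}^{N} \rho_i \, q_i = \text{constant, independent of the scheduling discipline}
```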
14
Scheduler Design Criterion
The Max-Min Fair Share
  • Consider a set of sources 1, …, n that have
    resource demands x1, x2, …, xn.
  • Order the source demands so that x1 ≤ x2 ≤ … ≤ xn.
    Let the server have a capacity C.
  • We initially give C/n of the resource to the
    source with the smallest demand, x1.
  • This may be more than what source 1 wants, so
    that C/n − x1 of the resource is still available
    as unused excess.
  • We distribute this excess evenly to the remaining
    n − 1 sources, so that each of them gets
    C/n + (C/n − x1)/(n − 1), and the procedure
    repeats (see the sketch below).

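A minimal Python sketch of the max-min procedure described above; the function name and the example demands are illustrative, not from the slides.

```python
def max_min_fair_share(demands, capacity):
    """Allocate `capacity` among `demands` using max-min fairness.

    Sources are served in increasing order of demand; any unused
    excess from satisfied sources is split evenly among the rest.
    """
    n = len(demands)
    # Remember original positions so the result matches the input order.
    order = sorted(range(n), key=lambda k: demands[k])
    alloc = [0.0] * n
    remaining = capacity
    left = n
    for k in order:
        fair_share = remaining / left           # C/n, then the adjusted share
        alloc[k] = min(demands[k], fair_share)  # never give more than demanded
        remaining -= alloc[k]                   # excess carries over
        left -= 1
    return alloc

# Example: capacity 10 shared by demands 2, 4 and 10
print(max_min_fair_share([2, 4, 10], 10.0))     # -> [2.0, 4.0, 4.0]
```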
15
Related Work
16
Related Work
Taxonomy of QoS control (figure): Static QoS Control
versus Dynamic (Adaptive) QoS Control; dynamic
resource allocation driven either by QoS feedback or
by traffic change; rate-based, priority and latency
approaches; schemes placed in the taxonomy include
Virtual Clock, SCFQ, Fair Queuing, MLT, MLT with
priority, HoL Priority, HoL with Priority Jumping,
OCP_A and ACE.
17
Virtual Clock
Related Work
Principles
  • Emulates Time Division Multiplexing (TDM).
  • Assigns each packet a virtual transmission time.
  • Packets are transmitted in increasing order of
    virtual transmission time (see the sketch below).

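A minimal Python sketch of the Virtual Clock tagging and service rule (the per-flow reserved rate and all class/attribute names are illustrative assumptions):

```python
import heapq

class VirtualClockScheduler:
    """Stamp each arriving packet with a virtual transmission time and
    serve packets in increasing stamp order (emulating TDM)."""

    def __init__(self):
        self.vclock = {}   # per-flow virtual clock
        self.queue = []    # min-heap ordered by virtual transmission time
        self.seq = 0       # tie-breaker for equal stamps

    def enqueue(self, flow, length, rate, now):
        # VC_flow <- max(now, VC_flow) + length / rate
        vc = max(now, self.vclock.get(flow, 0.0)) + length / rate
        self.vclock[flow] = vc
        heapq.heappush(self.queue, (vc, self.seq, flow, length))
        self.seq += 1

    def dequeue(self):
        # Transmit the packet with the smallest virtual transmission time.
        if self.queue:
            vc, _, flow, length = heapq.heappop(self.queue)
            return flow, length, vc
        return None
```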
18
Self-Clocked Fair Queuing (SCFQ)
Related Work (cont.)
Principles
  • Two tags are associated with each packet: a start
    tag and a finish tag.
  • The finish tag is defined as shown below.

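The finish-tag formula did not survive the transcript; as a reference sketch, the commonly cited SCFQ definition (for the k-th packet of flow i, with length L and share r) is:

```latex
% SCFQ tags (Golestani): v(t) is the finish tag of the packet in
% service at time t (the "self-clock"); a_i^k is the arrival time.
S_i^k = \max\!\left(F_i^{k-1},\, v(a_i^k)\right), \qquad
F_i^k = S_i^k + \frac{L_i^k}{r_i}
```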
19
Jitter-Earliest-Due-Date (Jitter-EDD)
Related Work (cont.)
Timeline (figure): at the upstream node a packet may
depart ahead of its deadline (pre-ahead) within the
delay bound; at the downstream node it is held from
arrival until it becomes eligible, and is then
scheduled against its deadline, so that the delay
bound is preserved.
20
Network Model

21
Network Model
  • The switch has two links:
  • an outgoing and an incoming link.
  • The incoming link is assumed to be non-blocking.

22
OCcuPancy_Adjusting (OCP_A)
23
OCcuPancy_Adjusting (OCP_A)
  • Yin Bao & Adarshpal Sethi, University of Delaware,
    Newark, USA
  • Argues the following issues:
  • The dynamic resource allocation schemes introduced
    previously are executed using specific traffic
    models, which requires that the source model be
    constantly remodeled owing to bursty traffic
    characteristics.
  • Tail probability: transient packets are dropped
    because their delay bound has expired.
  • Classifying traffic into various classes would
    restrict the applications directed into certain
    classes, although these classifications are able
    to provide different degrees of QoS in a
    statistical way.
  • The algorithm is deployed with the source and
    resource model specifications.

24
OCP_A (cont.)
  • Resource Model
  • The resources taken into consideration are the
    buffer and bandwidth resources pertaining to an
    outgoing link of a network node.
  • A Current Service Environment (CSE) is used to
    represent the resources allocated to a particular
    flow. The CSE consists of {Bi, μi}, which
    represent the allocated buffer and bandwidth (in
    packets per second).
  • There is a sequence of CSEs during the flow's
    transmission:
  • CSEi(t1), CSEi(t2), …, CSEi(tk), …, CSEi(tn)
  • The time between two CSEs (the CSE interval) is a
    critical issue, as it determines the monitoring
    interval.
  • The parameters are inter-related through the delay
    bound Di, with μi = Bi / Di (as used in the
    algorithm on the next slide).
25
OCP_A (cont.)
  • Algorithm (a Python sketch follows below)
  • Start with an initial reference CSE {Bi, μi},
    where Bi = Bi_ref, μi = μi_ref
  • Serve packets from this flow under the current CSE
  • For each packet arrival:
  • if (the allocated buffer is full and losing this
    arrival will cause the loss-ratio performance to
    be worse than Li)
  • increase Bi to an amount such that the new
    occupancy is OCPu
  • perform μi ← Bi / Di
  • After the transmission of each packet:
  • if a transient packet's delay D > Di, drop the
    packet
  • if (the current buffer occupancy is below OCPl)
  • decrease Bi by a fixed amount, but make it no
    lower than Bi_min
  • perform μi ← Bi / Di when there are no transient
    packets

Analysis via Simulation only
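A minimal Python sketch of the per-flow OCP_A adjustment loop above. The thresholds OCP_u and OCP_l, the decrement step and all names are illustrative assumptions, and the expired-delay check for transient packets is omitted for brevity.

```python
class OCPAFlowState:
    """Per-flow state for an OCcuPancy-Adjusting (OCP_A) policy sketch."""

    def __init__(self, b_ref, b_min, delay_bound, loss_target,
                 occ_upper=0.8, occ_lower=0.3, decrement=1):
        self.buf = b_ref                 # allocated buffer B_i (packets)
        self.b_min = b_min
        self.delay_bound = delay_bound   # D_i
        self.loss_target = loss_target   # L_i (target loss ratio)
        self.occ_upper = occ_upper       # OCP_u
        self.occ_lower = occ_lower       # OCP_l
        self.decrement = decrement
        self.rate = self.buf / self.delay_bound   # mu_i <- B_i / D_i
        self.queued = 0
        self.arrived = 0
        self.lost = 0

    def on_arrival(self):
        self.arrived += 1
        if self.queued >= self.buf:
            # Dropping would breach the loss target: grow the buffer so
            # that occupancy falls back toward OCP_u, then re-derive mu.
            if (self.lost + 1) / self.arrived > self.loss_target:
                self.buf = max(self.buf, int(self.queued / self.occ_upper) + 1)
                self.rate = self.buf / self.delay_bound
            else:
                self.lost += 1
                return False             # packet dropped
        self.queued += 1
        return True                      # packet admitted

    def on_departure(self):
        self.queued -= 1
        # Shrink the buffer when occupancy drops below OCP_l.
        if self.queued < self.occ_lower * self.buf:
            self.buf = max(self.b_min, self.buf - self.decrement)
            self.rate = self.buf / self.delay_bound
```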
26
Scheduler Criteria
  • Allocation of Bandwidth & Buffer in A Fair Manner
  • Simplicity
  • Isolation Among Flows
  • Fairness
  • Elasticity

27
ACcElerated (ACE) Scheduling Algorithm
28
ACE Ideas And Principles
  • The proposed scheduling algorithm comprises three
    major components:
  • Admission Controller
  • Resource Management
  • Scheduling Algorithm
  • Details
  • Justification of packet admission in the context
    of reducing:
  • Propagation Delay
  • Blocking Probability
  • The resource management implementations:
  • Predictive Service Model Scheme
  • A Multi-Class Dynamic Resource Management
  • The scheduling algorithm aims at:
  • Index derivation to indicate insufficiency of
    resources
  • Integrating a delay bound into the Virtual Clock
29
ACE Mechanism
  • The IP switch considered in this research is
    connected to an adjacent switch by two links: an
    outgoing link and an incoming link.
  • The switch has a non-blocking outgoing link, and
    is integrated with a FIFO and scheduling
    algorithms at the outgoing link, associated with
    each priority class.
  • Buffers are assumed to have a finite capacity.
  • The traffic sources are classified by priority
    class, where all sources in a class have
    equivalent QoS requirements.

30
ACE Admission Control
31
ACE Admission Controller
  • The Algorithm
  • Let the i-th packet from flow n, of size P_i^n,
    arrive at a switch at time t. Consider the total
    delay experienced by the packets transmitted from
    flow n, the number of transient packets from flow
    n in buffer cl_n, and the number of packets that
    have departed from flow n.
  • From these quantities we compute the packet's
    estimated delay, as sketched below.

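The estimation formula itself is not reproduced in the transcript. A plausible minimal sketch, assuming the estimate combines the observed mean per-packet delay of the flow with the wait behind its current backlog; all names and the exact form are assumptions, not the ACE definition.

```python
def estimated_delay(total_delay, departed, transient_in_buffer,
                    service_rate, packet_size):
    """Rough delay estimate for an arriving packet of one flow.

    total_delay / departed is the observed mean delay of the flow so
    far; the arriving packet additionally waits behind the transient
    packets already buffered, each taking packet_size / service_rate
    to serve.  (Illustrative only; the ACE formula may differ.)
    """
    observed_mean = total_delay / departed if departed else 0.0
    backlog_wait = (transient_in_buffer + 1) * packet_size / service_rate
    return max(observed_mean, backlog_wait)

# Admission idea: admit the packet only if the estimate stays within
# the flow's requested delay bound D_n.
def admit(estimate, delay_bound):
    return estimate <= delay_bound
```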
32
ACE Admission Controller (cont.)
  • The Correlation Factors

Thus, the correlations are
33
ACE Resource Management
34
ACE Multi-Class Dynamic Resource Management
  • Several questions arise:
  • How can a scheme capture the insufficiency of
    resources in a swift manner?
  • How can we increase the cross-sharing of resources
    allocated based on the philosophy of
    differentiated services?
  • What are the dynamics of a multi-tier resource
    management algorithm with dynamically adjusted
    service rate & buffers?

35
ACE Multi-Class Dynamic Resource
Management (cont.)
  • Dynamically adjusted service rate and buffer
    allocation. Formulas are derived to ensure that
    each class converges to its desired operating
    point (a small sketch follows below).
  • Classes and respective priority levels:
  • Class 1: sensitive to both loss and delay
  • Class 2: sensitive to packet delay but insensitive
    to packet loss
  • Class 3: sensitive to packet loss but insensitive
    to packet delay
  • Class 4: insensitive to packet loss and packet
    delay

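A small Python sketch of how the four class sensitivities might be encoded and used to bias the dynamic service-rate/buffer adjustments; the adjustment rule and the step size are illustrative assumptions, not the formulas derived in the thesis.

```python
from dataclasses import dataclass

@dataclass
class TrafficClass:
    name: str
    delay_sensitive: bool
    loss_sensitive: bool

CLASSES = {
    1: TrafficClass("class-1", delay_sensitive=True,  loss_sensitive=True),
    2: TrafficClass("class-2", delay_sensitive=True,  loss_sensitive=False),
    3: TrafficClass("class-3", delay_sensitive=False, loss_sensitive=True),
    4: TrafficClass("class-4", delay_sensitive=False, loss_sensitive=False),
}

def adjust(cls, rate, buf, delay_breach, loss_breach, step=0.1):
    """Nudge service rate and buffer toward the class's operating point:
    delay-sensitive classes get more rate, loss-sensitive classes more buffer."""
    if delay_breach and cls.delay_sensitive:
        rate *= 1 + step
    if loss_breach and cls.loss_sensitive:
        buf = int(buf * (1 + step)) + 1
    return rate, buf
```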
36
ACE Multi-Class Dynamic Resource
Management (cont.)
  • Cross-Sharing of resources

37
ACE Scheduler
  • Limitation of the Virtual Clock: the calculation
    is based on the rate parameter only.
  • This introduces the problem of coupling between
    the allocation of the delay bound and the
    bandwidth.

38
ACE Scheduler (cont.)
39
Performance Analysis
The moving power of mathematics is not reasoning
but imagination.
 
Augustus De Morgan
40
Selection of Techniques
Performance Analysis
  • Analytical Modeling: Semi-Markov Process (SMP)
  • Discrete Event Simulation
41
Why SMP?
Excellent references: by the research group of
Dr. A. K. Ramani and Dr. Selvakennedy on media access
protocols, and by Dr. Krishnamoorthy Sivalingam
(University of Maryland Baltimore County) in the
field of WDM networks.
42
  • An SMP is
  • a stochastic process that can be in any of k
    states 1, 2, …, k.
  • Each time it enters a state i (1 ≤ i ≤ k), it
    remains there for a random amount of time (the
    sojourn time) having a given mean,
  • and then makes a transition into state j
    (1 ≤ j ≤ k) with probability pi,j.
  • The steady-state probability of being in state i,
    denoted by Pi, can be expressed as shown below.
  • The rate of leaving state i is defined as the
    reciprocal of the average time elapsed between
    two consecutive departures from state i.
  • This rate can likewise be obtained from the
    expression below.

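The two expressions referred to above did not transcribe; the standard semi-Markov results are as follows (writing π_i for the stationary distribution of the embedded chain p_{i,j} and \bar{\tau}_i for the mean sojourn time in state i; this notation is assumed here, not taken from the slides):

```latex
% Steady-state probability of being in state i:
P_i = \frac{\pi_i \, \bar{\tau}_i}{\sum_{j=1}^{k} \pi_j \, \bar{\tau}_j},
\qquad \text{where } \pi_j = \sum_{i=1}^{k} \pi_i \, p_{i,j}, \;\; \sum_i \pi_i = 1.
% Rate of leaving state i (reciprocal of the mean time between
% consecutive departures from state i):
\lambda_i = \frac{P_i}{\bar{\tau}_i} = \frac{\pi_i}{\sum_{j} \pi_j \, \bar{\tau}_j}.
```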
43
System Assumptions
A1: All nodes are assumed to possess independent and
identical behavior and can be modeled as identical
stochastic processes.
A2: Packet generation at each node follows a Poisson
process with an arrival rate of λ packets per unit
time per node.
A3: At most one new packet can arrive at each node
per slot. The sources may generate a packet at any
slot.
44
System Assumptions
A4: Each source notifies the network of the requested
service performance, in the form of a QoS tuple
⟨Dn, Ln⟩.
A5: Each source is initially allocated Bref and μref.
A6: Finite transmitter queue capacity, where each
queue has the capacity to hold Bmax packets. OCP_A
has a single-class queue of Bmax capacity.
A7: Bref is incremented if the observed packet loss
ratio indicates a breach of the requested Ln.
45
System Assumptions
A8: Transient packets that breach the Dn threshold
are discarded prior to transmission.
A9: The period at each state is normalized to a slot
time. The slot time is defined as the transmission
time of a packet.
A10: The node will initially be in an idle state. At
the end of the idle period, it will generate a
request for packet transmission. Packets originating
from the same source are independent of each other.
A11: In the event of an idle data sink, the node will
schedule the packet for transmission on a FCFS basis.
The transmission will last for one slot.
A12: The buffering period of a node will be an
integer number of slots and is dependent upon the
packet arrival and the first access of the node.
46
State Definitions
  • S0: Node is in the idle state.
  • S1: Node is in the transmit state, which
    encompasses the task of servicing the packet.
  • S2+i, i = 0, …, Bref: Node is in a buffered state;
    the i packets buffered are within the initially
    allocated queue capacity, 1 ≤ i ≤ Bref.
    Conditional transition.
  • S2+i, i = Bref+1, …, Bref+ω, …, Bmax: Node is in a
    reconfigured buffered state; the i packets
    buffered exceed the initially allocated resource,
    Bref < i ≤ Bref+ω. The increments are performed
    based on a comparative analysis between ⟨Dn, Ln⟩
    and the QoS achieved. ω is the increment
    threshold. The threshold correlates to the
    replication of the CSEn(t1), CSEn(t2), …,
    CSEn(tk), CSEn(tk+1), …, CSEn(tn) feature of
    OCP_A.
47
Transition Probabilities
  • Probability of the server being idle, PY(0),
    expressed in terms of λn and μn
  • Probability of packet loss

48
Transition Probabilities (cont.)
  • Probability of reallocating resources

 
  • Probability of successful packet transmission

49
50
Limiting Probabilities
 
51
Limiting Probabilities (cont.)
 
52
Iterative Algorithm
  • Choose an initial value, between 0 and 1, for the
    unknown probability
  • Compute the transition probabilities using this
    value
  • Compute an improved estimate using the expression
    given by the limiting-probabilities equations
  • Repeat steps (2)-(3) until the value has converged
    (see the sketch below)

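A minimal Python fixed-point iteration matching steps (1)-(4) above; the callables `transition_probs` and `improved_estimate` stand in for the model's equations and are assumptions here.

```python
def solve_fixed_point(transition_probs, improved_estimate,
                      x0=0.5, tol=1e-9, max_iter=10_000):
    """Iterate: transition probabilities -> limiting-probability
    expression -> new estimate, until the unknown x converges."""
    x = x0
    for _ in range(max_iter):
        probs = transition_probs(x)        # step (2)
        x_new = improved_estimate(probs)   # step (3)
        if abs(x_new - x) < tol:           # step (4): convergence test
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")
```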
 
53
Performance Analysis - Simulation Methodology
  • Discrete-Event Simulation
  • Each source is connected to the scheduler via an
    infinitely fast link
  • There is no blocking in the scheduler, and there
    are no delays in the incoming links
  • Important issues considered in the simulations
    (see the event-loop sketch below):
  • Event-list
  • Time advancing mechanisms
  • Traffic Models
  • Schedulers
  • Resource Reservation Mechanism

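A minimal next-event time-advance loop of the kind used in such a discrete-event simulator; the event kinds and handler interface are illustrative assumptions, not the simulator built for this work.

```python
import heapq
import itertools

def run_simulation(handlers, end_time, initial_events):
    """Classic event-list DES: pop the earliest event, advance the
    clock to its timestamp, and let its handler schedule new events.

    `handlers` maps an event kind to a function (clock, data) that
    returns an iterable of (time, kind, data) follow-up events.
    """
    counter = itertools.count()            # tie-breaker for equal timestamps
    event_list = []                        # min-heap keyed by (time, seq)
    for t, kind, data in initial_events:
        heapq.heappush(event_list, (t, next(counter), kind, data))
    clock = 0.0
    while event_list:
        clock, _, kind, data = heapq.heappop(event_list)
        if clock > end_time:
            break
        for t, k, d in handlers[kind](clock, data):
            heapq.heappush(event_list, (t, next(counter), k, d))
    return clock
```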
54
Results & Discussions
Average delay vs. arrival rate for delay-sensitive
traffic (N = 15 and N = 50)
55
Results & Discussions (cont.)
Average delay vs. arrival rate for
non-delay-sensitive traffic (N = 15 and N = 50)
56
Results & Discussions (cont.)
Packet loss ratio vs. arrival rate for
delay-sensitive traffic (N = 15 and N = 50)
57
Results & Discussions (cont.)
Packet loss ratio vs. arrival rate for
non-delay-sensitive traffic (N = 15 and N = 50)
58
Results & Discussions (cont.)
Average buffer allocation vs. arrival rate for
delay-sensitive traffic (N = 15 and N = 50)
59
Results & Discussions (cont.)
Average buffer allocation vs. arrival rate for
non-delay-sensitive traffic (ALC and CS)
60
Results & Discussions (cont.)
Throughput vs. arrival rate for non-delay-sensitive
traffic (N = 15 and N = 50)
61
Results & Discussions (cont.)
Throughput vs. arrival rate for delay-sensitive
traffic (N = 15 and N = 50)
62
Conclusion & Future Research
  • The ACE scheduler provides an enhanced version of
    the Virtual Clock algorithm. The algorithm
    introduces a new dimension to rate-based
    scheduling algorithms via the incorporation of
    dynamic resource management, as opposed to static
    reservations.
  • Three SMP analytical models were developed. The
    models are able to replicate the complex
    operations of the dynamic resource allocations.
    The model created enables the incorporation of
    dynamic growth into the SMP steady-state
    derivations. The discrete-event simulators allow
    network resources and scheduling algorithms to be
    analysed in an intricate and comprehensive manner.

63
Conclusion & Future Research (cont.)
  • The algorithms can be analysed in terms of their
    applicability to input-buffered switches.
  • The algorithms can be deployed in an environment
    of parallel processors, allowing the performance
    and dimensions of multi-processor switches to be
    analysed.
  • Flow control and routing algorithms should be
    integrated into the research, increasing the
    feasibility of actual implementation of the
    algorithm.
  • Implementation of the dynamic resource algorithms
    with the incorporation of parallel schedulers.

64
Publications
1. S. Shamala, M. Yazid, M. Othman and R. Johari, "Performance Analysis of A Pro-Active Multi-Tier Dynamic Scheduling Algorithm for Output-Buffered Packet Switches", Journal of Institute of Maths and Computer Science (Comp. Sc. Ser.), vol. 12, no. 1, pp. 141-152, 2001.
2. S. Shamala, M. Yazid, M. Othman and R. Johari, "Performance Modeling of a Pro-active Multi-tier Dynamic Scheduling Algorithm with Threshold Derivations", International Journal of the Computer, the Internet and Management, Vol. 93, Sept-Dec 2001.
3. S. Shamala, M. Yazid, M. Othman and R. Johari, "Multi-Tier Propagation Delay and Blocking Reduction Strategies for Output-Buffers in Integrated Service", Chiang Mai Journal of Science, 2001.
4. S. Shamala, M. Yazid, M. Othman, J. Rozita and A. K. Ramani, "Performance Evaluation of Dynamic Pro-Active Priority Oriented Scheduling Algorithms", in Proc. of IASTED International Conference on Applied Informatics (AI 2001), Austria, February 2001.
5. S. Shamala, A. K. Ramani, M. Y. Saman and M. Othman, "Pro-Active QoS Scheduling Algorithms for Real-Time Multimedia Applications in Communication Systems", in Proc. of IEEE TENCON 2000, Kuala Lumpur, September 2000.
6. S. Shamala, M. Y. Saman, M. Othman and A. K. Ramani, "Service Disciplines with Integrated Estimation Algorithms for Dynamic Resource Management", in Proc. of National Conference on Telecommunications Technology (NCTT 2000), Johor, November 2000.
7. S. Shamala, A. K. Ramani, M. Y. Saman, D. Shyamala and M. Othman, "Dynamic Service Discipline for Future Integrated Packet-Switched Networks", in INTEC 2000 Colloquium, Kuala Lumpur, September 2000.
65
"Logical reasoning brings you from a to b,
imagination brings you everywhere."
Albert Einstein
Thank You!