Traffic Management - PowerPoint PPT Presentation


Transcript and Presenter's Notes

Title: Traffic Management

Traffic Management & Traffic Engineering
An example
  • Executives participating in a worldwide videoconference
  • Proceedings are videotaped and stored in an archive
  • Edited and placed on a Web site
  • Accessed later by others
  • During the conference, an executive
  • Sends email to an assistant
  • Breaks off to answer a voice call

What this requires
  • For video
  • sustained bandwidth of at least 64 kbps
  • low loss rate
  • For voice
  • sustained bandwidth of at least 8 kbps
  • low loss rate
  • For interactive communication
  • low delay (< 100 ms one-way)
  • For playback
  • low delay jitter
  • For email and archiving
  • reliable bulk transport

What if
  • A million executives were simultaneously
    accessing the network?
  • What capacity should each trunk have?
  • How should packets be routed? (Can we spread load
    over alternate paths?)
  • How can different traffic types get different
    services from the network?
  • How should each endpoint regulate its load?
  • How should we price the network?
  • These types of questions lie at the heart of
    network design and operation, and form the basis
    for traffic management.

Traffic management
  • Set of policies and mechanisms that allow a
    network to efficiently satisfy a diverse range of
    service requests
  • The mechanisms and policies have to be deployed
    at both node level as well as network level
  • Tension is between diversity and efficiency
  • Traffic management is necessary for providing
    Quality of Service (QoS)
  • Subsumes congestion control (congestion => loss of efficiency)

Traffic Engineering
  • Engineering of a given network so that the
    underlying network can support the services with
    requested quality
  • Encompasses
  • Network Design
  • Capacity Design (How many nodes, where)
  • Link Dimensioning (How many links, what capacity)
  • Path Provisioning (How much bandwidth end-to-end)
  • Multi-homing (Reliability for customer)
  • Protection for Reliability (Reliability in the network)
  • Resource Allocation
  • Congestion Control
  • routing around failures
  • adding more capacity

Why is it important?
  • One of the most challenging open problems in networking
  • Commercially important
  • AOL burnout
  • Perceived reliability (necessary for
  • Capacity sizing directly affects the bottom line
  • At the heart of the next generation of data services
  • Traffic management = connectivity + quality of service

  • Economic principles
  • Traffic classes
  • Time scales
  • Mechanisms
  • Queueing
  • Scheduling
  • Congestion Control
  • Admission Control
  • Some open problems

Let's order pizza for home delivery
  • Customer
  • calls the closest pizza outlet (what is the selection based on?)
  • orders a pizza
  • Requirement specification
  • type, toppings (measurable quantities)
  • order arrives at home
  • Service Quality
  • How fast it arrived
  • Is it the right pizza? Is anything missing? (quality checks)
  • Customer Satisfaction (based on feeling! not all parameters are measurable)
  • How was the service?
  • Is the pizza cold or hot? Is it fresh?
  • Pizza company
  • How many customers, and how fast to serve them
  • Customer satisfaction: known only through complaints (cannot really measure it)
  • All they know is what the customer ordered

Economics basics: the utility function
  • Users are assumed to have a utility function that maps from a given quality of service to a level of satisfaction, or utility
  • Utility functions are private information
  • Cannot compare utility functions between users
  • Rational users take actions that maximize their utility
  • Can determine a utility function by observing user behavior
  • Generally, networks do not support signaling of utility functions
  • They only support signaling of requirements (bandwidth, delay)
  • Networks use resource allocation to make sure requirements are satisfied
  • Measurements and Service Level Agreements (SLAs) determine customer satisfaction!

Example: File Transfer
  • Let u(t) = S - δt
  • u(t) = utility from the file transfer
  • S = satisfaction when the transfer is infinitely fast
  • t = transfer time
  • δ = rate at which satisfaction decreases with time
  • As transfer time increases, utility decreases
  • If t > S/δ, the user is worse off! (reflects the time value of the transfer)
  • Assumes a linear decrease in utility
  • S and δ can be experimentally determined
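The linear utility model above can be sketched directly; the numeric values of S and δ below are illustrative, not from the slides:

```python
# Linear utility model for a file transfer, from the slide:
#   u(t) = S - delta * t
# S: satisfaction for an infinitely fast transfer, delta: rate of
# decay with time, t: transfer time. Parameter values are made up.

def file_transfer_utility(t, S=10.0, delta=2.0):
    """Utility of a transfer that takes t seconds."""
    return S - delta * t

assert file_transfer_utility(0) == 10.0        # instant transfer: full utility
# Past t = S/delta the user is worse off than not transferring at all.
break_even = 10.0 / 2.0
assert file_transfer_utility(break_even) == 0.0
assert file_transfer_utility(break_even + 1) < 0
```

The break-even point t = S/δ is where the time cost of waiting exactly cancels the value of the file.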

Example: Video Conference
  • Every packet must be received before a deadline
  • Otherwise, the packet is too late and cannot be used
  • Model
  • u(t) = S if t < D, else -∞
  • t is the end-to-end delay experienced by a packet
  • D is the delay deadline
  • S is the satisfaction
  • -∞ is the cost (penalty) for missing the deadline, which causes performance degradation
  • A more sophisticated utility measures not only delay but packet loss too
  • u(l) = S(1 - l), where l is the packet loss rate
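The two utility models above can be sketched as follows; the deadline D and satisfaction S below are illustrative values, not from the slides:

```python
import math

# Deadline utility from the slide:
#   u(t) = S if t < D, else -infinity (packet missed its deadline)
# and the loss-aware refinement u(l) = S * (1 - l) for loss rate l.

def deadline_utility(t, D=0.1, S=5.0):
    """Utility of a packet with end-to-end delay t (seconds)."""
    return S if t < D else -math.inf

def loss_aware_utility(loss_rate, S=5.0):
    """Utility degraded linearly by the packet loss rate."""
    return S * (1.0 - loss_rate)

assert deadline_utility(0.05) == 5.0        # arrived within the deadline
assert deadline_utility(0.2) == -math.inf   # missed the 100 ms deadline
assert loss_aware_utility(0.1) == 4.5       # 10% loss trims utility
```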

Social welfare
  • Suppose the network manager knew the utility function of every user
  • Social welfare is maximized when some combination of the utility functions (such as their sum) is maximized, while minimizing the infrastructure cost
  • An economy (network) is efficient when increasing the utility of one user must necessarily decrease the utility of another
  • An economy (network) is envy-free if no user would trade places with another (better performance also costs more)
  • Goal: maximize social welfare
  • subject to efficiency, envy-freeness, and making a profit

  • Assume
  • A single switch; each user imposes load ρ = 0.4
  • A's utility: 4 - d
  • B's utility: 8 - 2d
  • Same delay (d) to both users
  • Conservation law: Σ(ρ_i d_i) = constant = C
  • 0.4d + 0.4d = C => d = 1.25C => sum of utilities = 12 - 3.75C
  • If B wants lower delay, say 0.5C, then A's delay = 2C
  • Sum of utilities = 12 - 3C (larger than before)
  • By giving high priority to users that want lower delay, the network can increase its utility
  • An increase in social welfare need not benefit every user
  • A loses utility, but may pay less for the service
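The arithmetic in this example can be checked numerically; C is any positive constant (delays scale with it):

```python
# Numeric check of the conservation-law example: two users, each with
# load rho = 0.4, utilities u_A = 4 - d_A and u_B = 8 - 2*d_B, subject
# to the conservation law rho_A*d_A + rho_B*d_B = C.
C = 1.0

# Equal delays: 0.4*d + 0.4*d = C  =>  d = 1.25*C
d_equal = C / 0.8
welfare_equal = (4 - d_equal) + (8 - 2 * d_equal)      # = 12 - 3.75*C

# Give B priority: d_B = 0.5*C  =>  0.4*d_A + 0.4*(0.5*C) = C => d_A = 2*C
d_B = 0.5 * C
d_A = (C - 0.4 * d_B) / 0.4
welfare_priority = (4 - d_A) + (8 - 2 * d_B)           # = 12 - 3*C

assert abs(welfare_equal - (12 - 3.75 * C)) < 1e-9
assert abs(welfare_priority - (12 - 3 * C)) < 1e-9
assert welfare_priority > welfare_equal  # prioritizing delay-sensitive B helps
```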

Some economic principles
  • A single network that provides heterogeneous QoS
    is better than separate networks for each QoS
  • unused capacity is available to others
  • Lowering delay of delay-sensitive traffic
    increases welfare
  • can increase welfare by matching service menu to
    user requirements
  • BUT need to know what users want (signaling)
  • For typical utility functions, welfare increases
    more than linearly with increase in capacity
  • individual users see smaller overall fluctuations
  • can increase welfare by increasing capacity

Principles applied
  • A single wire that carries both voice and data is more efficient than separate wires for voice and data
  • ADSL
  • IP Phone
  • Moving from a 20% loaded 10 Mbps Ethernet to a 20% loaded 100 Mbps Ethernet will still improve social welfare
  • increase capacity whenever possible
  • Better to give 5% of the traffic lower delay than all traffic low delay
  • should somehow mark and isolate low-delay traffic

The two camps
  • Can increase welfare either by
  • matching services to user requirements or
  • increasing capacity blindly
  • Which is cheaper?
  • no one is really sure!
  • small and smart vs. big and dumb
  • It seems that smarter ought to be better
  • otherwise, to get low delays for some traffic, we need to give all traffic low delay, even if it doesn't need it
  • But, perhaps, we can use the money spent on
    traffic management to increase capacity
  • We will study traffic management, assuming that
    it matters!

How useful are utility functions and economic models?
  • Do users really have such functions that can be expressed mathematically?
  • In practice, no, or at best only vaguely
  • Even if users cannot come up with a mathematical formula, they can express a preference for one set of resources over another
  • These preferences can be codified as utility functions
  • The best way to think about utility functions is that they may allow us to come up with a mathematical formulation of the traffic management problem that gives some insight
  • Practical economic algorithms may never be deployed
  • But policies and mechanisms based on them are still relevant

Network Types
  • Single-Service Networks
  • Provide services for single type of traffic
  • e.g., Telephone Networks (Voice), Cable Networks
    (Video), Internet (Best effort Data)
  • Multi-Service Networks
  • Provide services for multiple traffic types on
    the same network
  • e.g., Asynchronous Transfer Mode (CBR, VBR, ABR,
    UBR), Frame Relay, Differentiated Services
    (Diff-Serv), Integrated Services (Int-Serv), MPLS
    with Traffic Engineering
  • Application types need to match the service
  • Traffic models are used for the applications in
    order to match services, design, deploy the
    equipment and links.

Application Types
  • Elastic applications (Adjust bandwidth and take
    what they get)
  • Wide range of acceptable rates, although faster
    is better
  • E.g., data transfers such as FTP
  • Continuous media applications.
  • Lower and upper limit on acceptable performance
  • Sometimes called tolerant real-time since they
    can adapt to the performance of the network
  • E.g., changing frame rate of video stream
  • Network-aware applications
  • Hard real-time applications.
  • Require hard limits on performance (intolerant)
  • E.g., control applications

Traffic models
  • To align services, we need some idea of how applications, users, or aggregates of users behave: a traffic model
  • e.g., how long a user uses a modem
  • e.g., the average size of a file transfer
  • Models change with network usage
  • We can only guess about the future
  • Two types of models
  • measurements
  • educated guesses

Telephone traffic models
  • How are calls placed?
  • call arrival model
  • studies show that the time between calls is drawn from an exponential distribution
  • the call arrival process is therefore Poisson
  • memoryless: the fact that a certain amount of time has passed since the last call gives no information about the time to the next call
  • How long are calls held?
  • usually modeled as exponential
  • however, measurement studies show it to be heavy-tailed
  • means that a significant number of calls last a very long time
  • especially after the advent of modem calls!

Traffic Engineering for Voice Networks
  • For a switch with N trunks and a large population of users (M → ∞), the probability of blocking (i.e., that a call is lost) is given by the Erlang-B formula:
  • P_B = (A^N / N!) / Σ(k=0..N) (A^k / k!), where A = λ/μ is the offered load in Erlangs
  • λ is the call arrival rate (calls/sec)
  • 1/μ is the mean call holding time (e.g., 3 minutes)
  • Example (for A = 12 Erlangs)
  • P_B ≈ 1% for N = 20 (A/N = 0.6)
  • P_B ≈ 8% for N = 15 (A/N = 0.8)
  • P_B ≈ 49% for N = 7 (A/N = 1.7)
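The Erlang-B values can be computed with the standard numerically stable recursion (this sketch is not from the slides):

```python
# Erlang-B blocking probability via the recursion
#   B(0) = 1,  B(n) = A*B(n-1) / (n + A*B(n-1)),
# which avoids computing the large factorials in the closed form.

def erlang_b(A, N):
    """Blocking probability for offered load A (Erlangs) on N trunks."""
    B = 1.0
    for n in range(1, N + 1):
        B = A * B / (n + A * B)
    return B

# Offered load from the slide: A = 12 Erlangs.
assert abs(erlang_b(12, 20) - 0.01) < 0.002   # about 1% blocking at N = 20
assert erlang_b(12, 15) > erlang_b(12, 20)    # fewer trunks => more blocking
```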

  • Long/heavy-tailed distributions
  • power law
  • P[X > x] ~ c·x^(-a) as x → ∞, with a, c > 0
  • Pareto
  • P[X > x] = (b/x)^a for x > b
  • Exponential distribution
  • P[X > x] = e^(-ax)

Pareto distribution
  • 1 < a < 2 => infinite variance

A power law decays more slowly than an exponential => heavy tail
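The tail behaviors above can be compared numerically; the shape and scale parameters below are illustrative:

```python
import math

# Tail comparison illustrating why a power law is "heavy":
#   Pareto:      P[X > x] = (b/x)^a   for x > b
#   Exponential: P[X > x] = e^(-a*x)

def pareto_tail(x, a=1.5, b=1.0):
    return (b / x) ** a if x > b else 1.0

def exp_tail(x, a=1.5):
    return math.exp(-a * x)

# Far enough out, the power-law tail dominates the exponential tail:
for x in (10, 100, 1000):
    assert pareto_tail(x) > exp_tail(x)

# e.g. at x = 100: Pareto tail = 1e-3, exponential tail is below 1e-60,
# so "a few very long calls" remain quite likely under a power law.
```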
Internet traffic modeling
  • A few apps account for most of the traffic
  • WWW
  • FTP
  • telnet
  • A common approach is to model apps (this ignores
    distribution of destination!)
  • time between app invocations
  • connection duration
  • bytes transferred
  • packet inter-arrival distribution
  • Little consensus on models
  • But two important features

Internet traffic models features
  • LAN connections differ from WAN connections
  • Higher bandwidth (more bytes/call)
  • longer holding times
  • Many parameters are heavy-tailed
  • examples
  • bytes in call
  • call duration
  • means that a few calls are responsible for most
    of the traffic
  • these calls must be well-managed
  • also means that even aggregates of many calls may not be smooth
  • can have long bursts
  • New models appear all the time, to account for
    rapidly changing traffic mix

  • Economic principles
  • Traffic classes
  • Time scales
  • Mechanisms
  • Some open problems

Traffic classes
  • Networks should match offered service to source
    requirements (corresponds to utility functions)
  • Example: telnet requires low bandwidth and low delay
  • utility increases with decrease in delay
  • network should provide a low-delay service
  • or, telnet belongs to the low-delay traffic class
  • Traffic classes encompass both user requirements
    and network service offerings
  • Applications match the traffic to the service
  • Request resources from the network accordingly

Traffic classes - details
  • A basic division: guaranteed service and best-effort
  • like flying with a reservation or flying standby
  • Guaranteed-service
  • utility is zero unless app gets a minimum level
    of service quality
  • bandwidth, delay, loss
  • open-loop flow control with admission control
  • e.g. telephony, remote sensing, interactive
    multiplayer games
  • Best-effort
  • send and pray
  • closed-loop flow control
  • e.g. email, net news

GS vs. BE (cont.)
  • Degree of synchrony
  • time scale at which peer endpoints interact
  • GS are typically synchronous or interactive
  • interact on the timescale of a round trip time
  • e.g. telephone conversation or telnet
  • BE are typically asynchronous or non-interactive
  • interact on longer time scales
  • e.g. Email
  • Sensitivity to time and delay
  • GS apps are real-time
  • performance depends on wall clock
  • BE apps are typically indifferent to real time
  • automatically scale back during overload

Best Effort (Flow Control)
  • Explicit
  • The network tells the source at what rate to send packets
  • Network elements may compute a connection's fair share based on max-min allocation (e.g., ABR in ATM networks)
  • Or it can be based on a 1-bit congestion indicator (e.g., EFCI in ABR of ATM networks)
  • Implicit
  • Packet drop is detected by the source, which adjusts its transmission window (e.g., TCP)
  • No flow control
  • Packets are dropped by the network nodes
  • Sources may not react (e.g., UDP, UBR)
  • Problems are caused if these types are mixed!
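The max-min allocation mentioned for ABR can be sketched with the classic progressive-filling procedure (capacity and demand values below are illustrative):

```python
# Max-min fair share via progressive filling: no flow can get more
# without taking capacity from a flow that already has less.

def max_min_fair(capacity, demands):
    """Return the max-min fair allocation of `capacity` among flows
    with the given list of demanded rates."""
    alloc = [0.0] * len(demands)
    active = list(range(len(demands)))
    remaining = float(capacity)
    while active:
        share = remaining / len(active)
        satisfied = [i for i in active if demands[i] <= share]
        if not satisfied:          # nobody's demand fits: split equally
            for i in active:
                alloc[i] = share
            break
        for i in satisfied:        # small demands are met in full,
            alloc[i] = demands[i]  # freeing capacity for the rest
            remaining -= demands[i]
            active.remove(i)
    return alloc

# 10 units of capacity, demands 2, 8, 10: the small flow gets its 2,
# the remaining 8 units are split equally between the other two.
assert max_min_fair(10, [2, 8, 10]) == [2, 4.0, 4.0]
```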

Traffic subclasses (roadmap)
  • ATM Forum
  • based on sensitivity to bandwidth
  • GS
  • CBR, VBR
  • BE
  • ABR, UBR
  • IETF
  • based on ToS
  • IETF based on RSVP
  • based on sensitivity to delay
  • GS
  • intolerant
  • tolerant
  • BE
  • interactive burst
  • interactive bulk
  • asynchronous bulk
  • IETF based on DiffServ
  • PHB
  • EF, 4 AFs and BE

ATM Basics
  • See the ATM Forum Presentation

ATM Basics
  • Logical or Virtual Connection
  • Connection is first established using signaling
  • The route from the source to the destination is determined at connection setup
  • The same route is used for all cells (fixed-size packets) of the connection
  • No routing decision for every cell (they are
    switched in the same path)

Virtual Circuits in ATM
  • The Virtual Circuit Identifier is represented jointly by:
  • Virtual Channel Identifier (VCI)
  • Virtual Path Identifier (VPI)
  • Virtual Channel (VC)
  • Path for cell associated with a connection
  • Supports transportation of a data stream
  • Each VC is assigned a unique VCI on a link

Virtual Channels in ATM
  • Virtual Path (VP)
  • Grouping of virtual channels on a physical link
  • Switching can be performed on the path basis
  • reduced overheads
  • Each virtual path is assigned Virtual Path
    Identifier (VPI)

Virtual Path Switch (VP - Switch)
VP / VC Switch
ATM Network Example
  • Each connection has its own traffic descriptors
    such as PCR, SCR, MBS, CDVT, CLR, MCR
  • A Connection Admission Control algorithm (CAC)
    will check for the resources at queuing points to
    make a decision on admissibility
  • Network efficiency depends upon the CAC

ATM Forum GS subclasses
  • Constant Bit Rate (CBR)
  • constant, cell-smooth traffic
  • mean and peak rate are the same
  • e.g., a telephone call, evenly sampled and uncompressed
  • constant bandwidth, variable quality
  • Variable Bit Rate (VBR)
  • long-term average with occasional bursts
  • try to minimize delay
  • can tolerate loss and higher delays than CBR
  • e.g., compressed video or audio with constant quality, variable bandwidth

ATM Forum BE subclasses
  • Available Bit Rate (ABR)
  • users get whatever is available
  • zero loss if network signals (in RM cells) are obeyed
  • no guarantee on delay or bandwidth
  • Unspecified Bit Rate (UBR)
  • like ABR, but no feedback
  • no guarantee on loss
  • presumably cheaper
  • Guaranteed Frame Rate (GFR)
  • like UBR/ABR, expressed in terms of frame rate

ATM Attributes
  • How do we describe a flow (connection) of ATM cells?
  • Service Category
  • Traffic Parameters or descriptors
  • QoS parameters
  • Congestion (for ABR)
  • Other (for UBR)
  • Cell Loss Priority (CLP=0 or CLP=0+1)
  • Connections are signaled with various parameters
  • A Connection Admission Control (CAC) procedure checks for resources in the network
  • If the connection is accepted, a traffic contract is awarded to the user (Service Level Agreement)

Traffic Descriptors or Parameters
  • Connection Traffic Descriptor
  • Source Traffic Descriptor: PCR, SCR, MBS, MCR, MFS
  • Cell Delay Variation Tolerance (τ): upper bound on the amount of cell delay variation introduced by the network interface and the UNI (due to interleaving, physical layer overhead, multiplexing, etc.)
  • Conformance Definition: unambiguous specification of the conforming cells of a connection at the UNI (a policing function is used to check for conformance, such as the Generic Cell Rate Algorithm (GCRA))
Traffic Parameters (Source Traffic Descriptor)
  • Peak Cell Rate (PCR): upper bound on traffic submitted by the source (PCR = 1/T, where T = minimum cell spacing)
  • Sustainable Cell Rate (SCR): upper bound on the average rate of traffic submitted by the source (measured over a longer interval)
  • Maximum Burst Size (MBS): maximum number of cells sent continuously at PCR
  • Minimum Cell Rate (MCR): used with ABR and GFR; the minimum cell rate requested, with access to unused capacity up to PCR (elastic capacity = PCR - MCR)
  • Maximum Frame Size (MFS): maximum size of a frame, in cells, available for the GFR service

Cell Rates
  • Peak Cell Rate (PCR), Line Cell Rate (LCR)

  • Sustained Cell Rate (SCR) = PCR · Ton / (Ton + Toff)

Quality of Service
  • Cell Transfer Delay (CTD)
  • Cell Delay Variation (CDV)

[Figure: cell arrival and departure patterns at a queuing point (e.g., a mux or switch), with and without CDV; switch transit delay produces positive and negative CDV. The cell transfer delay probability density has a variable component of delay due to buffering and cell scheduling.]
QoS Parameters
  • Peak-to-peak cell delay variation (CDV): acceptable delay variation at the destination. The peak-to-peak CDV is the (1 - α) quantile of the CTD minus the fixed CTD that could be experienced by any delivered cell on a connection during the entire connection holding time.
  • Maximum Cell Transfer Delay (maxCTD): maximum time between transmission of the first bit of a cell at the source UNI and receipt of its last bit at the destination UNI
  • Cell Loss Ratio (CLR): ratio of lost cells to total transmitted cells on a connection = Lost Cells / Total Transmitted Cells
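The peak-to-peak CDV definition can be sketched over a set of measured delays; the crude empirical-quantile rule and the sample delay values below are illustrative:

```python
# Peak-to-peak CDV sketch: the (1 - alpha) quantile of the measured
# cell transfer delay minus the fixed (minimum) CTD.

def peak_to_peak_cdv(delays_ms, alpha):
    """Estimate peak-to-peak CDV from a list of per-cell delays (ms)."""
    d = sorted(delays_ms)
    idx = min(len(d) - 1, int((1 - alpha) * len(d)))  # crude quantile index
    return d[idx] - d[0]                              # quantile minus fixed CTD

delays = [5.0, 5.1, 5.2, 5.4, 9.0]   # ms; one cell badly delayed by queueing
assert peak_to_peak_cdv(delays, alpha=0.2) == 4.0
```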

Other Attributes
  • Congestion Control
  • defined only for ABR service category
  • uses network feedback controls
  • ABR flow control mechanism (more later)
  • Other Attributes (introduced July 2000)
  • Behavior class selector (BCS)
  • for IP differentiated services (DiffServ)
  • provides for different levels of service among
    UBR connections
  • implementation dependent, no guidance in specs
  • Minimum desired cell rate (MDCR)
  • UBR application minimum capacity objective

Attributes of Each Service Category
Service Paradigm
  • Quantitative Commitments
  • Sets explicit values
  • Ensures service quality through resource
    allocation and traffic policing
  • Qualitative Commitments
  • Relative measure and no explicit guarantees
  • Some unspecified level of quality through
    network engineering

Quantitative Commitments
  • Generally connection oriented transport
  • Network nodes maintain per-flow state info
  • QoS (or GOS) requirements of each connection are explicitly specified and signaled
  • Network enforces traffic regulation (policing,
    shaping) if necessary and allocates resources for
    each connection
  • Examples Voice networks (POTS), ATM, FR
  • Expensive and under-utilized

Qualitative Commitments
  • Generally connectionless transport
  • no per-flow state info is maintained (the number of flows is too large)
  • QoS requirements are not explicitly specified
  • Network may not enforce traffic regulation
  • May allocate resources for logical groups (such
    as VPN)
  • Examples IP, LANs
  • Cheap and over-utilized

QoS Building Blocks
  • Backbone supporting QoS at speed and scale
  • Packet / Service classification (sorting)
  • Bandwidth management and admission control
  • Queue management
  • Congestion management
  • Granular measurements

Functions Needed
  • Admission control - some way to limit usage
    relative to resources.
  • Packet scheduling - some way to treat different
    packets differently.
  • Classifier mechanism - some way to sort packets
    into different treatment groups.
  • Policies and rules for allocating resources.

  • Internet currently provides only single class of
    best-effort service.
  • No admission control, and no assurances about service quality
  • Existing applications are elastic.
  • Tolerate delays and losses
  • Can adapt to congestion
  • Future real-time applications may be inelastic.
  • Should we modify these applications to be more
    adaptive or should we modify the Internet to
    support inelastic behavior?

IETF ToS (1-byte Type-of-Service)
  • Bits 0-2: Precedence
  • Bit 3: 0 = Normal Delay, 1 = Low Delay
  • Bit 4: 0 = Normal Throughput, 1 = High Throughput
  • Bit 5: 0 = Normal Reliability, 1 = High Reliability
  • Bits 6-7: Reserved for Future Use
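The bit layout above can be decoded with a few shifts and masks (bits 0-2 are the most significant bits, in network bit order; the sample byte is illustrative):

```python
# Decoding the 1-byte ToS field laid out above.

def decode_tos(tos):
    """Split a ToS byte into its precedence value and flag bits."""
    return {
        "precedence": (tos >> 5) & 0x7,        # bits 0-2 (most significant)
        "low_delay": bool(tos & 0x10),         # bit 3
        "high_throughput": bool(tos & 0x08),   # bit 4
        "high_reliability": bool(tos & 0x04),  # bit 5
    }

# Precedence 5 with the Low Delay flag set: 101 1 0 0 00 = 0xB0
assert decode_tos(0xB0) == {
    "precedence": 5, "low_delay": True,
    "high_throughput": False, "high_reliability": False,
}
```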

IETF int-serv (Integrated Services)
  • Focus on per-flow QoS.
  • Support specific applications such as video
  • Based on mathematical guarantees.
  • Many concerns
  • Complexity
  • Scalability
  • Business model
  • Charging
  • Uses RSVP (Resource-Reservation Protocol)
  • To signal QoS requirements

IETF int-serv (Integrated Services)
  • Guaranteed service
  • Targets hard real-time applications
  • User specifies traffic characteristics and a service requirement
  • Requires admission control at each of the routers
  • Can mathematically guarantee bandwidth, delay, and jitter
  • Controlled load
  • Targets applications that can adapt to network conditions within a certain performance window
  • User specifies traffic characteristics
  • Requires admission control at each of the routers
  • Guarantee not as strong as with the guaranteed service
  • e.g., measurement-based admission control
  • Best effort
  1. Sender sends PATH message to network
  2. PATH leads data through the network
  3. Routers install per-flow state
  4. Receiver responds with RESV
  5. RESV follows PATH trail back towards sender
  6. Routers accept resource request (commit resources
    to flow) or reject resource request
  7. Data is handled in network elements

IETF GS subclasses
  • Tolerant GS
  • nominal mean delay, but can tolerate occasional delay violations
  • not specified what this means exactly
  • uses controlled-load service
  • even at high loads, admission control assures a
    source that its service does not suffer
  • it really is this imprecise!
  • Intolerant GS
  • need a worst case delay bound
  • equivalent to CBR/VBR in the ATM Forum model

IETF BE subclasses
  • Interactive burst
  • bounded asynchronous service, where bound is
    qualitative, but pretty tight
  • e.g. paging, messaging, email
  • Interactive bulk
  • bulk, but a human is waiting for the result
  • e.g. FTP
  • Asynchronous bulk
  • junk traffic
  • e.g., netnews

IETF Diff-Serv (Differentiated Services)
  • Intended to address the following difficulties with Intserv and RSVP:
  • Scalability: maintaining per-flow state in routers in high-speed networks is difficult due to the very large number of flows
  • Flexible service models: Intserv has only two classes; want to provide more qualitative service classes, with relative service distinctions (Platinum, Gold, Silver, ...)
  • Simpler signaling (than RSVP): many applications and users may only want to specify a more qualitative notion of service

Diffserv PHB (Per-Hop Behavior)
  • Packets are marked in the Type of Service (ToS) byte in IPv4, and the Traffic Class byte in IPv6.
  • 6 bits are used for the Differentiated Services Code Point (DSCP) and determine the PHB that the packet will receive.
  • EF; 4 classes of AF, each with 3 drop priorities (AF11, AF12, AF13, AF21, AF22, AF23, AF31, AF32, AF33, AF41, AF42, AF43); and Best-Effort (BE)
  • 2 bits are currently unused.
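The standard DSCP codepoints for the PHBs listed above follow a simple pattern (EF is 46, AF class i with drop precedence j is 8i + 2j, BE is 0), and the 6-bit DSCP sits in the high bits of the ToS/Traffic Class byte:

```python
# Build the PHB name -> DSCP table from the standard codepoint pattern.
DSCP = {"BE": 0, "EF": 46}
for i in range(1, 5):           # AF classes 1-4
    for j in range(1, 4):       # drop precedences 1-3
        DSCP[f"AF{i}{j}"] = 8 * i + 2 * j

def dscp_from_traffic_class(byte):
    """Extract the 6-bit DSCP from the ToS / Traffic Class byte
    (the low 2 bits are the ones DiffServ leaves unused)."""
    return byte >> 2

assert DSCP["AF11"] == 10 and DSCP["AF41"] == 34
assert dscp_from_traffic_class(0xB8) == 46    # 0xB8 on the wire is EF
```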

PHB Class Selector
  • Derived from IP Precedence values
  • 6 bit diff-serv code point (DSCP) determines
    per-hop behavior of packet treatment
  • Expedited Forwarding (EF): low loss and latency
  • Assured Forwarding (AF): 4 classes, 3 drop priorities
  • Best Effort (BE): classical IP
  • No absolute guarantees

  1. Routers are configured for certain PHBs (Per-Hop Behaviors)
  2. Resources are allocated to PHBs
  3. Edge routers are configured to mark the DSCP (requesting a PHB) based on classification
  4. Traffic arriving at an edge router is marked with a DSCP
  5. Traffic in core routers gets the PHB requested by its DSCP

Diff-Serv Network Architecture
[Figure: DSCP is marked at the edge; the SLA defines capacity at each service level (DSCP)]
Scalable Solutions Require Cooperative Edge and Backbone Functions
  • Edge Functions
  • Packet classification
  • Bandwidth management
  • L3 metering
  • Security filtering
  • Access aggregation
  • Backbone Functions
  • High-speed switching and transport
  • QoS enforcement
  • QoS interworking

Packet Classification
  • Up to six traffic classes via ToS precedence bits
  • Classification by physical port, IP address,
    application, IP protocol, etc.
  • Network or external assignment

Multi-field Packet Classification

  Rule    Fields 1..k        Action
  Rule 1  ... UDP ...        A1
  Rule 2  ... TCP ...        A2

Packet classification: find the action associated with the highest-priority rule matching an incoming packet header.
Courtesy: Nick McKeown @ Stanford
Formal Problem Definition
  • Given a classifier C with N rules, Rj, 1 ≤ j ≤ N, where Rj consists of three entities:
  • A regular expression Rji, 1 ≤ i ≤ d, on each of the d header fields,
  • A number, pri(Rj), indicating the priority of the rule in the classifier, and
  • An action, referred to as action(Rj).

For an incoming packet P with the header considered as a d-tuple of points (P1, P2, ..., Pd), the d-dimensional packet classification problem is to find the rule Rm with the highest priority among all the rules Rj matching the d-tuple, i.e., pri(Rm) > pri(Rj) for all j ≠ m, 1 ≤ j ≤ N, such that Pi matches Rji, 1 ≤ i ≤ d. We call rule Rm the best matching rule for packet P.
Courtesy: Nick McKeown @ Stanford
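A linear-search solution to the classification problem just defined can be sketched as follows; the field predicates, packet tuples, and actions are illustrative, not from the slides:

```python
# Linear-search packet classification: scan all rules and keep the
# matching rule with the highest priority. Each rule's fields are
# per-header-field predicates; None means "wildcard".

def classify(rules, packet):
    """rules: list of (priority, fields, action); packet: field tuple.
    Returns the action of the best (highest-priority) matching rule."""
    best = None
    for priority, fields, action in rules:
        if all(f is None or f(v) for f, v in zip(fields, packet)):
            if best is None or priority > best[0]:
                best = (priority, action)
    return best[1] if best else "default"

rules = [
    (2, (lambda da: da.startswith("152.168.3."), lambda dp: dp == 80,
         lambda pr: pr == "udp"), "deny"),
    (1, (lambda da: da.startswith("152.168.3."), lambda dp: 20 <= dp <= 21,
         lambda pr: pr == "udp"), "permit"),
]
assert classify(rules, ("152.168.3.7", 80, "udp")) == "deny"
assert classify(rules, ("152.168.3.7", 21, "udp")) == "permit"
assert classify(rules, ("10.0.0.1", 80, "tcp")) == "default"
```

Linear search takes O(N) predicate evaluations per packet, which is exactly why the faster (but harder-to-scale) algorithms surveyed below exist.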
Routing Lookup Instance of 1D Classification
  • One-dimension (destination address)
  • Forwarding table ? classifier
  • Routing table entry ? rule
  • Outgoing interface ? action
  • Prefix-length ? priority

Courtesy: Nick McKeown @ Stanford
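The mapping above (prefix length plays the role of rule priority) can be sketched as a longest-prefix match over bit strings; the table contents are illustrative:

```python
# Longest-prefix match as one-dimensional classification: the rule
# (prefix) with the greatest length wins, and its outgoing interface
# is the action.

def lpm(table, addr_bits):
    """table: list of (prefix_bits, interface). addr_bits: bit string.
    Return the interface of the longest matching prefix, else None."""
    best_len, best_if = -1, None
    for prefix, out_if in table:
        if addr_bits.startswith(prefix) and len(prefix) > best_len:
            best_len, best_if = len(prefix), out_if
    return best_if

table = [("10", "if0"), ("101", "if1"), ("0", "if2")]
assert lpm(table, "10110000") == "if1"   # "101" beats the shorter "10"
assert lpm(table, "10010000") == "if0"
assert lpm(table, "01010000") == "if2"
```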
Example 4D Classifier

  Rule  L3-DA                  L3-SA  L4-DP        L4-PROT  Action
  R1                                                        Deny
  R2    152.168.3/255.255.255         eq www       udp      Deny
  R3    152.168.3/255.255.255         range 20-21  udp      Permit
  R4    152.168.3/255.255.255         eq www       tcp      Deny
  R5                                                        Deny

Courtesy: Nick McKeown @ Stanford
Example Classification Results

  Pkt  L3-DA  L3-SA  L4-DP  L4-PROT  Rule, Action
  P1                 www    tcp      R1, Deny
  P2                 www    udp      R2, Deny

Courtesy: Nick McKeown @ Stanford
Classification algorithms
  • Types
  • Linear search
  • Associative search
  • Trie-based techniques
  • Crossproducting
  • Heuristic algorithms
  • Algorithms so far are either:
  • good for two fields, but do not scale to more than two fields, or
  • good only for very small classifiers (< 50 rules), or
  • have non-deterministic classification time, or
  • too slow, or consume too much storage

Another Project Item
DiffServ Routers

DiffServ Edge Router

DiffServ Core Router
[Figure: extract DSCP → select PHB (using local conditions) → packet treatment]
Edge Router/Host Functions
  • Classification: marks packets according to classification rules (to be specified).
  • Metering: checks whether the traffic falls within the negotiated profile.
  • Marking: marks traffic that falls within the profile.
  • Conditioning: delays and then forwards, discards, or re-marks other traffic.

Core Functions
  • Forwarding: according to the Per-Hop Behavior (PHB) specified for the particular packet class; the PHB is strictly based on the class marking (no other header fields can be used to influence it)
  • No state info to be maintained by routers!

Forwarding (PHB)
  • PHB results in a different observable
    (measurable) forwarding performance behavior.
  • PHB does not specify what mechanisms to use to
    ensure required PHB performance behavior.
  • Examples
  • Class A gets x% of the outgoing link bandwidth over time intervals of a specified length.
  • Class A packets leave first before packets from
    class B.

Forwarding (PHB)
  • Expedited Forwarding (EF)
  • Guarantees a certain minimum rate for the EF traffic
  • Implies isolation: the EF guarantee should not be influenced by the other traffic
  • Admitted based on peak rate
  • Non-conformant traffic is dropped or shaped
  • Possible service: providing a "virtual wire"

Forwarding (PHB)
  • Assured Forwarding (AF)
  • AF defines 4 classes with some bandwidth and
    buffers allocated to them.
  • The intent is that it will be used to implement services that differ relative to each other (e.g., gold, silver, ...).
  • Within each class, there are three drop
    priorities, which affect which packets will get
    dropped first if there is congestion.
  • Lots of studies on how these classes and drop
    priorities interact with TCP flow control.
  • Non-conformant traffic is remarked.

Example of EF A Virtual Leased Line Service
  • Service offers users a dedicated traffic pipe.
  • Guaranteed bandwidth between two points.
  • Very low latency and jitter since there should be
    no queuing delay (peak rate allocation).
  • Admission control makes sure that all links in
    the network core have sufficient EF bandwidth.
  • Simple case: the sum of all virtual-link bandwidths is less than the capacity of the slowest link.
  • Traffic enforcement for EF traffic limits how
    much EF traffic enters the network.
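The admission-control rule above can be sketched as a per-link capacity check; link names, capacities, and rates below are illustrative:

```python
# EF admission sketch: admit a new virtual leased line only if every
# link on its path keeps the sum of EF reservations within capacity.

def admit(reservations, capacities, path, rate):
    """reservations: dict link -> committed EF bandwidth.
    Admit iff every link on `path` can absorb `rate` more EF traffic."""
    if any(reservations.get(l, 0.0) + rate > capacities[l] for l in path):
        return False
    for l in path:                          # commit the reservation
        reservations[l] = reservations.get(l, 0.0) + rate
    return True

caps = {"A-B": 100.0, "B-C": 50.0}
resv = {}
assert admit(resv, caps, ["A-B", "B-C"], 40.0) is True
assert admit(resv, caps, ["A-B", "B-C"], 20.0) is False  # B-C would exceed 50
assert resv["B-C"] == 40.0   # the rejected request was not committed
```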

Differentiated Services Issues
  • The key to making Diffserv work is bandwidth management in the network core.
  • Simple for simple services such as the virtual pipe, but much more challenging for complex service level agreements.
  • Notion of a bandwidth broker that manages the core network bandwidth.
  • Definition of end-to-end services for paths that cross networks with different forwarding behaviors:
  • Some packets will be handled differently in different routers.
  • Some routers are not DiffServ capable.
  • Per-Domain Behavior (PDB)

Some points to ponder
  • The only things out there are CBR and asynchronous bulk
  • There are application requirements. There are also organizational requirements (link sharing)
  • Users need QoS for other things too!
  • billing
  • privacy and security
  • reliability and availability

  • Economic principles
  • Traffic classes
  • Time scales
  • Mechanisms
  • Some open problems

Time scales
  • Some actions are taken once per call
  • tell network about traffic characterization and
    request resources
  • in ATM networks, finding a path from source to
    destination
    few round trip times
  • feedback flow control
  • Still others are taken very rapidly, during the
    data transfer
  • scheduling
  • policing and regulation
  • Traffic management mechanisms must deal with a
    range of traffic classes at a range of time scales

Summary of mechanisms at each time scale
  • Less than one round-trip-time (cell- or
    packet-level)
  • Scheduling and buffer management
  • Regulation and policing
  • Policy routing (datagram networks)
  • One or more round-trip-times (burst-level)
  • Feedback flow control
  • Retransmission
  • Renegotiation

Summary (cont.)
  • Session (call-level)
  • Signaling
  • Admission control
  • Service pricing
  • Routing (connection-oriented networks)
  • Day
  • Peak load pricing
  • Weeks or months
  • Capacity planning

  • Economic principles
  • Traffic classes
  • Mechanisms at each time scale
  • Faster than one RTT
  • scheduling and buffer management
  • regulation and policing
  • policy routing
  • One RTT
  • Session
  • Day
  • Weeks to months
  • Some open problems

Faster than RTT
  • Scheduling and buffer management
  • Policing and Regulation
  • In separate set of slides

Renegotiation
  • An option for guaranteed-service traffic
  • Static descriptors don't make sense for many real
    traffic sources
  • interactive video
  • Multiple-time-scale traffic
  • burst size B that lasts for time T
  • for zero loss, descriptors (P, 0), (A, B)
  • P = peak rate, A = average rate, B = burst size
  • T large => serving even slightly below P leads to
    large buffering requirements
  • one-shot descriptor is inadequate
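A back-of-the-envelope sketch (with made-up numbers) of why serving even slightly below P requires large buffers when T is large:

```python
def buffer_needed(peak, service_rate, burst_duration):
    """Backlog built up while a burst arriving at rate `peak` is drained
    at `service_rate` for `burst_duration` seconds."""
    return max(0.0, (peak - service_rate) * burst_duration)

# Serving a 10 Mb/s burst at 99% of peak for 100 s still strands 10 Mbit
print(buffer_needed(10e6, 9.9e6, 100))  # 10000000.0 bits of buffering
```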

Renegotiation (cont.)
  • Renegotiation matches service rate to traffic
  • Renegotiating service rate about once every ten
    seconds is sufficient to reduce bandwidth
    requirement nearly to average rate
  • works well in conjunction with optimal smoothing
  • Fast buffer reservation is similar
  • each burst of data preceded by a reservation
  • Renegotiation is not free
  • signaling overhead
  • call admission ?
  • perhaps measurement-based admission control

  • Extreme viewpoint
  • All traffic sent as CBR
  • Renegotiate CBR rate if necessary
  • No need for complicated scheduling!
  • Buffers at edge of network
  • much cheaper
  • Easy to price
  • Open questions
  • when to renegotiate?
  • how much to ask for?
  • admission control
  • what to do on renegotiation failure

  • Economic principles
  • Traffic classes
  • Mechanisms at each time scale
  • Faster than one RTT
  • One RTT
  • Session
  • Signaling
  • Admission control
  • Day
  • Weeks to months
  • Some open problems

Signaling
  • How a source tells the network its utility
    function or resource requirements
  • Two parts
  • how to carry the message (transport)
  • how to interpret it (semantics)
  • Useful to separate these mechanisms

Signaling semantics
  • Classic scheme: sender initiated
  • Admission control
  • Tentative resource reservation and confirmation
  • Simplex and duplex setup
  • Doesn't work for multicast

Resource translation
  • Application asks for end-to-end quality
  • How to translate to per-hop requirements?
  • E.g., an end-to-end delay bound of 100 ms
  • What should the bound be at each hop?
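One naive translation (a sketch, not a standard scheme): divide the end-to-end budget equally among hops, or in proportion to per-hop weights such as expected queueing:

```python
def equal_split(total_ms, hops):
    """Equal division of an end-to-end delay bound across hops."""
    return [total_ms / hops] * hops

def weighted_split(total_ms, weights):
    """Division proportional to per-hop weights (e.g., expected load)."""
    s = sum(weights)
    return [total_ms * w / s for w in weights]

print(equal_split(100, 4))             # [25.0, 25.0, 25.0, 25.0]
print(weighted_split(100, [1, 2, 2]))  # [20.0, 40.0, 40.0]
```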

Signaling transport
  • Telephone network uses Signaling System 7 (SS7)
  • Carried on Common Channel Interoffice Signaling
    (CCIS) network
  • CCIS is a datagram network
  • SS7 protocol stack is loosely modeled on ISO (but
    predates it)
  • Signaling in ATM networks uses Q.2931 standard
  • part of User Network Interface (UNI)
  • complex
  • layered over Service Specific Connection Oriented
    Protocol SSCOP (a reliable transport protocol)
    and AAL5

Internet signaling transport RSVP
  • Main motivation is to efficiently support
    multipoint multicast with resource reservations
  • In unicast, a source communicates with only one
    destination
  • In multicast, a source communicates with more
    than one destination
  • Signalling Progression
  • Unicast
  • Naive multicast
  • Intelligent multicast
  • Naive multipoint multicast
  • RSVP

RSVP motivation
Multicast reservation styles
  • Naive multicast (source initiated)
  • source contacts each receiver in turn
  • wasted signaling messages
  • Intelligent multicast (merge replies)
  • two messages per link of spanning tree
  • source needs to know all receivers
  • and the rate they can absorb
  • doesn't scale
  • Naive multipoint multicast
  • two messages per source per link
  • cant share resources among multicast groups

RSVP
  • Receiver initiated
  • Reservation state per group, instead of per
    connection
  • PATH and RESV messages
  • PATH sets up next hop towards source(s)
  • RESV makes reservation
  • Travel as far back up as necessary
  • how does receiver know of success?

Reservation Styles
  • How resource reservations are aggregated/merged
    for multiple receivers in the same multicast
    group
  • Two options, specified in the receiver's
    reservation requests
  • Reservation attribute: reservation is shared over
    flows from multiple senders, or distinct for each
    sender
  • Sender selection: explicit list or wildcard
  • Three reservation styles are defined

  • Allow receivers to separate reservations
  • Fixed filter
  • receive from exactly one source
  • Dynamic filter
  • dynamically choose which source is allowed to use
    the reservation
  • Fixed-Filter
  • Specifies a distinct reservation for each sender
    and an explicit list of senders
  • Symbolic representation: FF(S1{Q1}, S2{Q2}, ...)
  • Shared-Explicit
  • Specifies that a single resource reservation is
    to be shared by an explicit list of senders
  • Symbolic representation: SE((S1, S2, ...){Q})
  • Wildcard-Filter
  • Specifies that a single resource reservation is
    to be shared by all senders to this address
  • Symbolic representation: WF(*{Q})
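A toy model of how the three styles aggregate receiver requests (bandwidth reduced to a single scalar; RSVP merging of flowspecs takes the maximum of the requests):

```python
def merge_wf(requests):
    """Wildcard-Filter: one reservation shared by all senders,
    sized to the largest receiver request."""
    return max(requests)

def merge_se(requests):
    """Shared-Explicit: one reservation shared by the listed senders;
    merging again takes the maximum request."""
    return max(requests)

def merge_ff(per_sender):
    """Fixed-Filter: a distinct reservation per listed sender."""
    return {s: max(qs) for s, qs in per_sender.items()}

print(merge_wf([2, 5, 3]))                     # 5
print(merge_ff({"S1": [1, 4], "S2": [2, 2]}))  # {'S1': 4, 'S2': 2}
```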

Soft state
  • State in switch controllers (routers) is
    periodically refreshed
  • On a link failure, reservations automatically
    re-form along another path
  • Transient!
  • But, probably better than with ATM

Why is signaling hard ?
  • Complex services
  • Feature interaction
  • e.g., call screening vs. call forwarding
  • Tradeoff between performance and reliability
  • Extensibility and maintainability

  • Economic principles
  • Traffic classes
  • Mechanisms at each time scale
  • Faster than one RTT
  • One RTT
  • Session
  • Signaling
  • Admission control
  • Day
  • Weeks to months
  • Some open problems

Admission control
Connection Admission Control (CAC)
  • Can a call be admitted?
  • Σ (bandwidth allocated for all connections) ≤
    link rate
  • Otherwise the call is inadmissible
  • What bandwidth to allocate to connections?
  • Depends upon the traffic, the traffic model
    assumed, the queueing methodology deployed, and
    the model used to estimate the required bandwidth
  • Procedure
  • Map the traffic descriptors associated with a
    connection onto a traffic model
  • Use this traffic model with an appropriate
    queuing model for each congestion point, to
    estimate whether there are enough system
    resources to admit the connection in order to
    guarantee the QoS at every congestion (or
    queuing) point.
  • Allocate resources if the connection is accepted.

CAC (continued ..)
  • Depending on the traffic models used, CAC
    procedures can be too conservative,
    over-allocating resources.
  • This reduces the statistical gains
  • An efficient CAC is one that produces the maximum
    statistical gain at a given congestion point
    without violating the QoS.
  • The efficiency of the CAC thus depends on how
    closely the two steps (traffic model and queuing
    model) above model reality.
  • Both the traffic and queuing models are well
    researched and widely published in the
    literature.
CBR and UBR Admission Control
  • CBR admission control (Peak Rate Allocation)
  • simple
  • on failure try again, reroute, or hold
  • Best-effort admission control
  • trivial
  • if minimum bandwidth needed, use CBR test
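The simple CBR peak-rate test sketched in code (illustrative numbers):

```python
def cbr_admit(existing_pcrs, new_pcr, link_rate):
    """Peak-rate allocation: admit iff the total peak rate,
    including the new connection, fits within the link."""
    return sum(existing_pcrs) + new_pcr <= link_rate

print(cbr_admit([40e6, 30e6], 20e6, 100e6))  # True: 90 Mb/s fits
print(cbr_admit([40e6, 30e6], 40e6, 100e6))  # False: 110 Mb/s does not
```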

CAC for CBR (with small jitter)
  • Given the buffer size B, the link capacity C and
    the peak cell rate of the connection PCRi,
    determine a load ρ such that the probability of
    the queue length exceeding B is less than ε,
    where ε is a small number such as 10^-10
  • Using M/D/1 model
  • Using nD/D/1 model

Cell Loss Probability versus Buffer Size
  • ρ = 0.9
  • M/D/1 is conservative
  • For large N, both give similar performance

VBR admission control
  • VBR
  • peak rate differs from average rate (burstiness)
  • if we reserve bandwidth at the peak rate, we
    waste bandwidth
  • if we reserve at the average rate, we may drop
    packets during peaks
  • key decision: how much to overbook
  • Four known approaches
  • peak rate admission control
  • worst-case admission control
  • admission control with statistical guarantees
  • measurement-based admission control

1. Peak-rate admission control
  • Reserve at a connection's peak rate
  • Pros
  • simple (can use FIFO scheduling)
  • connections get negligible delay and loss
  • works well for a small number of sources
  • Cons
  • wastes bandwidth
  • peak rate may increase because of scheduling

2. Worst-case admission control
  • Characterize source by average rate and burst
    size (LBAP)
  • Use WFQ or rate-controlled discipline to reserve
    bandwidth at average rate
  • Pros
  • may use less bandwidth than with peak rate
  • can get an end-to-end delay guarantee
  • Cons
  • for low delay bound, need to reserve at more than
    peak rate!
  • implementation complexity

3. Admission with statistical guarantees
  • Key insight is that as the number of calls
    increases, the probability that multiple sources
    send a burst simultaneously decreases
  • sum of connection rates is increasingly smooth
  • With enough sources, traffic from each source can
    be assumed to arrive at its average rate
  • Put in enough buffers to make the probability of
    loss acceptably low
  • Theory of large deviations quantitatively bounds
    the overflow probability
  • By allowing a small loss, we can reduce the
    resources considerably

  • Consider an ensemble of 10 identical and
    independent sources, each of which is "on" with
    probability 0.1. When "on", each transmits at
    rate 1.0. What is the probability that they
    overflow a shared link of capacity 8?
  • The probability that n sources out of 10 are on
    is binomial: P(n) = C(10, n) (0.1)^n (0.9)^(10-n)

The probability of loss is less than 10^-6. For
peak allocation we need a capacity of 10. By
allowing loss, we reduced resources by 20%!!
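The computation behind this example is a binomial tail: the shared link of capacity 8 overflows only when 9 or 10 of the 10 sources are on at once.

```python
from math import comb

# 10 independent on-off sources, each on with probability 0.1 at rate 1.0,
# sharing a link of capacity 8: loss requires more than 8 sources on.
p_on, n_src, capacity = 0.1, 10, 8
p_overflow = sum(comb(n_src, n) * p_on**n * (1 - p_on)**(n_src - n)
                 for n in range(capacity + 1, n_src + 1))
print(p_overflow)  # about 9.1e-09, comfortably below 10^-6
```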
3. Admission with statistical guarantees (contd.)
  • Assume that traffic from a source is sent to a
    buffer of size B which is drained at a constant
    rate R
  • If source sends a burst, its delay goes up
  • If the burst is too large, bits are lost
  • Equivalent bandwidth (EBW) of the source is the
    rate at which we need to drain this buffer so
    that the probability of loss is less than L (and
    the delay in leaving the buffer is less than d)
  • If many sources share a buffer, the equivalent
    bandwidth of each source decreases (why?)
  • Equivalent bandwidth of an ensemble of
    connections is the sum of their equivalent
    bandwidths

3. Admission with statistical guarantees (contd.)
  • When a source arrives, use its performance
    requirements and current network state to assign
    it an equivalent bandwidth
  • Admission control: the sum of equivalent
    bandwidths at the link should be less than the
    link capacity
  • Pros
  • can trade off a small loss probability for a
    large decrease in bandwidth reservation
  • mathematical treatment possible
  • can obtain delay bounds
  • Cons
  • assumes uncorrelated sources
  • hairy mathematics

Effective Bandwidth
  • This model maps each connection's traffic
    parameters into a real number EBWi, called the
    Equivalent Bandwidth or Effective Bandwidth of
    the connection, such that the QoS constraints are
    satisfied
    source property and with this mapping, the CAC
    rule becomes very simple
  • For a connection with average rate SCRi and peak
    rate PCRi, the effective bandwidth is a number
    between SCRi and PCRi. That is,
    SCRi ≤ EBWi ≤ PCRi
  • There are many methods and models published in
    the literature

Properties of EBW
  • Additive Property: effective bandwidths are
    additive, i.e., the total effective bandwidth
    needed for N connections equals the sum of the
    effective bandwidths of the connections
  • Independence Property: the effective bandwidth of
    a given connection is only a function of that
    connection's parameters.
  • due to the independence property, the effective
    bandwidth method could be far more conservative
    than a method which considers the true
    statistical multiplexing (i.e., the method which
    considers the presence of other connections)
  • With the effective bandwidth method, the CAC
    function can add (or subtract) the effective
    bandwidth of the connection being set up (or torn
    down) to (or from) the total effective bandwidth.
    This is not easily possible with any method which
    does not have the independence property.
EBW (First Approach by Roberts)
  • Assumes fluid sources and zero buffering (so that
    two simultaneously active sources would cause
    data loss)
  • Let each source have a peak rate P and mean rate
    m; the link capacity is C and the required cell
    loss ratio is smaller than 10^-9
  • The heuristic to estimate the EBW of a source is
  • EBW = 1.2m + 60m(P - m) / C
  • First term says EBW is 1.2 times the mean rate
  • Second term increases EBW in proportion to the
    gap between peak and mean (an indicator of source
    burstiness). This is mitigated by a large link
    capacity C.
  • Expression is independent of the cell loss!!
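The heuristic as code (units are whatever rate unit m, P, and C share; the example numbers are made up):

```python
def roberts_ebw(m, p, c):
    """Heuristic EBW = 1.2*m + 60*m*(p - m)/c for a source with
    mean rate m and peak rate p on a link of capacity c."""
    return 1.2 * m + 60 * m * (p - m) / c

# Mean 1 Mb/s, peak 10 Mb/s, 100 Mb/s link
print(roberts_ebw(1, 10, 100))  # about 6.6: between mean (1) and peak (10)
```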

EBW (Second approach by Gibbens and Hunt)
  • On-off sources with exponentially distributed
    "on" and "off" periods
  • The source's mean "on" and "off" periods are
    given; when the source is "on", it is assumed to
    produce information at a constant rate
  • Let B be the buffer size and let CLR be the
    required cell loss ratio
  • The Effective Bandwidth is then given by

  • Let the traffic descriptors be SCR, PCR = 100
    Mb/s, CLR = 10^-7, and ABS (Average Burst Size) =
    50 cells

EBW Observations
  • The equation implies that for large B, the decay
    parameter tends to 0 and the EBW (ci) approaches
    the mean rate of the source
  • For a small buffer B, the effective bandwidth of
    the source approaches the peak information rate
  • The queue length distribution is assumed to be
    asymptotically exponential in the buffer size

EBW for Self-similar traffic (By Norros)
  • Let m be the mean bit rate of the traffic stream,
    a the coefficient of variation, B the buffer
    size, H the Hurst parameter of the stream
    (0.5 ≤ H ≤ 1), and CLR the target cell loss ratio
  • The EBW is given by
  • Note that this equation does not follow the
    asymptotic exponential queue length distribution

Multi-class CAC
  • In the real world, the traffic flow consists of
    multiple QoS classes, whose services may be
    partitioned and queued separately
  • To guarantee QoS, a certain amount of bandwidth
    (or capacity) is reserved for each of the service
    classes
  • With effective bandwidth approach, this
    assignment becomes very simple.
  • Let Nj be the number of sources of class j and αj
    the effective bandwidth of a source belonging to
    class j, with K such classes. Then the CAC for
    multi-class traffic should check that the total
    estimated capacity is less than the service rate
    C. That is, Σ_{j=1..K} Nj·αj < C
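The multi-class check as a sketch (class counts and per-source effective bandwidths are made-up numbers):

```python
def multiclass_admit(counts, ebws, service_rate):
    """Admit iff the sum over classes of N_j * alpha_j stays below
    the service rate."""
    total = sum(n * a for n, a in zip(counts, ebws))
    return total < service_rate

# 3 classes: 100, 50, 20 sources with per-source EBW 0.5, 1.0, 2.0
print(multiclass_admit([100, 50, 20], [0.5, 1.0, 2.0], 150))  # True: 140 < 150
```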

4. Measurement-based admission control
  • For traffic that cannot describe itself
  • also renegotiated traffic
  • Measure the real average load due to the ensemble
    of existing connections
  • Users tell peak
  • If peak + measured average load < capacity, admit
  • Over time, new call becomes part of average
  • Problems
  • assumes that past behavior is indicative of the
    future
  • how long to measure?
  • when to forget about the past?
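The admission test sketched in code (the measurement machinery itself is elided; quantities are assumed):

```python
def mb_admit(measured_avg_load, declared_peak, capacity):
    """Measurement-based test: admit iff the new call's declared peak
    plus the measured average load of existing calls fits the link."""
    return measured_avg_load + declared_peak < capacity

print(mb_admit(60.0, 10.0, 100.0))  # True
print(mb_admit(95.0, 10.0, 100.0))  # False
```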

  • Economic principles