Building Cisco Remote Access Networks
Transcript and Presenter's Notes
1
  • Building Cisco Remote Access Networks
  • Managing Network Performance

2
Queuing - To do, or not to do?
  • If there is no congestion on the WAN link, there
    is no reason to implement traffic prioritization.
  • Mark McGregor says, "If you have a T1 or higher,
    you shouldn't be messing with queuing. If your
    link is congested, buy more bandwidth."

3
Queuing
  • Routers can be configured to prioritize certain
    kinds of traffic based on protocol information,
    such as TCP port numbers.
  • Such traffic prioritization ensures that packets
    carrying mission-critical data take precedence
    over less important traffic.

4
Queuing
  • The question of priority is most important on
    routers that maintain a slow WAN connection and
    thus experience frequent congestion.
  • If a remote site maintains a single 56-kbps link
    to the outside world, competition for that small
    WAN pipe could get fierce.
  • In that case, the router must buffer incoming
    packets and schedule them for transmission on the
    slower outgoing port.

5
Queuing
Because RTA can accept packets on a 10-Mbps LAN
interface much faster than it can send packets
out on the slow WAN link, an administrator may
configure a special queuing strategy on S0.
6
Queuing
  • In the previous example, if the router schedules
    these packets for transmission on a first-come,
    first-served basis, users could experience an
    unacceptable lack of responsiveness.
  • An end user sending delay-sensitive voice traffic
    may be forced to wait too long while the router
    empties its buffer of the long train of packets
    from another user's file upload.

7
Queuing
  • The Cisco IOS addresses priority and
    responsiveness issues through queuing.
  • Queuing refers to the process the router uses to
    schedule packets for transmission during periods
    of congestion.
  • By using the queuing feature, you can configure a
    congested router to reorder packets so that
    mission-critical and delay-sensitive traffic gets
    sent out first--even if other low-priority
    packets arrive before them.

8
Queuing
  • The Cisco IOS supports four methods of queuing
  • first-in, first-out (FIFO) queuing
  • priority queuing (PQ)
  • custom queuing (CQ)
  • weighted fair queuing (WFQ)
  • Only one of these queuing methods can be applied
    per interface, since each method handles traffic
    in a unique way.

9
Queuing
10
Queuing
  • Prioritization is most effective on WAN links in
    which the combination of bursty traffic and
    relatively lower data rates can cause temporary
    congestion.
  • Depending on the average packet size,
    prioritization is most effective when applied to
    links at T1/E1 bandwidth speeds or lower.

11
AGAIN - To do, or not to do?
  • If there is no congestion on the WAN link, there
    is no reason to implement traffic prioritization.
  • Mark McGregor says, "If you have a T1 or higher,
    you shouldn't be messing with queuing. If your
    link is congested, buy more bandwidth."

12
FIFO
  • As its name implies, FIFO queuing does not
    prioritize packets according to type of traffic
    or protocol.
  • FIFO merely queues packets for transmission
    according to the order in which they arrive.
  • Because of its simplicity, FIFO is the fastest of
    the four queuing methods, and should be used on
    all non-congested interfaces.
  • It is the Cisco default queuing mode for all
    interfaces faster than E1 (2.048 Mbps).

13
Priority Queuing Overview
  • You can use priority queuing to prioritize one
    type of traffic over others, so that the
    highest-priority traffic always gets dispatched
    before any other packets.
  • To implement priority queuing, you classify
    traffic according to various criteria, including
    protocol type, and then assign that traffic to
    one of four output queues
  • high
  • medium
  • normal
  • low

14
Priority Queuing Overview
15
Priority Queuing Operation
  • When the router is ready to send a packet, it
    checks the high queue first.
  • Once the high queue is empty, the router checks
    the medium queue, and so on.
  • This process is repeated every time the router is
    ready to send a packet.

16
Priority Queuing Operation
  • An incoming packet is compared with the priority
    list to select a queue.
  • If there is room, the packet is buffered in
    memory and waits to be dispatched after the queue
    is selected.
  • If the queue is full, the packet is dropped.
  • For this reason, controlling queue size is an
    important configuration task.

17
Priority Queuing Operation
18
Configuring PQ - Priority Lists
  • You can set a priority queue either by protocol
    type or by incoming interface type
  • Router(config)#priority-list list-number protocol
    protocol-name {high | medium | normal | low}
    queue-keyword keyword-value
  • Router(config)#priority-list list-number
    interface interface-type interface-number
    {high | medium | normal | low}
  • Use this command to set queuing priorities for
    all traffic arriving on an incoming interface

19
Configuring PQ - Default Queue
  • The default queue is normal. To change it
  • Router(config)#priority-list list-number default
    {high | medium | normal | low}

20
PQ - priority-list examples
  • This command assigns a high priority to traffic
    that matches IP access list 10
  • priority-list 1 protocol ip high list 10
  • This command assigns a medium priority level to
    Telnet packets
  • priority-list 4 protocol ip medium tcp 23
  • This command assigns a medium priority level to
    UDP Domain Name service packets
  • priority-list 4 protocol ip medium udp 53
  • This command assigns a medium priority to traffic
    arriving on ethernet 0
  • priority-list 1 interface ethernet 0 medium
  • NOTE: When using multiple rules for a single
    protocol, remember that the order of the rules
    matters.

21
Default PQ Queue Sizes
22
Configuring PQ - Queue Sizes
  • You can specify the maximum number of allowable
    packets in each queue.
  • In general, it is recommended that the default
    queue sizes not be changed.
  • Router(config)#priority-list list-number
    queue-limit high-limit medium-limit normal-limit
    low-limit
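  • For example, priority-list 1 queue-limit 30 60 90
    120 would allow up to 30, 60, 90, and 120 packets
    in the high, medium, normal, and low queues
    (these values are illustrative, not from the
    course material).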

23
Implementing PQ
  • Add the priority-group command to an interface to
    implement a priority-list
  • Router(config-if)#priority-group list

24
PQ Configuration Example
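  • A minimal sketch of a priority queuing
    configuration (the list number, ports, and
    interface are illustrative)
  • priority-list 1 protocol ip high tcp 23
  • priority-list 1 protocol ip medium udp 53
  • priority-list 1 default low
  • interface serial 0
  • priority-group 1
  • Telnet goes to the high queue, DNS to the medium
    queue, and everything else to the low queue; the
    priority-group command applies list 1 to S0.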
25
Custom Queuing
  • Whereas priority queuing may result in an
    unacceptable delay of low-priority traffic,
    custom queuing allows an administrator to reserve
    a minimum amount of bandwidth for every kind of
    traffic.
  • Each traffic type gets a share of the available
    bandwidth, although these amounts don't have to
    be equal.
  • Delay-sensitive and mission-critical traffic can
    be assigned a large percentage of available
    bandwidth, while low-priority traffic receives a
    smaller portion.

26
Custom Queuing
  • An administrator can configure up to 16 queues,
    which makes it ideal for networks that must
    provide a minimum level of responsiveness for
    several different protocols.
  • Each queue is serviced sequentially until the
    number of bytes sent exceeds the configurable
    byte count or the queue is empty.

27
Custom Queuing
Note: Bandwidth allocation is not covered in this
course.
28
Allocating bandwidth to different queues
  • This discussion is beyond the scope of this
    course, but for information on this and other
    topics related to queuing, go to
  • Congestion Management Overview
  • http://www.cisco.com/univercd/cc/td/doc/product/so
    ftware/ios121/121cgcr/qos_c/qcprt2/qcdconmg.htm#20
    680

29
Allocating bandwidth to different queues
  • Basically, the more types of traffic sharing the
    same queue, the less bandwidth that traffic will
    receive.
  • If a type of traffic has a queue all to itself it
    will receive more bandwidth.
  • This will be even more true if fewer queues are
    used.
  • For example, if you only have two queues, with
    telnet (port 23) traffic in one and everything
    else in the other, the telnet traffic will
    receive 50% of the available bandwidth.
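  • As a rough illustration (with values assumed for
    the example), if those two queues had equal byte
    counts of 1500, each would send about 1500 of
    every 3000 bytes per service cycle (50% apiece);
    raising the telnet queue's byte count to 4500
    would give it roughly 4500/6000, or 75%, of the
    bandwidth during congestion.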

30
Queue 0
  • Queue 0 is a system queue that handles system
    packets such as keepalives.
  • Queue 0 is emptied before the other custom
    queues.

31
Custom Queuing
  • Traffic filtering - The forwarding application
    (e.g. IP) applies a set of filters or access-list
    entries to each message that it forwards. The
    messages are placed in queues, based on the
    filtering.
  • Queued message forwarding - CQ uses a round-robin
    dispatching algorithm to forward traffic. Each
    queue continues to transmit packets until the
    configured byte limit is reached.
  • When the threshold of this queue is reached or
    the queue is empty, the queuing software services
    the next queue in sequence.

32
Custom Queuing
33
Configuring CQ - queue-list
  • Similar to using the priority-list command
  • Router(config)#queue-list list-number protocol
    protocol-name queue-number queue-keyword
    keyword-value
  • Router(config)#queue-list list-number interface
    interface-type interface-number queue-number

34
  • Protocol Type
  • The following example assigns traffic that
    matches IP access list 10 to queue number 1
  • queue-list 1 protocol ip 1 list 10
  • The following example assigns Telnet packets to
    queue number 2
  • queue-list 4 protocol ip 2 tcp 23
  • The following example assigns UDP Domain Name
    Service (DNS) packets to queue number 3
  • queue-list 4 protocol ip 3 udp 53
  • Interface Type
  • In this example, queue list 4 establishes
    queueing priorities for packets entering on
    serial interface 0. The queue number assigned is
    10.
  • queue-list 4 interface serial 0 10

35
Configuring CQ - Default queue
  • You can explicitly assign a queue for packets
    that do not match any entry in the queue list.
  • The default queue is 1. To change it
  • Router(config)#queue-list list-number default
    queue-number

36
Configuring CQ - Queue Size
  Queue         # of packets (default)
  1-16 (all)    20
  • You can designate the maximum number of packets
    that a queue can contain.
  • To limit the length of a particular queue
  • Router(config)#queue-list list-number queue
    queue-number limit limit-number
  • Example: queue-list 3 queue 10 limit 40

37
Default CQ Queue Byte Count- Service Threshold
  Queue         # of bytes (default)
  1-16 (all)    1500
  • You can configure the minimum byte count
    transferred from a given queue at a time (service
    threshold).
  • This value is set on a per-queue basis
  • Router(config)#queue-list list-number queue
    queue-number byte-count byte-count-number
  • Example: queue-list 9 queue 10 byte-count 1400
  • NOTE: The router will not split a packet for the
    purpose of queuing. If a queue's threshold is
    reached during the transmission of a packet, the
    whole packet is still transmitted.

38
Implementing CQ
  • Add the custom-queue-list command to an interface
    to implement a queue-list
  • Router(config-if)#custom-queue-list list

39
CQ Configuration Example
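  • A minimal sketch of a custom queuing
    configuration (the list number, ports, interface,
    and byte counts are illustrative)
  • queue-list 1 protocol ip 1 tcp 23
  • queue-list 1 protocol ip 2 udp 53
  • queue-list 1 default 3
  • queue-list 1 queue 1 byte-count 3000
  • queue-list 1 queue 2 byte-count 1500
  • queue-list 1 queue 3 byte-count 1500
  • interface serial 0
  • custom-queue-list 1
  • During congestion, queue 1 (Telnet) would receive
    about half of the bandwidth, and queues 2 and 3
    about a quarter each.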
40
Weighted Fair Queuing
  • Custom queuing (like priority queuing) requires
    that an administrator predefine priorities and
    configure access lists
  • you have to know what the priorities will be in
    advance, and have to do some complex
    configuration on the router ahead of time.
  • If network conditions change, routers using these
    static queuing methods can't adapt to the
    changes.
  • For administrators looking for a dynamic, fair
    method to prioritize traffic on a congested
    interface, WFQ is the answer.
  • WFQ automatically allocates bandwidth to all
    types of network traffic, but prioritizes
    delay-sensitive packets so that high-volume
    conversations don't consume all of the available
    bandwidth.

41
Weighted Fair Queuing
  • By using a complex algorithm, WFQ sorts the
    packets that make up the different conversations
    on an interface, and then assigns a precedence,
    or weight, to traffic so that each conversation
    gets its fair share of bandwidth.
  • WFQ breaks up large trains of packets so that
    low-volume conversations don't get overrun by
    large file transfers or any other heavy traffic.
  • With WFQ, the router can prioritize traffic based
    on the actual network conditions at the time,
    without complex administration.
  • As the only dynamic queuing strategy of the four
    queuing methods, WFQ is the Cisco default on all
    E1 (2.048 Mbps) and slower interfaces.

42
(No Transcript)
43
  • From the Cisco web page referenced earlier
  • WFQ classifies traffic into different flows based
    on packet header addressing, including such
    characteristics as source and destination network
    or MAC address, protocol, source and destination
    port and socket numbers of the session, Frame
    Relay data-link connection identifier (DLCI)
    value, and type of service (ToS) value.
  • There are two categories of flows: high-bandwidth
    sessions and low-bandwidth sessions.
  • Low-bandwidth traffic has effective priority over
    high-bandwidth traffic, and high-bandwidth
    traffic shares the transmission service
    proportionally according to assigned weights.
  • Low-bandwidth traffic streams, which comprise the
    majority of traffic, receive preferential
    service, allowing their entire offered loads to
    be sent in a timely fashion.
  • High-volume traffic streams share the remaining
    capacity proportionally among themselves.

44
Configuring WFQ
  • Router(config-if)#fair-queue

That's it.
(Almost.)
45
Configuring WFQ
  • Router(config-if)#fair-queue
    [congestive-discard-threshold]
  • The congestive-discard-threshold is the number of
    messages to queue for high-volume traffic.
  • In other words, the maximum packets in a
    conversation held in a queue before they are
    discarded.
  • Valid values are 1 to 512, inclusive.
  • The default is 64 messages.
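  • For example (the interface and threshold value
    are illustrative)
  • interface serial 0
  • fair-queue 128
  • This raises the discard threshold from the
    default of 64 to 128 messages per congested
    conversation.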

46
Configuring WFQ
47
Verifying Queuing
48
Verifying Queuing
  • show queueing custom
  • show queueing priority
  • show queueing fair

49
Compression
  • Link compression (also known as per-interface
    compression)
  • Payload compression (also known as
    per-virtual-circuit compression)
  • TCP header compression
  • Microsoft Point-to-Point Compression (MPPC)

50
Compression Overview
51
Link Compression
  • Link compression (also known as per-interface
    compression) involves compressing both the header
    and payload sections of a data stream.
  • Unlike header compression, link compression is
    protocol independent.

52
Link Compression
  • The link-compression algorithm uses Predictor or
    STAC to compress the traffic, which is then
    carried in another link-layer protocol, such as
    PPP or LAPB, to ensure error correction and
    packet sequencing.
  • Cisco HDLC uses STAC compression only.

Hi, I'm on the BCRAN test!
53
Predictor
  • Predictor -- predicts the next sequence of
    characters in the data stream by using an index
    to look up a sequence in a compression
    dictionary.
  • FYI
  • It then examines the next sequence in the data
    stream to see if it matches. If it does, that
    sequence replaces the looked-up sequence in a
    maintained dictionary.
  • If it does not, the algorithm locates the next
    character sequence in the index, and the process
    begins again.

54
STAC
  • STAC -- Developed by STAC Electronics, STAC is a
    Lempel-Ziv (LZ)-based compression algorithm.
  • FYI
  • It searches the input data stream for redundant
    strings and replaces them with what is called a
    token, which is shorter than the original
    redundant data string.

55
Good to know
  • Predictor is memory intensive.
  • Stacker is CPU intensive.

56
Link Compression
  • If the data flow traverses a point-to-point
    connection, use link compression.
  • In a link-compression environment, the complete
    packet is compressed and the switching
    information in the header is not available for
    WAN-switching networks like Frame Relay.
  • The best applications for link compression are
    point-to-point environments with a limited hop
    path.
  • Typical examples are leased lines or ISDN.

57
Payload Compression
  • Payload compression (also known as
    per-virtual-circuit compression) compresses only
    the data portion of the data stream.
  • The header is left intact.
  • It uses the STAC compression method.

58
Payload Compression
59
Payload Compression
  • When designing an internetwork, the customer
    cannot assume that an application will go over
    point-to-point lines.
  • If link compression is used, the header may not
    be readable at a particular hop; in that case,
    the customer can use payload compression instead.
  • When using payload compression, the header is
    left unchanged and packets can be switched
    through a WAN packet network. Payload compression
    is appropriate for
  • X.25
  • Switched Multimegabit Data Service (SMDS)
  • Frame Relay
  • ATM

60
TCP Header Compression
  • TCP header compression uses the Van Jacobson
    algorithm, which is defined in RFC 1144.
  • TCP/IP header compression lowers the overhead
    generated by the disproportionately large TCP/IP
    headers as they are transmitted across the WAN.

61
TCP Header Compression
  • TCP/IP header compression is protocol specific
    and compresses only the TCP/IP header, which
    leaves the Layer 2 header intact to allow a
    packet with a compressed TCP/IP header to travel
    across a WAN link.
  • It is beneficial on small packets with few bytes
    of data, such as Telnet.
  • Cisco header compression supports X.25, Frame
    Relay, and dial-on-demand WAN link protocols.
  • Because of processing overhead, header
    compression is generally used at lower speeds,
    such as 64-Kbps links.

62
MPPC
  • Microsoft Point-to-Point Compression (MPPC)
  • MPPC is a means of representing arbitrary PPP
    packets in a compressed form.
  • A typical scenario is a Windows 95 client
    connecting to a Windows NT server.
  • The Windows 95 client software supports both MPPC
    and LZS, whereas the Microsoft NT server supports
    only MPPC, which is negotiated during the
    Compression Control Protocol (CCP) process.

63
Compression Commands
  • Link Compression
  • Router(config-if)#compress {predictor | stac |
    mppc}
  • Payload Compression
  • Router(config-if)#frame-relay payload-compress
  • TCP/IP Header Compression
  • Router(config-if)#ip tcp header-compression
  • Remember: Never recompress data. Compressed data
    does not compress; it actually expands.
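  • For example, STAC link compression could be
    enabled on a point-to-point PPP serial link like
    this (the interface and algorithm choice are
    illustrative)
  • interface serial 0
  • encapsulation ppp
  • compress stac
  • The frame-relay payload-compress and ip tcp
    header-compression commands are likewise entered
    in interface configuration mode.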