Title: Proactive network management models a network using network traffic as input to simulate current and future behavior of the network and predict the impact of the addition of new applications on the network performance

1
Introduction
• Proactive network management models a network, using network traffic as input, to simulate the current and future behavior of the network and predict the impact of adding new applications on network performance.
• In a model, network managers can change the network by adding new devices, workstations, servers, and applications, or by upgrading to higher-speed network connections, and can perform what-if scenarios before committing to the changes.

2
Introduction (cont.)
• A model can be used to
• evaluate various design alternatives or various operational policies,
• explore the behavior of proposed systems/connections before actually building them,
• pre-test modifications.

3
1. Types of Simulation: Systems, Models, Discrete-event Simulation
• Static and dynamic simulation models: A static model characterizes a system independently of time. A dynamic model represents a system that changes over time.
• Stochastic and deterministic models: If a model represents a system that includes random elements, it is called a stochastic model. Otherwise it is deterministic.
• Queueing systems, the underlying systems in
network models, contain random components, such
as arrival time of packets in a queue, service
time of packet queues, output of a switch port,
etc.

4
Types of Simulation (cont.)
• Discrete and continuous models
• A continuous model represents a system with state
variables changing continuously over time.
Examples are differential equations that define the rate of change of some state variables with respect to time.
• A discrete model characterizes a system where the
state variables change instantaneously at
discrete points in time. At these discrete points
in time some event or events may occur, changing
the state of the system. For instance, the
arrival of a packet at a router at a certain time
is an event that changes the state of the port
buffer in the router.
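As an illustration of that event-driven state change, here is a minimal C sketch of a discrete-event loop; the event list is hypothetical and hard-coded (it is not taken from COMNET or any particular simulator), and the simulation clock simply jumps from one event time to the next:

#include <stdio.h>

enum { ARRIVAL, DEPARTURE };
typedef struct { double time; int type; } Event;

int main(void) {
    Event events[] = {          /* already sorted by time */
        {0.10, ARRIVAL}, {0.25, ARRIVAL}, {0.30, DEPARTURE},
        {0.42, ARRIVAL}, {0.55, DEPARTURE}, {0.70, DEPARTURE},
    };
    int n = (int)(sizeof events / sizeof events[0]);
    int buffer = 0;             /* state variable: packets queued in the port buffer */
    double clock = 0.0;         /* simulation clock */

    for (int i = 0; i < n; i++) {
        clock = events[i].time;                          /* jump to the next event */
        buffer += (events[i].type == ARRIVAL) ? 1 : -1;  /* instantaneous state change */
        printf("t=%.2f  buffer=%d\n", clock, buffer);
    }
    return 0;
}

A real simulator replaces the fixed array with a priority queue of future events and schedules new events as each one is processed.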

5
2. Simulation vs. Emulation
• The purpose of emulation is to mimic the original
network and reproduce every event that happens in
every network element and application.
• In simulation, the goal is to generate
statistical results that represent the behavior
of certain network elements and their functions.
• In discrete event simulation, we want to observe
events as they happen over time, and collect
performance measures to draw conclusions on the
performance of the network, such as link utilizations, response times, router buffer sizes, etc.

6
Simulation Objectives
• Performance modeling: Obtain statistics for various performance parameters of links, routers, switches, buffers, response time, etc.
• Failure analysis: Analyze the impacts of network element failures.
• Network design: Compare statistics about alternative network designs to evaluate the requirements of alternative design proposals.
• Network resource planning: Measure the impact of changes on the network's performance, such as the addition of new users, new applications, or new network elements.

7
Reasons for Predicting Network Performance
• Keep user response time low
• Increase user productivity
• Provide for future growth
• Ensure successful deployment of new applications
• Validate response time goals of new network
designs and troubleshoot for bottlenecks
• Choose among competing network applications, and
the best network topology

8
Costs of a Poorly Performing Network
• Lost sales opportunities
• Low customer and user satisfaction
• Slipped schedules
• Low morale

9
Model Development Life Cycle (MDLC), Discussed Later in More Detail
• Identify all IT assets: physical (topology, devices, links, routers, etc.) and nonphysical (registered IP networks)
• Measure the raw utilization of each asset (CPU, buffer, bandwidth utilization, up-time; for large applications, response time)
• Raw utilization is broken down to garner usage statistics for each host, segment, protocol, application, and user.
• Evaluate statistics

10
Model Development Life Cycle (cont.)
• Build the baseline (Import real data from current
traffic) to tune and validate the model.
• Use existing applications or create an
application environment using a testbed similar
to the assumed future applications. Measure its
traffic, and/or
• Use the estimates of experts who know the application and user requirements.
• Import the traffic to the model.

11
Model Development Life Cycle (cont.)
• Build the model
• Run the model to simulate the effect of
anticipated changes and predict the behavior of
various parts of the network assuming certain
traffic growth.
• Gather further data as the network grows and changes, and as we learn more about the applications.
• Repeat the same sequence.

12
3. Types of Communications Networks, Modeling Constructs
• LANs
• WANs
• Wireless

13
Transmission Technology: Broadcast and Point-to-point Networks
• In broadcast networks a single communication
channel is shared by every node. Nodes
communicate by sending packets or frames received
by all the other nodes. The address field of the
frame specifies the recipient or recipients of
the frame. Only the addressed recipient(s) will
process the frame. Broadcast technologies also
allow the addressing of a frame to all nodes by
dedicating it as a broadcast frame processed by
every node in the network. It is also possible to
address a frame to be sent to all or any members of only a group of nodes. These operations are called multicasting and anycasting, respectively.
• Point-to-point networks consist of many
connections between pairs of nodes. A packet or frame sent from a source to a destination may have to first traverse intermediate nodes, where it is stored and forwarded until it reaches the final destination.

14
Physical Area Coverage
• Personal area networks (PANs) support a person's
needs. For instance, a wireless network of a
keyboard, a mouse, and a personal digital
assistant (PDA) can be considered as a PAN.
• Local area networks (LANs), typically owned by a person, a department, or a smaller organization, at home, on a single floor, or in a building, cover a limited geographic area. LANs connect workstations, servers, and shared resources.
• LANs can be further classified based on the
transmission technology, speed measured in bits
per second, and topology. Transmission technologies range from traditional 10 Mbps LANs to today's 10 Gbps LANs. In terms of topology,
there are bus and ring networks and switched
LANs.

15
Physical Area Coverage (cont.)
• Metropolitan area networks (MANs) span a larger
area, such as a city or a suburb. A widely
deployed MAN is the cable television network
distributing not just one-way TV programs but
two-way Internet services as well in the unused
portion of the transmission spectrum.
• Wide area networks (WANs) cover a large
geographical area, a state, a country or even a
continent. A WAN consists of hosts (clients and
servers) connected by subnets owned by
communications service providers. The subnets
deliver messages from the source host to the
destination host. A subnet may contain several
transmission lines, each one connecting a pair of
specialized hardware devices called routers.
• Transmission lines are made of various media
copper wire, optical fiber, wireless links, etc.
When a message is to be sent to a destination
host or hosts, the sending host divides the
message into smaller chunks, called packets. When
a packet arrives on an incoming transmission
line, the router stores the packet before it
selects an outgoing line and forwards the packet
via that line.

16
Wireless Networks
• Wireless networks can be categorized as
short-range radio networks, wireless LANs, and
wireless WANs.
• In short-range radio networks, for instance Bluetooth, various components such as digital cameras, Global Positioning System (GPS) devices, and keyboards are connected via short-range radio connections within 20-30 feet. The components
are in primary-secondary relation. The main
system unit, the primary component, controls the
operations of the secondary components. The
primary component determines what addresses the secondary devices use, and when and on what frequencies they can transmit.
• A wireless LAN consists of computers and access
points equipped with a radio modem and an antenna
for sending and receiving. Computers communicate
with each other directly in a peer-to-peer
configuration or via the access point that
connects the computers to other networks. Typical
coverage area is around 300 feet. The wireless
LAN protocols are specified under the family of
IEEE 802.11 standards, for a range of speeds from 11 Mbps to 108 Mbps.

17
Wireless Networks (cont.)
• Wireless WANs comprise low-bandwidth and high-bandwidth networks. The low-bandwidth radio networks used for cellular telephones have evolved through three generations. The first generation was designed only for voice communications utilizing analog signaling. The second generation also transmitted only voice, but based on digital transmission technology. The current third generation is digital and transmits both voice and data, at up to 2 Mbps. Fourth and further generation cellular systems are under development. High-bandwidth WANs provide high-speed access from homes and businesses, bypassing the telephone systems. The emerging IEEE 802.16 standard delivers services to buildings, not mobile stations as the IEEE 802.11 standards do, and operates in the much higher 10-66 GHz frequency range. The distance between buildings can be several miles.

18
Wireless Networks (cont.)
• Wired or wireless home networking is getting more and more popular, connecting various devices together that can be accessed via the Internet. Home networks may consist of PCs, laptops, PDAs, TVs, DVDs, camcorders, MP3 players, microwaves, refrigerators, A/C, lights, alarms, utility meters, etc. Many homes are already equipped with high-speed Internet access (cable modem, DSL, movies on demand).

19
Modeling Constructs
• The various components and types of
communications networks correspond to the
modeling constructs and the different steps of
building a simulation model.
• Typically, a network topology is built first,
followed by adding traffic sources, destinations,
workload, and setting the parameters for network
operation.
• The simulation control parameters determine the
experiment and the running of the simulation.
Prior to starting a simulation various statistics
reports can be activated for analysis during or
after the simulation.
• Statistical distributions are available to
represent specific parameterizations of built-in
analytic distributions. As the model is
developed, the modeler creates new model
libraries that can be reused in other models as
well.

20
3.a Introduction to Comnet
• COMNET III is a graphical, off-the-shelf
performance analysis tool for computer and
communication networks.
• Based on a description of a network, its control algorithms, and workload, COMNET III simulates the operation of the network and provides measures of network performance.
• Building-block approach where the blocks are
objects you are familiar with in the real
world.
• Library of objects

21
Introduction to Comnet (cont.)
• What-if scenarios
• Animated picture of the network configuration
• No programming required
• Comnet Predictor
• Baseliner
• Application Modeler
• Profiler

22
Introduction to Comnet (cont.)
• Simulation follows after network description
passes verify check
• Vital statistics collected during simulation in
over 130 reports which can be turned on and off
• Reports show message/packet delays, queuing statistics, node and link utilization, I/O buffer statistics, blocking probabilities, etc.
• Real-time and post-processed plots
• Object-oriented discrete event simulation
technology

23
Comnet Demo
• Open Comnet model (Demo.c3)
• Real-time plot
• Simulation, animation
• Snapshot measures color
• Response sources, statistics

24
Comnet Demo (cont.)
• View post processed message delay statistics in
response sources (Message delay on/off)
• Browse reports
• Select reports
• Snapshots
• Build new model

25
• City/Highway traffic

26
4. Performance targets for Simulation Purposes,
Performance Metrics
• Transmission protocols
• Routers
• Gateways
• Hubs
• Network switches
• Out-of-band management tools
• Enterprise management tools
• Benchmark tools
• System and performance analysis tools

27
Performance targets for modeling purposes (cont.)
• Computer platforms
• Operating systems
• Graphical user interfaces
• Network operating systems
• Application software
• User activities
• Network architecture
• Network configuration

28
Performance issues
• Channel capacity (similar to the traffic-carrying capacity of a highway)
• A function of the number of parallel lanes, the traffic speed, interlane traffic shear and interrelationships, and delays.
• Singular traffic events, such as accidents,
breakdowns, or inattentive drivers can affect the
channel capacity as well.

29
Bandwidth
• Data transmission signal speed, the wire speed,
channel bandwidth
• Protocol transmission speed: For example, modem speeds are 56,000 bits/s, Token-Ring is typically 4 or 16 Mbits/s, Ethernet is 10, 100, or 1000 Mbits/s, and ATM ranges upward from 25 Mbits/s.
• A 100 Mbps Ethernet supporting a client/server load transports only 2.32 Mbits/s as a maximum throughput (a limit imposed by the servers, NICs, and workstations, not by the channel bandwidth itself).

30
Bandwidth
• Transmission signal speed: the electronic or optical signal used to send a message
• Channel signal speed is the bandwidth or the wire
speed. The signal propagation speed is the speed
at which the signal traverses the channel, and is
mostly a factor of the medium and the speed of
light

31
Throughput
• The volume of traffic measured in bits per second
• Protocol transmission speed
• Traffic: a measure of network load (frames or packets per second)

32
Routing
• LAN vs. WAN path
• A network connecting an enterprise comprises multiple paths: multiple bridges, routers, multiport hubs, spanning trees, and parallel links.
• The route may be as simple as choosing a modem
connection or as complex as factoring multiple
channel speeds over connected segments, device
buffering rates and latencies, blocking times,
and the need to avoid saturated links.

33
• Data and messages are encapsulated inside cells, frames, or packets. E.g., ATM cells are 53 bytes. The payload is also likely to include other overhead, such as route and path information, and configuration and process control data.
• TCP/IP messages and SNMP are transported as data in cells.

34
• The difference between the effective and the actual data load comes from protocol indicators, polling, status markers, and in-band and out-of-band (side-band) management traffic.

35
Delay (latency)
• Data communications signal propagation speed is finite: a significant fraction of the speed of light, at 50 to 90 percent.
• The station-to-station latency for VSAT is about a quarter of a second; cross-country it is about 100 ms; transglobal it is about an eighth of a second.
• Nodes communicating on opposite ends of an Ethernet will experience about 9.6 microseconds of delay.

36
Delay (latency)
• Transitions reduce transmission speed (data is encapsulated, translated, and buffered on data networks).
• There are multiple transitions and inherent
latencies at each transitional device between
channels and segments.
• Traffic that spans multiple segments is delayed
not only by the inherent signal speeds, but also
by interconnecting network devices.

37
Delay (cont.)
• Bridges, routers, gateways, and switches add delays that range from 40 µs to several hundred or thousand µs. These delays are at least as significant as the delays caused by channel signal propagation speed (1.6 µs for FDDI, 25 µs for an average Token-Ring, 51.2 µs minimum after an Ethernet collision).

38
Delay (cont.)
• Bad cells and packets force retransmission from
the initial source
• Signal loss, signal errors, CRC errors, bad data,
data out of sequence, device or channel failures,
device or channel overloads, and other events are
analogous to traffic accidents. Slowdowns cause
packets to overflow buffers.
• These packets are typically dropped with the
expectation of a later retransmission

39
Accidental Latency
• If a link fails, traffic on either side of that
pipe is halted, or at least detoured through
longer or slower links on the enterprise network.
• Frame relay lines fail, ISDN connections drop,
and modem connections wither with line noise.
• Network devices can saturate a line with
gibberish, or fail to relay or route data traffic
at all.
• Workstations, servers, and hosts fail outright or
experience processing problems that create
performance backlogs and stoppages.

40
Accidental Latency (cont.)
• Even when linkages are supported with alternate
or backup pipes, there are likely to be
significant delays while these routes are
switched on-line, enabled, and routers, gateways,
and switches are updated with new physical and
logical routing information.
• Cascading failure or network panic: If a server slows down, client requests can saturate the channel and prevent completion of ongoing tasks.

41
Accidental Latency (cont.)
• Users re-requesting services
• SNA time-outs, failed status, and lack of
response create gridlock
• You may observe "normal" traffic levels and bandwidth utilization until you parse the content of the traffic and recognize that time-outs, IP acknowledgments, duplicate messages, and router service broadcasts are flooding the network.

42
Accidental Latency (cont.)
• Router may be available but not actually
functioning while corrupted routing tables are
being rebuilt with new network information.
• Advanced tools (such as NetView, OpenView, and
UniCenter) try to provide status and qualitative
information so that you can monitor host, server,
router, gateway, and channel performance.

43
Peaks and Bursts
• Extreme cyclical or recurring workload exceeds
the capacity of network components, thus creating
a momentary bottleneck, a traffic jam or network
panic. In extreme cases, these peaks and bursts
collapse bridges, routers, LAN segments, and
processors.
• A sudden network traffic jam creates sluggish performance, process backlogs, slowed response times, and a decrease in actual work throughput.

44
Peaks and Bursts (cont.)
• Traffic always exhibits this pattern of peaks and bursts.
• Most network traffic has the self-similar characteristic of peaks and bursts.
• Peaks and bursts aggregate statistically across the infrastructure and are best characterized by the fractal mathematics of self-similarity.

45
Peaks and Bursts (cont.)
• Linear estimates do not scale to larger networks. For example, if 20 clients create 1,000 units of network traffic, 40 identical clients will not realistically or necessarily create 2,000 units of traffic.
• Visualization and consequences of burstiness
• Mathematics Awareness Week

46
Frame Size
• Larger packets have a greater tendency to overrun
buffers in intermediate nodes.
• The processing effort for larger packets is the
same as for smaller packets, so larger packets
are more efficient. However, router queues are
more likely to fill with the large packets.

47
Frame Size (cont.)
• On the other hand, given two networks with the same overall bandwidth utilization, the one with more packets (that are obviously smaller) will tend to have greater Ethernet collision rates or longer token rotation times.
• Translations or encapsulations among different network types add overhead: a single large frame from Ethernet may represent more than 40 ATM cells after translation. The bandwidth does not necessarily offset the increased latencies of the cell streams.
• MTU size

48
Volume
• The load must be less than the capacity.
• Either reduce the load, or increase the capacity.
• If you alter the transmission bandwidth capacity,
you are likely to create performance pressures
elsewhere.

49
Data Leakages and Losses
• Misdelivery of information or incomplete delivery.
• Network leaks and losses occur at routers,
bridges, gateways, and switches. These
intermediate network nodes can become traffic
"black holes" due to configuration errors,
buffering, and bottlenecks.

50
4.a Comnet Message Traffic Generation
• Application sources using global and local
commands (later)
• Traffic sources: message, response, and session sources.
• Message Source
• The message source is a message generator capable of sending messages to one or more destinations (FTP, e-mail, etc.)

51
Response Source
• The response source is a message generator used
to send message replies upon receipt of a
message, and any type of message traffic which
would be triggered by the receipt of a message.
The message which is generated by a response
source is always sent to the node which generated
the message which triggered the response source
(Database queries, e-mail replies, etc.)

52
Session Source
• The session source is a message generator which first sets up a session with another node, and then sends the message traffic.
• It is useful for modeling message sources that have a bursty message arrival process, as several messages may be transmitted within one session.
• The session source is also used to model
connection-oriented traffic.

53
Common Features of All Sources
• unique name to the source
• scheduling method
• message priority
• routing class
• selection of a transport protocol
• setting of a packetizing delay
• selection of message size and text
• the choice of the traffic destination

54
Features
• Message Name
• The message name is a unique identifier given to
the source for identification purposes
• Message Scheduling
• by iteration time, by received message, or by
triggering event
• Interarrival time: a fixed value, a user-defined distribution, or any of the distributions supported.
• First and last arrival
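As a sketch of the interarrival-time idea, the following C fragment draws exponentially distributed interarrival times by inverse-CDF sampling; the rate of 2 messages per second is an assumed parameter, and this is illustrative code, not COMNET's internal implementation:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>   /* link with -lm */

/* Draw one exponential interarrival time for a given arrival rate. */
double exp_interarrival(double rate) {
    double u = (rand() + 1.0) / ((double)RAND_MAX + 2.0);  /* u in (0,1) */
    return -log(u) / rate;                                 /* inverse-CDF sampling */
}

int main(void) {
    double t = 0.0;
    for (int i = 0; i < 5; i++) {
        t += exp_interarrival(2.0);   /* assumed rate: mean interarrival = 0.5 s */
        printf("message %d arrives at t=%.3f s\n", i + 1, t);
    }
    return 0;
}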

55
Features (cont.)
• Message Transport Protocol
• ATP; TCP/IP (Microsoft); TCP/IP (Sun); UDP/IP (Sun); NCP/IPX
• Burst Mode
• Message Priority
• Message Routing Class
• Packetizing Delay

56
Message Size
• Any of the statistical distributions supported in
COMNET III may be used to model the size of the
message generated (Pearson distribution
functions)
• If received-message scheduling is used, the size of the message may be based on the size of the message which triggered the traffic source:
• Message size = message size multiplier × received message size + offset

57
Message Text
• Message text can then be used to trigger an
application or traffic source at the receiving
node.
• Use original message
• Copy message name
• Set message text

58
Message Destination
• Random Neighbor
• Random List
• Weighted List
• Multicast List
• Least Busy List
• Example

59
Computer and Communication Nodes
• The computer and communication (CC) node is a generic node that is used for modeling end systems such as computers, printers, facsimile machines, or any general piece of network hardware.
• The CC node may act as the origin or destination for message traffic, run applications, or act as a switching point within a network. Links may connect to this type of node for modeling a network.

60
Attributes
• Input buffers for each link connected to the node
for accepting packets transmitted to the node
• A processor for command execution and packet
processing
• Output buffers for each link connected to the
node through which the node may route packets
• Local disk storage for modeling disk read and
write processes

61
Attributes (cont.)
• A command list which defines how application
commands are to be executed on the node
• A pending application list of all applications
and traffic sources currently scheduled to run
• A prototype application list of all available applications and traffic sources for the node, with messages used to trigger an application or traffic source
• A list of files which reside on the local disk

62
CC Node Parameters
• Computer and Communication Node Parameters
• Computer Group Nodes
• Ports
• Example

63

64
Routers, switches

65
Synonyms for Performance Bottlenecks
• Some synonyms for an enterprise network bottleneck:
• Slow-down
• Stoppage
• Congestion

66
Synonyms for Performance Bottlenecks (cont.)
• Traffic jam
• Gridlock
• Backup or backlog
• Processing overrun
• Resource limitation
• A slow or long route
• Inactive channel
• Interrupted pipeline
• Error

67
4.b Components of an Enterprise Network
• Infrastructure - design, architecture, protocols, implementation
• People - skill levels, training, support facilities, and experience
• Intermediate nodes - hubs, repeaters, bridges, routers, switches, and gateways
• Organization - politics, goals, locations, funding source, stability

68
Components of an Enterprise Network (cont.)
• Hosts - mainframes, minicomputers, and servers
• CPUs - processors, motherboard design, and operating systems
• Network - protocol, transmission speeds, hops
• Applications - NOS, operations, and software
• Window/graphic interface - system, library, accelerators, and device drivers
• Disk - controller speed, driver algorithms, cache

69
Components of an Enterprise Network (cont.)
• Memory - cache type, size, cost, speed
• Database - buffer sizes, lock time-outs, number of users, and caches
• System kernel - base size, efficiency, buffer size, paging, tuning, and configuration
• Executable code - runtime or compiled, native or interpreted, environment, file system, network operating system APIs, and source code algorithms

70
Performance Bottlenecks
• Design
• Wiring infrastructure
• Security
• Purpose
• Reliability
• Cost
• Speed
• Sophistication
• Environment
• Disk space
• Scalability
• Redundancy
• Integration
• Structural flaws
• Compatibility
• Performance
• Memory
• Functionality
• Network
• Platform independence

71
Performance Bottlenecks (cont.)
• Maintainability
• Time/life span
• Priorities
• Ease of use/complexity
• Organizational culture

72
Top Ten Performance Hits

73
Performance of Audio/Video Transmissions
• Sampling and Digitizing/Digital-to-analog
conversion (DAC)
• Different DAC techniques are used; their properties, like bandwidth capacity and latency, are factors in the selection of these techniques.
• Pulse code modulation (PCM) is the most common
sampling technique used to turn audible sounds
into digital signals.

74
It takes 50 frames to represent 1 second of
digitized sound.
75
Groups of telephony codecs
• PCM codecs, which are the basic 64 kbps codecs, and
• Vocoders, which are the codecs that go a step beyond the essential PCM algorithm.
• The codec you'll see most often: G.711
• 8-bit PCM digitization for 8 kHz linear audio signals. G.711 is the least processor-intensive codec.
• No compression.
• µ-law and A-law are two variations of the PCM digitizing technique used in the G.711 codec. One uses a logarithmic digitizing scale to grade amplitude levels, while the other uses a linear one.

76
Vocoders
• G.721, G.723, G.726, G.728, and G.729A
• These codecs enable significantly more economical use of the network, permitting high-quality sound reproduction at a bitrate of 8 to 32 kbps. Unlike
G.711, this group of codecs uses Adaptive
Differential Pulse Code Modulation (ADPCM) or
Code Excited Linear Prediction (CELP) algorithms
to reduce bandwidth requirements.
• ADPCM conserves bandwidth by measuring the
deviation of each sample from a predicted point
rather than from zero, allowing fewer bits to be
required to represent the historically 8-bit PCM
scales. CELP uses a newer variation of this
approach.

77
Vocoders
• G.722: Called a wideband codec because it uses double the sampling rate (16 kHz rather than 8). The effect is much higher sound quality than the other VoIP codecs. Other than that, it's identical to G.711.
• GSM: The global system for mobile communications codec offers a 13 kbps bit stream that has its roots in the mobile phone industry. Like many of the ITU-recommended codecs, GSM uses a form of CELP to achieve sample scale compression, but is much less processor intensive.

78
Vocoders
• iLBC: The Internet low-bitrate codec is a free, proprietary audio codec that provides similar bandwidth consumption and processing intensity to G.729A, but with better resilience to packet loss.
• Speex: The Speex codec supports sampling rates of 8 to 32 kHz and a variable packet rate. Speex also allows the bitrate to change in midstream without a new call setup. This can be useful in bursty congestion situations, but is unlikely to matter much to enterprise networks that have quality-of-service measures and more reliability than the Internet. Speex is free, and open source implementations exist.

79
Codec packet rates
• The packet rate is the number of packets required per second (pps) of sound transmitted.
• Besides the bits that represent data, all data packets carry protocol overhead.
• Reducing overhead is crucial. One way to lower overhead is to reduce the number of packets per second used to transmit the sound.
• But this increases the impact of network errors on the voice call.
• So there needs to be some balance between what's saved in overhead and resiliency to errors.
• Different codecs have different packet rates and intervals.

80
Packet interval
• The gap between transmitted packets is called the
packet interval or interarrival rate, and it is
expressed in inverse proportion to the packet
rate.
• The shorter the packet interval, the more packets are required per second, and the more overhead.
• Some of the codecs, especially those that use
very advanced CELP algorithms, can require a
longer duration of audio at a time (say, 30 ms
rather than 20 ms) in order to encode and decode.

81
(cont.)
• A G.711 call, which normally fits on a 64 kbps channel, won't fit into a 64 kbps IP WAN connection. This is because it is wrapped in RTP and UDP packets, which are necessary overhead.

82
Lag (delay)
• The longer the packet interval, the longer the lag will be between the time the sound is spoken and the time it is encoded, transported, decoded, and played back for the listener. An IP packet isn't transmitted until it is completely constructed, so a VoIP sound frame can't travel across the network until it's completely encoded. A 30 ms sound frame takes a third longer to encode than a 20 ms one, and inflicts 10 ms more lag, too.

83
Lag vs. packet intervals
84
Lag
• Long packet intervals have another drawback the
greater the duration of sound carried by each
packet, the greater the chance that a listener
will notice a negative effect on the sound if a
packet is dropped due to congestion or a network
error.
• Dropping a packet carrying 20 ms of sound is almost imperceptible with the G.711 codec, but dropping a 60 ms packet is quite obtrusive. Since VoIP sound frames are carried in unreliable UDP datagrams, dropped packets aren't retransmitted.

85
Example
• Consider that 8,000 samples per second are required for a basic voice signal at 8 bits per sample. Now, assuming a 20 ms packet interval (1/50th of a second), it takes a minimum of 1,280 bits of G.711 data in each packet to adequately carry the sound:
• 64,000 bits per second / 50 = 1,280 bits per packet
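The same arithmetic as a small C sketch (the values are those from the slide):

#include <stdio.h>

int main(void) {
    double bitrate = 64000.0;  /* G.711: 8,000 samples/s x 8 bits per sample */
    double pps     = 50.0;     /* 20 ms packet interval -> 50 packets/s */
    printf("bits per packet = %.0f\n", bitrate / pps);  /* prints 1280 */
    return 0;
}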

86
TCP/IP overhead for 20 ms packet interval
87
• Increasing the packet interval to 30 ms (1/33rd of a second) results in a reduction in the number of packets required per second, raising the bit count per packet and reducing the amount of overhead required to transmit the sound:
• 64,000 bits per second / 33 ≈ 1,940 bits per packet

88
LAN/WAN
• Generally, on Ethernet-to-Ethernet calls, the use of G.711 with a 20 ms packet interval is encouraged, because a 100 Mbps data link can support hundreds of simultaneous 64 kbps calls without congestion, and a dropped packet at a 20 ms interval is almost imperceptible.
• On calls that cross low-bandwidth links, it's up to the administrator to balance between latency, possible reductions in sound quality incurred by using a compression codec, and network congestion.

89
Different codecs have different bandwidth
requirements.
90
Network Performance Measurement Tools
• Based on http://dast.nlanr.net/Guides/GettingStarted/Performance.html

91
Parameters to improve
• Measurement usually looks at one or more of the
following
• Bandwidth -- how much data can be transferred per
unit time -- is the most obvious.
• Delay -- how long it takes an individual piece of
data to traverse the network -- is important for
real-time applications like video conferencing
and remote instrument control, and also impacts
bandwidth.
• Packet loss -- when a piece of data disappears in
transmission -- affects both bandwidth and
real-time applications.
• A high-performance network is characterized by
high bandwidth, small delay, and low packet loss.

92
TCP
• handshaking mechanisms to establish a connection
between two machines
• capabilities for flow control (how much data can
be sent at a time)
• congestion control (what to do when packets are
dropped)
• polling for messages (how long to wait for an
incoming packet)
• retransmission of lost or corrupted data

93
Building a server
• initialize the socket
• bind the socket to a chosen port greater than
1024
• listen for incoming connections
• accept (or not) any incoming requests

94
Client side
• initialize a socket
• connect to the server
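A minimal C sketch of both step lists above, using standard Berkeley sockets calls; port 5001 and the loopback address are arbitrary choices, and error handling is omitted for brevity:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static void run_server(void) {
    int s = socket(AF_INET, SOCK_STREAM, 0);           /* initialize the socket */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5001);                /* port greater than 1024 */
    bind(s, (struct sockaddr *)&addr, sizeof addr);    /* bind to the chosen port */
    listen(s, 5);                                      /* listen for incoming connections */
    int conn = accept(s, NULL, NULL);                  /* accept an incoming request */
    close(conn);
    close(s);
}

static void run_client(const char *server_ip) {
    int s = socket(AF_INET, SOCK_STREAM, 0);           /* initialize a socket */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(5001);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);
    connect(s, (struct sockaddr *)&addr, sizeof addr); /* connect to the server */
    close(s);
}

int main(int argc, char **argv) {
    if (argc > 1 && strcmp(argv[1], "server") == 0)
        run_server();
    else
        run_client(argc > 2 ? argv[2] : "127.0.0.1");
    return 0;
}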

95
Maximum transmission unit (MTU)
• The largest packet size that can be sent across a
given network.
• For Ethernet, the MTU is 1500 bytes. The MTU may vary from network to network; the Path MTU is the largest packet size that can be sent across an entire network path.
• Path MTU Discovery is the algorithm used to find
the Path MTU.
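As an aside, on Linux an application can read an interface's MTU with the SIOCGIFMTU ioctl. A platform-specific sketch, with "eth0" as an assumed interface name:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>

int main(void) {
    int s = socket(AF_INET, SOCK_DGRAM, 0);        /* any socket works for this ioctl */
    struct ifreq ifr;
    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* assumed interface name */
    if (ioctl(s, SIOCGIFMTU, &ifr) == 0)
        printf("MTU of %s: %d bytes\n", ifr.ifr_name, ifr.ifr_mtu);
    close(s);
    return 0;
}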

96
Measurements
• Passive and active measurements: http://moat.nlanr.net/
• Simple network management protocol (SNMP) is used to gather statistics from routers and switches, including the number and size of IP packets, total bytes, router CPU utilization, and so on.

97
MRTG
• Multirouter traffic grapher (MRTG) is used to display bandwidth usage and other information over time. Web-accessible charts are updated every 5 minutes and show data summarized from the past day, week, month, and year. Abilene maintains an MRTG page.

98
OCXmon Passive Monitoring
• OC3mon and OC12mon machines are used to passively
examine network traffic without introducing any
traffic of their own.
• The machines tap into the OC3 and OC12 (optical
carrier level) fiber optic line connecting to the
wide-area network and analyze flows traversing
the network.
• These machines currently work only with ATM
(asynchronous transfer mode) networks, but POS
(packet over SONET) is in the works.

99
Active Measurement Program
• The NLANR Active Measurement Program (AMP) tests
bandwidth and delay between participating
institutions.
• Tests are run between each pair of machines,
forming a full mesh.
• A ping test, measuring round-trip time delay, is
run every minute.
• AMP also runs traceroute tests every 10 minutes
to show what networks are used between
institutions.
• Maximum TCP (transmission control protocol)
bandwidth tests can be run on demand.

100
Surveyor
• One-way delays are measured between Surveyor
machines at participating institutions.
• GPS (Global Positioning System) is used to
synchronize the machine clocks so one-way delays
can be computed accurately to within 50
microseconds.
• Using one-way delays, asymmetries in the network are revealed that normal round-trip time measurements do not show.

101
End-to-End Performance
• Treno emulates the TCP protocol stack using UDP
(user datagram protocol).
• We can use this to compare an operating system's
TCP implementation with a modern TCP
implementation that includes such improvements as
SACK (selective acknowledgement), FACK (forward
acknowledgement), and Path MTU Discovery.
• Treno also allows targeting individual routers
along the path, to discover what links are
problematic.

102
mping
• stresses the network, intentionally flooding the
router queues to test queuing properties.
• Using mping, we can find, for example, the
bandwidth and packet loss as the TCP window size
increases. Again, this tool is intended for
network engineers skilled at interpreting the
output.

103
traceroute, ping
• traceroute is used to find the path your data
takes through the network.
• ping will repetitively find round-trip time
measurements to a particular machine or router.
The repetition helps to reveal changes in delay.
Both ping and traceroute are standard UNIX
commands, often found in /sbin, /usr/sbin, /etc,
or /usr/etc.

104
mtr
• mtr ("Matt's TraceRoute") is a program that
combines the functionality of traceroute and ping
and presents the output data in an easy-to-read
tabular format.
• It repetitively pings each router along the path,
showing delay and packet loss.

105
tcpdump, tcptrace, xplot
• tcpdump is a standard UNIX utility to examine or
"sniff" the traffic on the network.
• tcptrace can be used to analyze the output from
tcpdump, and xplot will show the packets
graphically.
• xplot helps to reveal "pathological" network behaviors.

106
Application Performance
• Real-time Transport Protocol is used by many
real-time applications (e.g., video conferencing)
to monitor and respond to network conditions. It
detects delay, delay jitter, and packet loss. RTP
uses RTCP (RTP control protocol) to give
performance feedback to the sending application.
• overview of RTP

107
TCP Window Size
• The TCP window size is by far the most important
parameter to adjust for achieving maximum
bandwidth across high-performance networks.
• Properly setting the TCP window size can often
more than double the achieved bandwidth. See the
User's Guide to TCP Windows for details.

108
MTU
• A small MTU wastes time processing many small
packets instead of fewer large ones.
• The system administrator can enable Path MTU
Discovery if the operating system implements it.
• Use Iperf with the -m (print MSS) option to check
the MTU. (See below)

109
Windows XP
• The primary TCP tuning parameters appear in the registry under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters.
• To enable high-performance TCP you must turn on RFC 1323 features (create REG_DWORD key "Tcp1323Opts" with value 3) and set the maximum TCP buffer size (create REG_DWORD key "GlobalMaxTcpWindowSize" with an appropriate value such as 4000000, decimal).
• If you want to set the system-wide default buffer size, create REG_DWORD key "TcpWindowSize" with an appropriate value. This parameter can also be set per interface at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interface\interfaceGUID, which may help to protect interactive applications that are using different interfaces from the effects of the large system-wide buffer size.

110
TCP tuning
• For applications that intentionally send small
chunks of data immediately, setting the TCP no
delay option may improve performance.
• Interactive applications, such as telnet, often
fall into this category.
• Normally TCP queues small writes to send out
larger packets.
• This queueing can sometimes be undesirable.
Imagine if you had to type 1000 characters in
telnet before a single one was displayed!

111
TCP no delay option
• To set TCP no delay, use setsockopt:
int nodelay = 1;
int error = setsockopt(socket, IPPROTO_TCP, TCP_NODELAY, &nodelay, sizeof(nodelay));
• Note the TCP no delay option does not apply to UDP-based applications, which always send data immediately.

112
Tuning UDP Application Performance
• For UDP applications, the burstiness of the
traffic can be an issue.
• An application may send a large burst of packets
back-to-back, followed by some idle time. The
average bandwidth looks reasonable, but the
bandwidth during the burst is excessive.
• Burstiness can be the result of two different parameters: how much time occurs between writes, and how large individual datagram writes are.

113
Time between writes
• If there is very little delay between writes, the
application may create a burst causing too much
stress on the network.
• Spacing out writes -- by putting in a sleep for
instance -- reduces this stress and thus reduces
packet loss.
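A sketch of that pacing idea in C, assuming an already connected UDP socket; the 10 ms gap and the datagram count are arbitrary illustrative values:

#include <stddef.h>
#include <unistd.h>
#include <sys/socket.h>

/* Send ndatagrams copies of buf, spaced 10 ms apart instead of
   back-to-back, to reduce stress on router queues. */
void paced_send(int sock, const char *buf, size_t len, int ndatagrams) {
    for (int i = 0; i < ndatagrams; i++) {
        send(sock, buf, len, 0);   /* one datagram per write on a UDP socket */
        usleep(10000);             /* 10 ms gap between writes */
    }
}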

114
Large UDP size
• If the datagram size is large, it will be broken
up into separate packets which will then be sent
as a single burst.
• This is actually worse than not spacing out writes, because if a single packet of the datagram is lost, the entire datagram must be discarded.
• So not only is there a burst that causes network
stress, but the effects of the resulting packet
loss are also magnified.
• Delay and jitter are also increased, because more
time is spent fragmenting the datagram into
separate packets and reassembling it again.

115
IPerf
• Documentation: http://dast.nlanr.net/Projects/Iperf/iperfdocs_1.7.0.html
• Features
• TCP
• Measure bandwidth
• Report MSS/MTU size and observed read sizes.
• Support for TCP window size via socket buffers.
• Client can create UDP streams of specified
bandwidth.
• Measure packet loss
• Measure delay jitter
• Multicast capable

116
IPerf
• Where appropriate, options can be specified with K (kilo-) and M (mega-) suffixes. So 128K instead of 131072 bytes.
• Can run for specified time, rather than a set
amount of data to transfer.
• Picks the best units for the size of data being
reported.
• Server handles multiple connections, rather than
quitting after a single test.
• Print periodic, intermediate bandwidth, jitter,
and loss reports at specified intervals.
• Run the server as a daemon (Check out Nettest for
running it as a secure daemon).
• Run the server as a Windows Service
• Use representative streams to test how link layer compression affects your achievable bandwidth.
• A library of useful functions and C classes.

117
Tuning a TCP connection
• The primary goal of Iperf is to help in tuning
TCP connections over a particular path.
• The most fundamental tuning issue for TCP is the
TCP window size, which controls how much data can
be in the network at any one point.
• If it is too small, the sender will be idle at
times and get poor performance.
• The theoretical value to use for the TCP window size is the bandwidth-delay product: bottleneck bandwidth × round-trip time.

118
Example
• The link is a 45 Mbit/sec DS3 link and the round-trip time measured with ping is 42 ms. The bandwidth-delay product is 45 Mbit/sec × 42 ms = (45e6) × (42e-3) = 1,890,000 bits ≈ 230 KByte.
• This is a starting point for figuring the best window size; setting it higher or lower may produce better results.
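A sketch of applying that starting point programmatically: compute the bandwidth-delay product from the slide's figures and request matching socket buffers via SO_SNDBUF/SO_RCVBUF (the operating system may round, cap, or otherwise adjust the requested value):

#include <stdio.h>
#include <sys/socket.h>

/* Request TCP buffers sized to the bandwidth-delay product. */
void tune_window(int sock) {
    double bw_bits_per_s = 45e6;    /* bottleneck bandwidth (DS3) */
    double rtt_s         = 42e-3;   /* round-trip time from ping */
    int bytes = (int)(bw_bits_per_s * rtt_s / 8.0);   /* ~236 KByte */
    setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof bytes);
    setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof bytes);
    printf("requested window: %d bytes\n", bytes);
}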

119
Examples
• Connect client to server, reporting every 2 s for 20 s, with the no-delay option:
• iperf -c hostname -i 2 -t 20 -N
• server> iperf -s
Server listening on TCP port 5001
TCP window size: 60.0 KByte (default)
• client> iperf -c server
Client connecting to server, TCP port 5001
TCP window size: 59.9 KByte (default)
[  3] local <IP Addr node1> port 2357 connected with <IP Addr node2> port 5001
[ ID] Interval       Transfer     Bandwidth
[  3] 0.0-10.0 sec   6.5 MBytes   5.2 Mbits/sec

120
Improve bandwidth performance using proper TCP
window sizes
• server> iperf -s -w 130k
Server listening on TCP port 5001
TCP window size: 130 KByte
• client> iperf -c node2 -w 130k
Client connecting to node2, TCP port 5001
TCP window size: 129 KByte (WARNING: requested 130 KByte)

121
Maximum Transmission Unit (MTU)
• Both hosts should support Path MTU Discovery (Maximum Segment Size (MSS) is equal to MTU minus 40)
• server> iperf -s -m
Server listening on TCP port 5001
TCP window size: 60.0 KByte (default)
WARNING: Path MTU Discovery may not be enabled.
[  4] MSS size 536 bytes (MTU 576 bytes, minimum)

122
Multicast
• Use the -B option while starting the server to bind it to a multicast address. E.g.: iperf -s -u -B 224.2.163.107 -i 1
• This will have the Iperf server listening for datagrams (-u) on the address 224.2.163.107 (-B 224.2.163.107), with a periodic reporting interval of 1 s (-i 1).
• Now, start a client sending packets to this multicast address, with a TTL depending on your network topology (if you are unsure, use a high TTL like 64 or higher).
• E.g.: iperf -c 224.2.163.107 -u -T 32 -t 10 -i 1
• This will have a UDP client (-u) connected to the multicast address, with a TTL of 32 (-T 32), sending data for 10 seconds (-t 10), with a periodic reporting interval of 1 s (-i 1).
• Start multiple clients as explained above, sending data to the same multicast server. (If you have multiple servers listening on the multicast address, each of the servers will be getting the data.)

123
General Bandwidth and Latency Issues
• Bandwidth is the available transmission capacity for any network device, channel, or linkage: a factor of the width and sustainable transmission speed. (A four-lane highway has greater bandwidth than a one-way street with four lanes in the city because the speed limits differ.)
• Latency is the time required to enter the street
or the highway, travel along it, and then exit at
the correct destination.

124
Bandwidth and Latency (cont.)
• Latency is the cumulative delay incurred as
packets pass through intermediate nodes, such as
repeaters, bridges, routers, gateways, and
switches, and the signal propagation time
point-to-point between the source and
destination.
• Latency is the round-trip time for a request to
be fulfilled and acknowledged over the enterprise
network.

125
Bandwidth and Latency (cont.)
• The latency for a single packet is the amount of
delay incurred from when a packet leaves the
source, passes through repeaters, bridges,
routers, gateways, switches, and the connecting
channels until it arrives at the destination.
• It also includes packet processing time (the time to encode and encapsulate data into a packet), packet transfer time (the time required to move the packet to the network itself), and the time a packet may sit in a router queue waiting to be forwarded.

126
Bandwidth and Latency (cont.)
• Latency is often measured with single direction
(one-way) streams with the result that it is
negligible for most intermediate nodes. However,
when tested on a bi-directional or backplane
environment, latency becomes significant.
• Vendors cite latency figures for large packets,
but rarely for small packets (where latency is
more of a problem). It is possible, however, to
approximate packet processing rate as the inverse
of the packet/s rate for small packets, as such

127
Bandwidth and Latency (cont.)
• Packet processing time = 1 / (packets per second)
• We can calculate the packet transfer time with the vendor's single-packet latency ratings for large packets. The packet transfer rate is approximately equal to packet size divided by latency for that packet size, as shown:
• Transfer time = processing time + (packet size / packet transfer rate)
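A small worked sketch of the two formulas; the vendor ratings below are assumed, illustrative numbers, not values from the slides:

#include <stdio.h>

int main(void) {
    double small_pkt_rate = 10000.0;  /* assumed packets/s rating for small packets */
    double transfer_rate  = 4.0e6;    /* assumed bytes/s from large-packet latency rating */
    double packet_size    = 1500.0;   /* bytes */

    double processing = 1.0 / small_pkt_rate;                     /* seconds */
    double transfer   = processing + packet_size / transfer_rate; /* seconds */
    printf("processing %.6f s, transfer %.6f s\n", processing, transfer);
    return 0;
}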

128
Bandwidth and Latency (cont.)
• Usually, packet latency is measured under conditions of low load. Actual latency through a bridge, router, gateway, or switch usually exceeds the single-packet latency and the packet transfer time.
• A normal one-way enterprise network delivery time
could vary from 0.25 s to 3 s and provide
transaction response times as long as 7 seconds.

129
Bandwidth and Latency (cont.)
• A less formal treatment of network delay is
called the end-to-end latency, which is the
cumulative effect on throughput (for a task such
as file transfer) caused by devices between
source and destination.
• End-to-end latency is expressed as a percentage of the throughput measured without an intervening bridge, router, gateway, or switch:
• End-to-end latency = system latency / (system latency + device latency)

130
Bandwidth and Latency (cont.)
• or
• End-to-end latency = 1 / (1 + (device latency / system latency))
• This shows that performance is heavily weighted by the latency of the intermediate connectivity device.

131
Transmission Latency
• Latency for ATM providing bandwidth of 622 Mbits/s on SONET is the same as for a digital connection rated DS0 providing only 56 Kbits/s of bandwidth. Although these transmission media are different, the signal propagation speed is constant for most media.
• The ATM link provides vastly greater traffic throughput (more than 10,000 times), but the transmission latencies on the wire are identical.

132
Transmission Latency (cont.)
• The modem signal from point-to-point might actually be faster, because the intermediate and terminating switching electronics of each ATM switch have latencies of about 170 ms (for a total of at least 340 ms), while the modem adds only 100 ms at each endpoint.

133
Transmission Latency (cont.)
• Latency is added into the transmission time by
any intermediate device. It is greater for
devices that buffer incoming transmissions and
hold them while earlier (or prioritized)
transmissions are forwarded.
• Print jobs and transfers with SNA protocols
tunneled into TCP/IP can hog so much bandwidth
that no interactive traffic can get through.

134
Latency (cont.)
• There are no clear rules for maximum utilization:
• An Ethernet with more than a few clients degrades
exponentially after 36 percent sustained
utilization.
• Token-Ring or FDDI networks will handle more
traffic, but with escalating increases in the
latency.
• Tools for measuring network latency (NFSWATCH,
NETSTAT, etc.) provide response time figures that
include the execution time on the server.
• Server statistics can be used to extract the
server component to calculate network delays.

135
Latency (cont.)
• Warning
• Divide a 50 percent utilized Ethernet into four segments and we may get Ethernet subnets each with a 40 percent load. The decreasing contention for the network lets more packets onto each subnet; we may also create a subnet load greater than the original.

136
Causes of Poor Application Performance
• Insufficient server capacity can cause degraded server performance. This can cause the server to be slow in processing client requests.
• It can also mean that the server cannot process
incoming traffic fast enough, resulting in a high
rate of dropped packets or retransmissions.

137
Causes of Poor Application Performance (cont.)
• Inefficient use of networking protocols can
contribute to network congestion and lowered
throughput.
• For example, although TCP/IP is basically an
efficient protocol, large numbers of small
packets and no use of windowing (meaning that
every packet must be acknowledged) can lead to
poor overall application performance.

138
Causes of Poor Application Performance (cont.)
• Poor application design can be responsible for
the inefficient use of network resources.
• For example, an application that does a lot of
Structured Query Language (SQL) operations over a
wide area network may not perform well, because
SQL is a relatively "chatty" protocol, sending
many small packets with high overhead.

139
Causes of Poor Application Performance (cont.)
• The design of individual client GUIs can have a
large effect on performance.
• For example, the more data items the user
interface accesses and displays for a given
transaction, the more demand each transaction
will put on the network.

140
4.c Enterprise applications
• Enterprise applications are a mixture: they are designed to run in a widely dispersed distributed computing environment. Data and application logic are typically centralized, but users are spread throughout the organization, all over the world in a multinational organization.
• Resources are centralized and users must
communicate via the wide area network (WAN) to
run the applications and access corporate data.

141
Multiple tiers
• Client/server computing is implemented in multiple tiers to allow separation of the computing tasks, so that processing for a given task is performed close to the resources required for that task, minimizing the amount of network communication required. Tasks are usually split among presentation (client), application, and database tiers.

142
Multiple tiers (cont.)
• Clients communicate across the network to the
application server to request and receive data in
support of a task or process the user is
performing. The application server in turn
communicates with the database server to retrieve
or store data in the database.
• In a two-tier application architecture, two of the tiers are collapsed: either the database server and application server reside on the same physical workstation, or the application GUI (presentation) and application server reside on the client workstation.

143
Traffic Characterization: Network Statistics
• Communications networks transmit data with random
properties. Measurements of network attributes
are statistical samples taken from random
processes, for instance, response time, link
utilization, interarrival time of messages, etc.
• In this section we review basic statistics that
are important in network modeling and performance
prediction.

144
Basic Statistics