Control Protocols in Wireless Networks - PowerPoint PPT Presentation
1
Control Protocols in Wireless Networks
  • Stephen Hanly

2
Overview
  • The network protocol stack
  • Motivation
  • MAC layer control issues
  • IEEE 802.11
  • Research issues: physical layer / MAC interaction
  • Transport layer: the TCP control protocol
  • TCP control issues
  • Wireless/TCP interaction
  • Research from the CUBIN wireless group
  • The CLAMP protocol: improving TCP for wireless
    networks
  • Throughput analysis of IEEE 802.11 with TCP
    sources
  • Conclusions

3
Network Protocol Stack
(Diagram: protocol stack with layers Application /
Transport / Data link (MAC) / Physical)
  • Physical layer: model the channel, design
    modulation/coding schemes
  • Most wireless researchers study the physical
    layer
  • Data link (MAC) layer: forward error correction
    coding, ARQ, hybrid-ARQ
  • Medium access control: CSMA, CSMA-CA
  • Quality of service
  • Codes are implemented at the data link layer
  • Transport layer: TCP flow control
  • Protocols designed by computer scientists
  • Overall throughput control occurs at the
    Transport layer
4
Motivation
  • Computer networking protocols today were designed
    by computer people, and adapted for wireless
    networks
  • E.g., 802.11 is Ethernet with collision avoidance
  • TCP at the transport layer was designed for
    wireline networks
  • There is scope for improving these protocols,
    taking into account physical layer issues
  • One can do experiments relatively simply
  • Protocols implemented in the network stack can be
    modified, or new ones implemented, using the
    Linux O/S
  • Ditto for some aspects of the MAC (Linux device
    drivers)
  • Sensor network devices have programmable MACs
  • The idea is to improve the higher layers to suit
    the physical and link layers, and not vice versa

5
MAC layer control issues
  • Multiple nodes share a common channel
  • Issues: capacity, contention, utilization, delay
  • Some methods are
  • polling (802.11 PCF)
  • demand assignment (HDR)
  • splitting (not applicable to wireless)
  • The traditional networking method is random
    access with binary exponential backoff

6
IEEE 802.11 DCF MAC
  • A distributed random access protocol using the
    CSMA/CA method of access.
  • A station listens to the shared channel to check
    that it is idle before transmitting a data
    packet.
  • Every transmitted data packet requires an ACK
    from the receiver.
  • After a successful transmission, there is a
    random backoff period before the next packet can
    be transmitted.

7
Binary exponential backoff
  • Backoff periods are measured in slots.
  • BO is drawn uniformly from {0, 1, ..., W-1},
    where W is the contention window.
  • BO is decremented by 1 for each idle slot.
  • Transmit when BO reaches zero.
  • A collision occurs when 2 or more stations try to
    transmit in the same slot.
  • W is doubled after each collision (up to a
    maximum).

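The backoff rule above can be sketched in a few lines. This is a toy model, not the full 802.11 state machine (real stations carry their counters across slots and freeze them while the channel is busy); the CW_MIN and CW_MAX values are assumed 802.11b-style limits, not values from the talk.

```python
import random

CW_MIN, CW_MAX = 16, 1024  # assumed 802.11b-style contention window limits

def draw_backoff(collisions):
    """BO is drawn uniformly from {0, ..., W-1}; W doubles with each
    collision suffered, up to a maximum."""
    w = min(CW_MIN * (2 ** collisions), CW_MAX)
    return random.randrange(w)

def contend(collision_counts):
    """One contention round: every station draws a backoff counter.
    The unique smallest counter wins the slot; a tie means two or more
    stations transmit in the same slot, i.e. a collision (None)."""
    draws = [draw_backoff(c) for c in collision_counts]
    lowest = min(draws)
    winners = [i for i, d in enumerate(draws) if d == lowest]
    return winners[0] if len(winners) == 1 else None
```

Calling `contend` repeatedly, incrementing a station's collision count each time it ties, reproduces the doubling behaviour described on the slide.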
8
MAC layer
9
MAC layer illustration
10
WLAN measurements
11
802.11 anomaly: one bad user
(Figure: throughput (Bytes/sec) vs. time)
12
Further problems with 802.11 DCF
  • Still suffers from hidden and exposed terminal
    problems
  • Suffers an instability problem (unstable in the
    infinite-node model) (Aldous)
  • Trivializes the physical layer
  • Only allows one successful transmission at a time
  • Spread spectrum capabilities at the physical
    layer are not exploited for multiple access
  • Similarly, MIMO is not fully exploitable for
    multiple access

(Diagram: hidden and exposed terminal scenarios)
14
Problems with 802.11 DCF (ctd)
  • Does not extend well to multihop networks
    (sensor networks)
  • Nodes in the centre
  • have the highest load
  • experience the most backoffs
  • Recent proposal (Gupta & Walrand)
  • decrease backoff on collisions
  • rescale windows

15
Solutions?
  • How can we take advantage of smart physical
    layers (CDMA, MIMO, MUD)?
  • Multipacket reception
  • How do we control contention?
  • How do we minimize the feedback required to
    stabilize the protocol?
  • Can we employ power control to solve the network
    problem?
  • Controlled Aloha has a long history (70s and 80s)
  • Feedback was binary: was the slot empty or not?
    (Ghez, Schwartz & Verdu '89)
  • We have recently shown
  • improved binary feedback (was the number of
    attempts, i.e. the interference, > threshold?)
    does much better
  • the feedback rate can be arbitrarily low

16
Transmission Control Protocol
Transport layer flow control: this is the layer
responsible for end-to-end delivery of packets
(Diagram: two hosts, each with an Application /
Transport / Data link (MAC) / Physical stack,
connected across the Internet)
  • Transmission Control Protocol (TCP)
  • reliable delivery
  • flow control

17
Flow Control
  • Objectives
  • Operate bottleneck links at high utilization
  • Prevent high levels of packet loss
  • Avoid excessive delay
  • Provide fairness between different flows
  • Ideally, operate the bottleneck link at the knee
18
Flow Control: TCP Reno
(Diagram: TCP/IP clients and servers connected
across an all-IP core, with propagation delays
d1, d2, d3)
  • TCP Reno
  • window = number of packets in the network
  • includes packets and ACKs
  • Probe the network for bandwidth
  • Increase window by 1 packet per RTT
  • Halve window on a loss
  • Additive increase / multiplicative decrease
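The Reno rule above (additive increase, multiplicative decrease) can be written as a minimal sketch:

```python
def aimd_update(cwnd, loss):
    """TCP Reno congestion avoidance: grow the window by 1 packet per
    RTT; halve it on a loss (never below 1 packet)."""
    return max(1.0, cwnd / 2.0) if loss else cwnd + 1.0

# The familiar sawtooth: ten loss-free RTTs, then one loss event.
cwnd = 8.0
for _ in range(10):
    cwnd = aimd_update(cwnd, loss=False)   # 8 -> 18
cwnd = aimd_update(cwnd, loss=True)        # 18 -> 9
```

Iterating this rule produces the oscillation around the bottleneck capacity discussed on the next slide.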
19
TCP Reno: oscillations and fairness
  • Only truly fair if
  • single bottleneck link
  • identical propagation delays

20
TCP Reno problems
  • TCP has a proud history in controlling internet
    congestion, but
  • it fills network buffers, leading to
  • poor throughput for "mice" TCP flows,
  • delay and jitter for real-time services sharing
    network buffers
  • unfairness towards flows with large propagation
    delays
  • low utilization in high bandwidth-delay product
    networks
  • it can interfere with lower-layer optimizations
  • being loss-based, it does not operate at the
    load/utilization knee
  • Specifically, concerning utilization
  • high utilization requires network buffers to
    store a bandwidth-delay product of packets
  • if not, the buffer empties → low throughput
  • if so → delay and jitter

21
TCP Reno throughput vs. delay
(Figure: channel rate 1.5 Mbps, propagation delay
0.41 s, packet size 500 bytes; delay 0.16 s)
22
Wireless-specific problems with TCP
  • Loss-based congestion control is not suited to
    lossy wireless links
  • Delay and jitter problems are worsened if the
    wireless link is made reliable
  • Queueing fluctuations increase with link layer
    retransmissions
  • Time-outs can occur
  • TCP's queue fluctuations do not suit smart
    wireless schedulers operating at the MAC layer

Existing Solutions
  • TCP reacts to loss, so wireless links are made
    (fairly) reliable
  • TCP can't control buffer occupancy
  • Solution: large buffers
  • Always have packets ready to send
  • Large delays, jitter

23
Existing Solutions (ctd.)
  • Performance Enhancing Proxies
  • Break the connection, and do link layer
    retransmission at the transport layer

24
Existing solutions (ctd)
  • Allocation: weighted round robin or other
    scheduling policy
  • The access point determines the allocation
    policy
  • Implemented at the MAC layer

25
TCP Vegas
  • Vegas does not react to loss, but to round trip
    times
  • Tries to detect when the pipe is full, keeping a
    certain number of packets buffered in the network
  • Vegas measures the actual round-trip time
  • it also measures propagation delay
  • actual sending rate
  • expected sending rate

Vegas requires a choice of two constants a and b
with a < b
  • the window is adjusted as follows
  • if expected rate - actual rate > b then decrease
  • if expected rate - actual rate < a then increase

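The adjustment rule above, taken literally, looks like this. (Real Vegas scales the expected/actual difference by the base RTT so that a and b are measured in buffered packets; the sketch follows the slide's simpler rate-difference form.)

```python
def vegas_window(cwnd, base_rtt, rtt, a, b):
    """One TCP Vegas-style update.  The expected rate uses the
    propagation delay (base_rtt); the actual rate uses the measured
    round-trip time, which includes queueing."""
    expected = cwnd / base_rtt   # rate if nothing were queued
    actual = cwnd / rtt          # rate actually achieved
    diff = expected - actual
    if diff > b:                 # too much queued in the network: back off
        return cwnd - 1
    if diff < a:                 # pipe not yet full: probe upwards
        return cwnd + 1
    return cwnd                  # inside the [a, b] band: hold steady
```

With no queueing (rtt equal to base_rtt) the window grows; under heavy queueing it shrinks, so the source settles near the knee rather than filling buffers until loss.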
26
Other sender-side solutions
  • TCP Westwood
  • TCP Veno: attempts to solve the wireless loss
    problem by distinguishing wireless loss from
    network congestion-related loss. Does not address
    problems with a reliable wireless link
  • TCP Vegas / TCP FAST
  • attempts to solve network congestion by limiting
    the window based on measurements of round trip
    delay
  • MAY work over wireless, but not proven
  • Needs to adapt the alpha parameter to the
    particular characteristics of the network. How?

27
Transmission Control Protocol
  • Transmission Control Protocol (TCP)
  • congestion window
  • Additive Increase / Multiplicative Decrease

CWND: AIMD by the sender
AWND: advertised by the receiver
window = min(CWND, AWND)
A number of recent proposals use AWND for control
  • Spring, Cheshire, Berryman, Vahsaranaman,
    Anderson & Bershad 2000

28
CLAMP: receiver-based flow control
29
Motivation
  • A queue at the access router. Why?
  • Maximize access link utilization (bandwidth is
    precious)
  • Transient periods: want a queue to give sources
    time to compensate
  • Sudden increase in capacity
  • AQM (e.g. RED) requires a queue
30
Fluid Flow Analysis
(Diagram: queue q(t) with service rate C(t), fed
by flows at rates R1, ..., Rk with propagation
delays d1, ..., dk)
  • Derived system equations

CLAMP works with average quantities
31
The Algorithm: AP (Router) Agent
  • For a packet that departs at time t,
  • sample the queue size q(t) and calculate the
    price p(q)
  • insert it into the IP header (as an IP option)

(Figure: price p(q) against queue size q; p is
zero up to q = a/b, then rises with slope b)
32
The Algorithm: User Agent
  • Target change in window (Δt = inter-packet
    arrival time)
  • d is required for stability of the fluid model
  • but does not require an accurate estimate of the
    round trip time
  • A new estimate is obtained by running a moving
    average

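The CLAMP equations themselves are in the slide images, so the following is an assumed reconstruction, not the talk's exact algorithm: a price function matching the pictured curve (zero until the queue reaches a/b, then linear with slope b), plus the moving-average smoothing the slide mentions. The smoothing gain is my assumption.

```python
def price(q, a, b):
    """Assumed router price from the pictured curve: zero until the
    queue reaches a/b, then linear with slope b.  (Inferred shape; the
    talk's formula is in a slide image.)"""
    return max(0.0, b * q - a)

def smooth(estimate, sample, gain=0.1):
    """'Run a moving average': exponentially weighted smoothing of the
    receiver's estimate.  The gain of 0.1 is an assumed value."""
    return (1.0 - gain) * estimate + gain * sample
```

In this sketch the AP agent would stamp `price(q(t), a, b)` into the IP option of each departing packet, and the user agent would feed successive samples through `smooth` before adjusting AWND.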
33
Simulation results over a wireless channel
  • Two scenarios
  • short and long flows sharing a link
  • four long flows with channel-state-aware
    scheduling
  • Loss model
  • packets are broken into frames of fixed duration
    (3 ms)
  • packet size (in bytes) varies with the data rate
  • corrupted frames are retransmitted
  • after 3 attempts, the packet is lost (the
    transport layer retransmits)

34
Channel Model
  • Jakes fader
  • 8 generators
  • Doppler frequencies 0.6 Hz (mobile speed 0.1
    m/s) to 60 Hz (10 m/s)
  • Rate matching based on SNR at the start of the
    frame
  • B = bandwidth
  • m = fade margin
  • A frame error is declared if the rate is too
    high for the SNR at the end of the frame
  • Parameter m is adaptively set to achieve a
    packet error rate target
  • The margin allows channel quality to drop during
    the frame

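A Jakes-style fader with 8 generators can be sketched as a sum of sinusoids with random arrival angles and phases. This is a generic sum-of-sinusoids sketch under those assumptions, not the talk's exact generator:

```python
import math
import random

def jakes_envelope(t, f_d, n_osc=8, seed=0):
    """Fading envelope at time t (seconds) for maximum Doppler shift
    f_d (Hz), summing n_osc sinusoids with fixed random arrival angles
    and phases.  Re-seeding per call keeps the same set of angles and
    phases, so the sample path is consistent across t."""
    rng = random.Random(seed)
    i_sum = q_sum = 0.0
    for _ in range(n_osc):
        theta = rng.uniform(0.0, 2.0 * math.pi)  # angle of arrival
        phi = rng.uniform(0.0, 2.0 * math.pi)    # initial phase
        arg = 2.0 * math.pi * f_d * math.cos(theta) * t + phi
        i_sum += math.cos(arg)
        q_sum += math.sin(arg)
    scale = math.sqrt(n_osc)
    return math.hypot(i_sum / scale, q_sum / scale)
```

At f_d = 60 Hz (10 m/s) the envelope decorrelates over tens of milliseconds, which is why the fade margin m on the slide must cover the quality drop within a 3 ms frame.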
35
Scenario 1: short and long flows
  • CLAMP provides a mechanism for setting the
    average queue size
  • i.e. for tuning the queueing to the wireless
    channel conditions
  • The amount of buffering needed depends on the
    rate of channel variation
  • The buffer empties when the channel is better
    than average
  • The benefit also depends on propagation delays
  • if the delay is short, the buffer refills as
    soon as it empties
  • but the AP cannot know propagation delays
  • Short and long flows have different buffering
    requirements
  • A long flow would like a bigger queue for maximum
    throughput
  • A short flow gets better throughput if the round
    trip time is reduced
  • Measure the trade-off between the needs of short
    and long flows as mean buffer occupancy changes
  • CLAMP can tune the target queue size
  • TCP can only do this by varying the buffer size

36
Simulation topology: short and long flows
  • 1 long flow, 25 short flows (5 packets each, of
    500 bytes)
  • Random start times
  • Short flows may overlap
  • The duration of a short flow depends on its rate

37
Simulation results
  • Propagation delays: short flows 25 ms, long
    flow 28 ms
  • Typical inter-city delay
  • Packet error rate of

38
Simulation topology: scheduling
  • Shared wireless access point
  • Statistically identical channels
  • adaptive rate modulation to meet a packet error
    rate target
  • Channel-state-aware scheduling
  • Varying propagation delays: 2.5, 40, 120 and
    280 ms
39
Simulation Results
Packet error rate of
  • CLAMP vs. TCP

40
Conclusions regarding CLAMP
  • Demonstrated the need for significant queueing in
    wireless networks
  • Two reasons for this
  • a queue is needed if a flow has a large RTT and
    variable packet transmission times at the AP
  • a queue provides multi-user diversity to a
    scheduler
  • CLAMP improves the throughput of short flows
  • Small average queue, but seldom empty
  • Much better mice/elephant throughput tradeoff
    curve
  • Since the sender side is TCP Reno, it needs a
    reliable link layer
  • Allows fairness between flows sharing a buffer
  • Compatible with existing TCP sending protocols

41
TCP over 802.11
  • The majority of traffic on WLANs consists of
    applications carried over TCP.
  • What is the impact of TCP's AIMD?
  • Experimental setup
  • IPerf client sending to 3 wireless servers
  • Ethereal capturing packets at the client

(Diagram: client connected to the AP by an
Ethernet cable; wireless connections from the AP
to the servers)
42
TCP window fixed at 16 KByte
(Figure: throughput in Bytes/sec)
43
TCP window fixed at 64 KBytes
(Figure: throughput in Bytes/sec)
44
TCP over 802.11
  • Performance models for TCP throughput over
    802.11 are needed.
  • Our scenario is below

45
TCP over 802.11
  • We make assumptions so that the behaviour of TCP
    is simplified.
  • We assume
  • the advertised window (AWND) is fixed at a
    relatively small value.
  • no buffer overflow.
  • end-to-end propagation delays across the wired
    part of the network are short enough to be
    ignored.

46
Throughput analysis for persistent TCP sources
  • Our model predicts TCP throughput for
  • one or more persistent upstream flows or
  • one or more persistent downstream flows.
  • We restrict attention to the case when there is
    one TCP flow per station, but this can be
    generalized.

47
Outline of analysis
  • Assume the AP transmit buffer never empties.
  • Assume an equilibrium state is reached:
  • rate of successful packets from the AP =
    combined rate of successful packets from the
    stations.

(Diagram: TCP data packets flow from the AP; TCP
ACK packets flow from the stations)
48
Outline of analysis (ctd)
  • τ and β are determined using ideas about
    collision probability from Bianchi (JSAC 2000).

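Bianchi's collision-probability machinery reduces to a fixed point of two coupled equations. A minimal sketch of that saturation fixed point, with assumed 802.11b-style parameters W = 32 and m = 5 (the talk's TCP-specific quantities would be built on top of this):

```python
def bianchi_fixed_point(n, W=32, m=5, iters=500):
    """Saturation fixed point from Bianchi (JSAC 2000) for n stations:
    tau = probability a station transmits in a random slot,
    p   = conditional collision probability.
    Solved by damped iteration of the two coupled equations."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = (2.0 * (1.0 - 2.0 * p)
                   / ((1.0 - 2.0 * p) * (W + 1)
                      + p * W * (1.0 - (2.0 * p) ** m)))
        tau = 0.5 * tau + 0.5 * new_tau  # damping aids convergence
    p = 1.0 - (1.0 - tau) ** (n - 1)     # make the returned pair consistent
    return tau, p
```

From tau and p one can build slot-time averages and hence throughput, which is the route the persistent-flow model on the next slide takes.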
49
Throughput results: upstream flows (802.11b
system, AWND = 17 packets, l_data = 8000 bits)
50
Queueing analysis for non-persistent TCP sources
  • Finite population of non-persistent TCP sources:
    sources alternate between active and idle
    periods.
  • Develop an analytical model which can predict the
    state probabilities of the number of active
    sources.
  • We apply a CQN (closed queueing network) that was
    used by Berger and Kogan (2000) to model a
    generic bottleneck link shared by N TCP sources.
  • Miorandi, Kherani & Altman (2004) use a
    generalized processor sharing queue.

(Diagram: an egalitarian processor sharing (PS)
queue models the shared wireless link and holds
the active sources (transferring a file); an
infinite server (IS) queue holds the idle sources.)
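For such a PS + IS closed network, the standard BCMP product form gives the state probabilities in closed form. A sketch under assumed unit visit ratios, with PS service rate mu and per-source think rate lam (the parameter names are mine, not the talk's):

```python
from math import factorial

def active_source_probs(N, mu, lam):
    """Stationary P(n active sources), n = 0..N, for a closed network
    of one egalitarian PS queue (service rate mu) and an IS think
    stage (rate lam per idle source).  BCMP product form:
        pi(n) proportional to (1/mu)^n * (1/lam)^(N-n) / (N-n)!
    """
    w = [(1.0 / mu) ** n * (1.0 / lam) ** (N - n) / factorial(N - n)
         for n in range(N + 1)]
    g = sum(w)  # normalization constant
    return [x / g for x in w]
```

As a sanity check, making the link fast relative to the think times (large mu) pushes the probability mass towards few simultaneously active sources, as one would expect.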
51
State probability results: traffic scenario 2
52
Conclusions
  • Survey of two important control protocols: 802.11
    and TCP
  • Control-theoretic issues associated with each
  • Weaknesses identified, and directions for further
    research suggested
  • Described a novel receiver-side flow control:
    CLAMP
  • better mice/elephant throughput tradeoff
  • higher capacity and less undesirable interaction
    with the MAC
  • Described a novel throughput analysis for
    multiple persistent TCP flows over an 802.11
    WLAN, when AWNDs are constrained such that there
    is zero buffer overflow
  • Future work
  • implement CLAMP on a WLAN testbed
  • generalize to multihop networks
  • test models for 802.11 on the testbed
  • new models for 802.15
  • multipacket reception
