802.17 presentations
1
802.17 presentations
  • Prepared for 802.17, March 2002
  • David V. James, PhD
    Chief Architect
    3180 South Ct
    Palo Alto, CA 94306
    Tel: 1.650.494.0926
    Cell: 1.650.954.6906
    Fax: 1.360.242.5508
    Email: dvj_at_alum.mit.edu

2
Unidirectional flooding
(Diagram: stations S1-S5 flooding frames on ringlet unidirectional0, on ringlet unidirectional1, and on the combined unidirectional view)
3
Bidirectional flooding
(Diagram: stations S1-S5 flooding frames on ringlets bidirectional1 and bidirectional2, and on the combined bidirectional view)
4
Frame format summary
(Diagram: frame formats for local, remote, and fairness/idle frames; each has base and ext header sections, with fields including ttl, ctrl, sourceStationID (16 bits), destinationAddress, type, HEC, serviceDataUnit/payload, and FCS)
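The base header fields on this slide (ttl, ctrl, sourceStationID) can be sketched as a C struct. The field widths below are illustrative assumptions, since the slide gives field order but marks only sourceStationID as 16 bits, and the per-hop TTL rule shown is a common convention rather than text from the deck.

```c
#include <stdbool.h>
#include <stdint.h>

/* Base header sketch: field order from the slide, widths assumed
 * (only the 16-bit sourceStationID width appears on the slide). */
typedef struct {
    uint8_t  ttl;              /* time-to-live, decremented per hop */
    uint8_t  ctrl;             /* control bits */
    uint16_t sourceStationID;  /* 16-bit source station identifier */
} BaseHeader;

/* Assumed per-hop rule: decrement ttl, forward only while nonzero. */
bool hopAndForward(BaseHeader *h) {
    if (h->ttl == 0)
        return false;          /* already expired: discard */
    h->ttl -= 1;
    return h->ttl != 0;        /* stop once ttl reaches zero */
}
```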
5
Upper class guarantees
6
Frame format summary
  • ClassA0: Clause 6 downstream shaper
    Should throttle STQ as well as add
    Eliminate the redundant clause 9 code
  • ClassA1: Throttle the classB add as the STQ fills
    Explicit upstream STQ-depth communication?
  • ClassB: Explicit and timely precedence over classC??

7
Illustration context
8
ClassA0 concerns
9
ClassA0 problem illustration
Leadup conditions
classA1 + classB-CIR
classA0
a) high load
b) new load
c) small STQ
Failure scenario
d) full line rate
e) blocked A0
10
ClassA0 problem solution
Old Clause 6 shaper
classA0
a) 10% reserved
b) 10% reserved
c) hope & pray
New Clause 6 shaper
d) sustained 10%
e) live in peace
11
ClassA1 concerns
12
ClassA1 problem illustration
Leadup conditions
classA1 + classB-CIR
classA1
a) high load
b) new load
Failure scenario
c) classA1 blocked
13
ClassA1 problem solution
classA1 + classB-CIR
Current specification
classA1
a) high load
b) huge load
classA (only!)
New specification
c) depth warning
14
Current bandwidth limits
15
Proposal bandwidth limits
16
Depth feedback control
17
Queue depth ranges
18
Queue depth feedback
19
Normalized valuations
20
DSP perspectives
(Plot: filter step response localRate·(1 − e^(−t/T)) rising toward full scale 1.0 (0x3FFFFFFF) over time, with time constant T marked)
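The curve on this slide, localRate·(1 − e^(−t/T)), is the step response of a first-order low-pass filter. A minimal discrete-time sketch (my own illustration, not from the deck), with per-sample gain alpha = 1 − e^(−dt/T):

```c
#include <math.h>

/* One-pole low-pass update: state moves toward input by fraction alpha.
 * Its step response follows 1 - exp(-t/T) when alpha = 1 - exp(-dt/T). */
double lowpassStep(double state, double input, double alpha) {
    return state + alpha * (input - state);
}
```

After n = T/dt updates the state reaches 1 − e^(−1), about 63% of the input, matching the time-constant marker on the slide.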
21
Simplified conservative
22
CRC calculations
23
CRC processing
  • Store-and-forward/cut-through agnostic
  • Invalid data is effectively discarded
    • store-and-forward discards
    • cut-through stomps the CRC
  • Maximize error-logging accuracy
    • separate header and data CRCs
    • most corruptions hit the data

24
Separate header and data CRCs
(Diagram: frame layout as header, headerCRC, payload, payloadCRC)
25
Cut-through CRCs
(Diagram: data and CRC fields relayed through two node cores, checking crcA and producing crcB)
  • Corrupted packet remains corrupted
  • Error logged when first detected
  • if (crcA != crc) {
        if (crcA != crcSTOMP) errorCount++;
        crcB = crcSTOMP;
    }
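The cut-through stomp rule on this slide can be sketched as a small relay function. CRC_STOMP, the stats struct, and the decision to log only when the incoming CRC is not already the stomp value are illustrative assumptions based on "error logged when first detected".

```c
#include <stdint.h>

#define CRC_STOMP 0xDEADBEEFu  /* hypothetical reserved stomp value */

typedef struct { unsigned errorCount; } NodeStats;

/* Cut-through relay cannot discard a frame already in flight, so a bad
 * CRC is overwritten ("stomped") with a reserved value; downstream
 * nodes see the stomp and skip logging, so each error counts once. */
uint32_t relayCrc(uint32_t crcA, uint32_t crcComputed, NodeStats *stats) {
    if (crcA != crcComputed) {          /* frame is corrupted */
        if (crcA != CRC_STOMP)
            stats->errorCount++;        /* log only at first detection */
        return CRC_STOMP;               /* packet remains marked corrupt */
    }
    return crcA;                        /* good frame passes unchanged */
}
```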

26
Distinct CRCs reduce discards
  • Discard the corrupted data
  • Discard the corrupted packet

27
End-to-end CRC protected TTL
28
CRC equation examples
a = c00^d00; b = c01^d01; c = c02^d02; d = c03^d03; // ... s = c14^d14; t = c15^d15
c00 = a^e^g^h^m;  c01 = b^f^h^j^n;  c02 = c^g^j^k^p;  c03 = d^h^k^m^r;
c04 = e^j^m^n^s;  c05 = f^k^n^p^t;  c06 = a^e^h^p^r;  c07 = b^f^j^r^s
29
CRC processing
  • Store-and-forward/cut-through agnostic
  • Invalid data is effectively discarded
    • store-and-forward discards
    • cut-through stomps the CRC
  • Maximize error-logging accuracy
    • separate header and data CRCs
    • most corruptions hit the data

30
Discovery Sequencing
31
Supported topologies
32
Topology collection

  • Send macAddress with ttl=255
  • Store into table with index = 256 - ttl
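These two bullets can be sketched directly: each station launches its macAddress with ttl=255, and each receiver files the address at index 256 − ttl, which equals the sender's hop distance. Table size and entry layout below are illustrative.

```c
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 256

typedef struct {
    uint8_t macAddress[6];
    int     valid;
} TopologyEntry;

/* entry[k] holds the station k hops upstream (index = 256 - ttl). */
typedef struct {
    TopologyEntry entry[RING_SLOTS];
} TopologyTable;

void recordDiscovery(TopologyTable *t, uint8_t ttl, const uint8_t mac[6]) {
    if (ttl == 0)
        return;                      /* expired frame: ignore */
    unsigned index = 256u - ttl;     /* hop distance from sender */
    memcpy(t->entry[index].macAddress, mac, 6);
    t->entry[index].valid = 1;
}
```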

33
Topology Discovery

  • Retransmission after change detection
  • Retransmission at periodic intervals

34
Link failures: wrap / unwrap
35
Link failures: split / join
36
Link failures: subtract / add
37
Discovery properties
  • During topology changes, chaos is inevitable
  • Cannot distinguish link failure from topology
    change
  • Periodicity with event-invoked trigger
  • Periodic transmission to neighbor
  • broadcast relies on DSID, which is unknown
  • broadcast implies owner, which is unknown
  • cumulative transmission is efficient and robust
  • Common features, sent every millisecond
  • Heartbeat
  • Discovery
  • Flow control

38
Setting bandwidths
39
Negotiated bandwidths
40
Bandwidth negotiations
  • Sum of bandwidths < link capacity
  • Independent accounts
  • class-A0 and class-A1 rates
  • class-B rates
  • class-C weightings
  • Accounts are distance dependent
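The first bullet above, that the provisioned bandwidths must sum to less than the link capacity, reduces to a simple admission check. Function name and units below are illustrative.

```c
#include <stdbool.h>

/* Admission rule from the slide: provisioned class-A/B rates must sum
 * to strictly less than link capacity (class-C shares what remains). */
bool ratesAdmissible(const double rates[], int n, double linkCapacity) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += rates[i];
    return sum < linkCapacity;
}
```

Because accounts are distance dependent, this check would be applied per link along each flow's span, not once ring-wide.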

41
Distance-dependent accounts
  • BW accounts have spatial dependencies
  • Available BW has monotonic decrease

42
Allocation update sequence
1) Sample cumulative accounts
2) Change
3) Distribute revised accounts
43
Concurrent conflicts
in.source < me.source
in.source > me.source
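The two cases on this slide (in.source below or above me.source), together with the next slide's note that the MAC identifier suffices for tie resolution, might be sketched as a byte-wise comparison; which side wins is not stated here, so the sketch only exposes the ordering.

```c
#include <stdint.h>
#include <string.h>

/* Deterministic ordering of two 48-bit MAC identifiers, resolving
 * concurrent, conflicting allocation updates (in.source vs. me.source
 * on the slide). Returns <0, 0, or >0, like memcmp. */
int compareSource(const uint8_t in[6], const uint8_t me[6]) {
    return memcmp(in, me, 6);
}
```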
44
BW allocation conclusions
  • BW allocation is necessary to ensure consistency
  • BW allocation should include spatial reuse
    (worst-case hop-count OK for simple stations)
  • No central tables are required
  • MAC-identifier suffices for tie resolution
  • Some error-recovery details may be necessary
  • We need philosophical, not technical, agreement
    (sufficient detail exists for first-round inclusion)

45
Flow control
46
Opposing arbitration
(Diagram: nodeA, nodeB, nodeC with packet and control arrows running in opposite directions)
  • Data packets flow in one direction
  • Arbitration control flows in the other

47
Arbitration classes
(Table: arbitration classes)
  • Class-A (guaranteed): provisioned bandwidth, low latency
    • A0 reactive
    • A1 proactive
  • Class-B: provisioned bandwidth, bounded latency
  • Class-C: unprovisioned or unused provisioned
48
Proactive class-A0 partitions
(Diagram: nodeA, nodeB, nodeC)
  • Data packets go source-to-destination
  • Residue returns destination-to-source to provide
    subsistence for transmissions

49
Reactive class-A1 control
nodeA
nodeB
  • Transmission of packets causes
  • Backup of passBC FIFO that
  • Returns flow-control information that
  • Provides consumable idle packets

50
Arbitration components
(Diagram: arbitration datapath: client-side queueA/gateA, queueB/gateB, queueC/gateC feeding the MAC's policer and stage; transitA and transitBC FIFOs with gateD, idles, and depthBC feedback)
51
Small-to-large transmitBC
(Diagram: client/MAC datapath: policer, stage, transitA, gateD, transitBC, idles)
1) Small > proactive classA1
52
MAC-Client interface signals
(Diagram: MAC-client interface: queueA/gateA, queueB/gateB, queueC/gateC, with frames plus rangeA, rangeB, rangeC signals between client and MAC)
53
Class-A precedence
a) Stall B/C transmissions
b) Stall B/C retransmissions
(Diagram: policer, stage, transitA, gateD, transitBC, idles, depthBC)
if (congested(depthBC0, depthBC1))
    rate <= ratedA0 + ratedA1;
else
    rate <= ratedA0;
54
Class-A send-queue gating
(Diagram: queueA/gateA, queueB/gateB, queueC/gateC between client and MAC, with the class-A gate highlighted)
55
Class-B send-queue gating
(Diagram: queueA/gateA, queueB/gateB, queueC/gateC between client and MAC, with the class-B gate highlighted)
56
Class-C send-queue gating
(Diagram: queueA/gateA, queueB/gateB, queueC/gateC between client and MAC, with the class-C gate highlighted)
57
Class-C principles
58
Class-C fairness counts
(Diagram: leadingCount and trackingCount fairness counters)
59
Class-A flow control(proactive and reactive)
60
Class-A flow control
  • Proactive:
    Minimal (nonexistent?) passBC transit buffer
    Less available bandwidth
    Each station maintains constant classAp traffic
  • Reactive:
    Significant passBC transit buffer
    Full bandwidth utilization
    Each station responds/regenerates throttle messages
  • Interoperable?
    This is a bandwidth vs memory tradeoff

61
Proactive class-A compatibility options
(Diagram: nodeA and nodeB with a reactive node between them)
  • Reactive node trickles class-A bandwidth
  • Reactive node recycles class-A bandwidth:
    class-A > class-A, thus preserving BW

62
Reactive class-A compatibility
(Diagram: flow-control indications (3) and (4) passing from nodeB upstream to nodeA)
  • Flow control passes upstream
  • Proactive stations pass these indications

63
Limits of scalability
  • Geosynchronous
  • Terrestrial
  • The metro area

64
Lessons of the past
  • Flow control mandates 2-out-of-3:
    • Low latency transmissions
    • Fair bandwidth allocation
    • High bandwidth utilization
  • Feedback control systems:
    • Low latency signaling
    • Control can pass class-B/C packets
    • Separate class-A queue is utilized
  • Other observations:
    • Local control > global perversions
    • Fairness is inherently approximate
    • Strange beating sequences DO OCCUR

65
Arbitration summary
  • Dual levels:
    • Class-A, pre-emptive low latency
    • Class-B, less latency sensitive
  • Jumbo frames:
    • Affect asynchronous latencies
    • NO IMPACT on synchronous latency
  • Cut-through vs store-and-forward:
    • Either should be allowed
    • Light-load latency DOES matter

66
Time-of-day synchronization
67
Time-of-day master and slaves
68
This is not bit-clock synchronization!
f0 = F(1 + p0)
f1 = F(1 + p1)
f2 = F(1 + p2)
f3 = F(1 + p3)
69
This is time-of-day synchronization!
70
Precise duplex-link synchronization