Design of a Diversified Router: Lookup Block with All Associated Data in SRAM

1
Design of a Diversified Router: Lookup Block with All Associated Data in SRAM
John DeHart
jdd@arl.wustl.edu
http://www.arl.wustl.edu/arl
2
Revision History
  • 5/23/06 (JDD)
  • Changes for all Associated Data in SRAM
  • 5/25/06 (JDD)
  • Put Port back in MR Results

3
Issues to investigate
  • Questions/Issues that came up 5/16/06
  • Negation bit
  • Match everything but this key
  • Exclusive/Non-exclusive Filters
  • GM filters for monitoring (makes a copy of
    packet)
  • Protocol field trick to shorten GM filter Key
  • 2 bits to define UDP, TCP, Other
  • Maybe even expand it to 4 bits.
  • For Other, full 8 bit protocol field overlaps a
    TCP/UDP Port field
  • 76 Bytes as minimum size frame for judging
    performance
  • 64 Byte minimum Ethernet Frame
  • 96 bit (12 byte) Ethernet inter-frame spacing.
  • To increase the lookup rate we might need to move
    one of the LC Associated Data stores to SRAM
  • Probably keep them both in TCAM AD for November
    and then look at modifying it in the next phase
    of Lookup block development.
  • Multicast
  • Separate Multicast DB
  • MHL on Multicast DB yielding 8 32-bit AD Results
  • Actually 29 useful bits per Result
  • Maximum of 232 bits

4
Overview
  • These slides are as much a definition of what is
    NOT in the Lookup Block as they are what is.
  • In defining what is not in the Lookup Block I am
    putting some requirements on other blocks.
  • These requirements have to do with where fields
    are added to frame headers.
  • Not everything can or needs to be kept in the
    TCAM.
  • There are also
  • Constants
  • Fields that have to be calculated for each frame
  • Fields that are configurable per Blade or per
    physical interface.
  • Etc.
  • Also, there is a lot of information about the
    TCAM here.
  • And, finally, a design for the Lookup Block(s).

5
Architecture Review
  • First let's review the architecture of the
    Promentum ATCA-7010 card which will be used to
    implement our LC and NP Blades
  • Two Intel IXP2850 NPs
  • 1.4 GHz Core
  • 700 MHz Xscale
  • Each NPU has
  • 3x256MB RDRAM, 533 MHz
  • 4 QDR II SRAM Channels
  • Channels 1, 2 and 3 populated with 8MB each
    running at 200 MHz
  • Channel 0
  • TCAM with an associated ZBT SRAM
  • 2 MB of QDR-II SRAM for EACH NPU
  • 16KB of Scratch Memory
  • 16 Microengines
  • Instruction Store 8K 40-bit wide instructions
  • Local Memory 640 32-bit words
  • TCAM Network Search Engine (NSE) on SRAM channel
    0
  • Each NPU has a separate LA-1 Interface
  • Part Number IDT75K72234
  • 18Mb TCAM

6
NP Blades
7
TCAM HW Details
  • CAM Size
  • Data 256K 72-bit entries
  • Organized into Segments.
  • Mask 256K 72-bit entries
  • Segments
  • Each Segment is 8k 72-bit entries
  • 32 Segments
  • Segments are not shared between Databases.
  • Minimum database size is therefore 8K 72-bit
    entries.
  • Databases wider than 72-bits use sequential
    entries in a segment to make up longer entries
  • 36b DB has 16K entries per segment
  • 72b DB has 8K entries per segment
  • 144b DB has 4K entries per segment
  • 288b DB has 2K entries per segment
  • 576b DB has 1K entries per segment
  • Segments can be dynamically added to a Database
    as it grows
  • More on this feature in a future issue of the IDT
    User Manual
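The entries-per-segment numbers above follow directly from the 8K x 72b segment geometry; a quick sketch of the arithmetic (illustrative Python, not vendor code):

```python
# Each TCAM segment holds 8K 72-bit core entries; wider databases consume
# consecutive core entries, so capacity scales inversely with core size.
SEGMENT_CORE_ENTRIES = 8 * 1024   # 8K 72-bit entries per segment
CORE_ENTRY_BITS = 72

def entries_per_segment(core_size_bits):
    """Database entries that fit in one segment for a given core size."""
    if core_size_bits == 36:
        # Two 36-bit entries pack into each 72-bit core entry.
        return SEGMENT_CORE_ENTRIES * 2
    # Wider entries span core_size/72 sequential core entries.
    return SEGMENT_CORE_ENTRIES // (core_size_bits // CORE_ENTRY_BITS)

for core in (36, 72, 144, 288, 576):
    print(f"{core}b DB: {entries_per_segment(core) // 1024}K entries/segment")
```

The 36b case doubles because two 36-bit entries pack into one 72-bit core entry.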

8
TCAM HW Details
  • Number of Databases available 16
  • Database Core Sizes 36b, 72b, 144b, 288b, 576b
  • Core size implies how many CAM core entries are
    used per DB entry
  • Key/Entry size
  • Can be different for each Database.
  • Key/Entry size ≤ Database Core Size
  • Key/Entry size tells us how many memory access
    cycles it will take to get the Key into the TCAM
    across the 16-bit wide QDR II SRAM interface.
  • Result Type
  • Absolute Index: relative to beginning of CAM
  • Database Relative Index: relative to beginning of
    Database
  • Memory Pointer: translation based on database
    configuration registers
  • Base address
  • Result size
  • TCAM Associated Data of width 32, 64 or 128 bits

9
TCAM HW Details
  • Memory Usage
  • Results can be stored in TCAM Associated Data
    SRAM or IXP SRAM.
  • TCAM Associated Data
  • 512K x 36 bit ZBT SRAM (4 bits of parity)
  • Supports 256K 64-bit Results
  • If used for Ingress and Egress then 128K in each
    direction
  • Supports 128K 128-bit Results
  • If used for Ingress and Egress then 64K in each
    direction
  • Results deposited directly in Results Mailbox
  • IXP QDR II SRAM Channel
  • 2 x 2Mx18 (effective 4M x 18b)
  • 4 times as much as the TCAM ZBT SRAM.
  • Supports 1024K 64-bit Results
  • If used for Ingress and Egress then 512K in each
    direction
  • Supports 512K 128-bit Results
  • If used for Ingress and Egress then 256K in each
    direction
  • Read Results Mailbox to check Hit bit and to get
    Index or Memory Pointer
  • Then read SRAM for actual Result.
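The result-capacity figures on this slide are simple capacity divisions; a checkable sketch (my own illustrative Python; the usable data bits per word, 32 of 36 for the ZBT SRAM and 16 of 18 for the QDR SRAM, are assumptions consistent with the slide's totals):

```python
# Capacity checks for Result storage, from the sizes on this slide.
ZBT_WORDS = 512 * 1024          # 512K x 36b TCAM Associated Data SRAM (4b parity)
IXP_WORDS = 4 * 1024 * 1024     # 2 x 2Mx18 -> effective 4M x 18b QDR II SRAM

def result_slots(words, data_bits_per_word, result_bits):
    """How many fixed-size results fit in a memory of the given geometry."""
    return (words * data_bits_per_word) // result_bits

print(result_slots(ZBT_WORDS, 32, 64) // 1024)    # 256  (K 64-bit results)
print(result_slots(ZBT_WORDS, 32, 128) // 1024)   # 128  (K 128-bit results)
print(result_slots(IXP_WORDS, 16, 64) // 1024)    # 1024 (K 64-bit results)
print(result_slots(IXP_WORDS, 16, 128) // 1024)   # 512  (K 128-bit results)
```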

10
TCAM HW Details
  • Lookup commands supported
  • Direct: Command is encoded in 2b Instruction
    field on Address bus
  • Indirect: Instruction field is 11b; Command
    encoded on Data bus.
  • Lookup (Direct)
  • 1 DB, 1 Result
  • Multi-Hit Lookup (Direct)
  • 1 DB, ≤ 8 Results
  • Simultaneous Multi-Database Lookup (Direct)
  • 2 DB, 1 Result Each
  • DBs must be consecutive!
  • Multi-Database Lookup (Indirect)
  • ≤ 8 DB, 1 Result Each
  • Simultaneous Multi-Database Lookup (Indirect)
  • 2 DB, 1 Result Each
  • Functionally same as Direct version but key
    presentation and DB selection are different.
  • DBs need not be consecutive.
  • Re-Issue Multi-Database Lookup (Indirect)
  • ≤ 8 DB, 1 Result Each
  • Search Key can be modified for each DB being
    searched.

11
TCAM HW Details
  • Mask Registers Notes (mostly for reference)
  • When are these used?
  • I think we will need one of these for each
    database that is to be used in a Multi Database
    Lookup (MDL), where the database entries do not
    actually use all the bits in the corresponding
    core size.
  • For example, a 32-bit lookup would have a core
    size of 36 bits and so would need a GMR
    configured as 0xFFFFFFFF0 to mask off the low
    order 4 bits when it is used in a MDL where there
    are larger databases also being searched.
  • 64 72-bit Global Mask Registers (GMR)
  • Can be combined for different database sizes
  • 36-bit databases have access to 31 out of a total
    of 64 GMRs
  • A bit in the configuration for a database selects
    which half of the GMRs can be used
  • A field in each lookup command selects which
    specific GMR is to be used with the lookup key.
  • Value of 0x1F (31) is used in command to indicate
    no GMR is to be used. Hence, 36-bit lookups
    cannot use all 32 GMRs in its half.
  • 72-bit databases have access to 31 out of a total
    of 64 GMRs
  • A bit in the configuration for a database selects
    which half of the GMRs can be used
  • A field in each lookup command selects which
    specific GMR is to be used with the lookup key.
  • Value of 0x1F (31) is used in command to indicate
    no GMR is to be used. Hence, 72-bit lookups
    cannot use all 32 GMRs in its half.
  • 144-bit lookups have 32 GMRs available to them.
  • 288-bit lookups have 16 GMRs available to them.
  • 576-bit lookups have 8 GMRs available to them.
  • Each lookup command can have one GMR associated
    with it.
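The GMR for the 32-bit-key-in-36-bit-core example above can be generated programmatically; an illustrative sketch (my own, not IDT's API):

```python
def gmr_mask(key_bits, core_bits):
    """Global mask with key_bits high-order ones out of a core_bits-wide entry."""
    return ((1 << key_bits) - 1) << (core_bits - key_bits)

# 32-bit lookup in a 36-bit core: mask off the low-order 4 bits.
print(f"0x{gmr_mask(32, 36):09X}")   # 0xFFFFFFFF0
```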

12
TCAM Usage Notes
  • Database Types are defined and managed by the IMS
    Software.
  • The Type of the Database is defined in the
    software only.
  • It tells the software how to define and use masks
    and priorities (weights).
  • Allows the software to provide to the user a more
    flexible way to specify entries.
  • Types of Databases
  • Longest Prefix Match (LPM)
  • Mask matches length of prefix
  • Exact Match (EM)
  • Mask matches full Entry size
  • Best/Range Match: what we typically call General
    Match.
  • Mask is completely general.
  • Priority
  • Priority within a database is done by order of
    the entries.
  • Exact Match should not need priority within the
    database since only one Entry should match a
    supplied Key.
  • LPM and Best/Range Match do use priority within
    the databases.
  • So, the order in which the entries are stored in
    these databases is important.
  • For LPM DBs we would want to group prefixes by
    length in the TCAM.
  • And this is almost certainly what the IDT
    software does.
  • Changing priorities on existing entries may cause
    us some problems.

13
TCAM Performance
  • Three Factors that affect performance
  • Lookup Size (Entry/Key)
  • Associated Data Width (Result)
  • CAM Core Lookup Rate
  • IXP/TCAM LA-1 Interface
  • 16 bits wide
  • 200 MHz QDR II SRAM Interface
  • Effectively 32 bits per clock tick
  • So getting the Key in is 32 bits/tick
  • Example: a 128b Key would take 4 ticks to get
    clocked into the TCAM.
  • Max of 50 M Lookups/sec
  • Table on next slide shows some of the performance
    numbers for some Sizes that are of interest to
    us.
  • What we'll see a little later is that in the
    worst case, we need a TOTAL Lookup rate of 12.5
    M/sec (6.25 M/sec on each LA-1 interface)
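The per-interface rates in the table that follows come from the key-transfer time across the LA-1 interface, capped by the CAM core's lookup rate for the given associated-data width; a sketch under those assumptions (illustrative Python):

```python
import math

CLOCK_MHZ = 200      # QDR II interface clock; effectively 32 key bits per tick
BITS_PER_TICK = 32

def single_la1_rate(key_bits, core_rate_m=50):
    """Max lookups/sec (in M) on one LA-1 interface: limited by the ticks
    needed to clock the key in, and capped by the CAM core lookup rate."""
    ticks = math.ceil(key_bits / BITS_PER_TICK)
    return min(CLOCK_MHZ / ticks, core_rate_m)

print(single_la1_rate(32))                            # 50 -- core-limited
print(round(single_la1_rate(72, core_rate_m=100)))    # 67 -- interface-limited
print(single_la1_rate(144, core_rate_m=100))          # 40.0
```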

14
TCAM Performance (Rates in M/sec)
| Lookup Size | LA-1 Words | Core Size | Assoc. Data | Single LA-1 Max Rate | Max Core Rate | Avg Shared Rate (each of 2 LA-1s) |
|---|---|---|---|---|---|---|
| 32 | 1 | 36 | 32 | 50 | 50 | 25 |
| | | | 64 | | 50 | 25 |
| | | | 128 | | 25 | 12.5 |
| 36 | 2 | 36 | 32 | 50 | 50 | 25 |
| | | | 64 | | 50 | 25 |
| | | | 128 | | 25 | 12.5 |
| 64 | 2 | 72 | 32 | 100 | 100 | 50 |
| | | | 64 | | 50 | 25 |
| | | | 128 | | 25 | 12.5 |
| 72 | 3 | 72 | 32 | 67 | 100 | 50 |
| | | | 64 | | 50 | 25 |
| | | | 128 | | 25 | 12.5 |
| 128 | 4 | 144 | 32 | 50 | 100 | 50 |
| | | | 64 | | 50 | 25 |
| | | | 128 | | 25 | 12.5 |
| 144 | 5 | 144 | 32 | 40 | 100 | 40 |
| | | | 64 | | 50 | 25 |
| | | | 128 | | 25 | 12.5 |
| 160 | 5 | 288 | 32 | 40 | 50 | 40 |
| | | | 64 | | 50 | 25 |
| | | | 128 | | 25 | 12.5 |
(Slide annotations: LC_Egress, LC_Ingress)
15
TCAM Software
  • Several software components exist, enough to be
    really confusing.
  • IDT Libraries
  • MicroEngine Libraries
  • NSE-QDR Data Plane Macro (DPM) API
  • Iipc.uc and Iipc.h
  • IIPC: Integrated IP Co-processor
  • Microengine Lookup Library (MLL)
  • IipcMll.uc
  • 5 slightly higher level macros than Iipc.uc
  • XScale
  • Lookup Management Library (LML)
  • Control Plane
  • Initialization Management and Search (IMS)
    Library
  • Simulation
  • NSE with Dual QDR Interfaces IDT75K234SLAM
  • Intel Libraries
  • TCAM Classifier Library
  • Microengine and XScale support for using TCAM.
  • Requires installation of MLL and LML.

16
Lookup Block
  • Three Lookup Blocks Needed
  • All the Lookup Blocks will use the TCAM
  • LC-Ingress
  • All Databases for Ingress will be Exact Match
  • LC-Egress
  • All Databases for Egress will be Exact Match
  • MR
  • There will probably be multiple versions of this
  • Shared
  • Dedicated
  • IPv4
  • MPLS
  • But let's think of it as one for now and focus on
    IPv4.
  • Discussion later on what combination of the three
    types of DB we might use.
  • Base functionality and code should be the same
    for all three
  • Sizes of Keys and Results will differ.
  • LC-Ingress and LC-Egress will share a TCAM
  • ARP on the LC might need/want to use the TCAM.
  • The aging properties of the TCAM might be very
    useful for ARP.

17
MR Lookup Block
(Block diagram: each NPU, MR (NPUA) and MR (NPUB), runs the pipeline blocks
Rx, DeMux, Parse, Lookup, Header Format, and Tx; the NPUs share the TCAM and
each has an XScale control processor.)
18
LC Lookup Block
(Block diagram: Ingress (NPUB) pipeline: Phy Int Rx, Key Extract, Lookup,
QM/Schd, Hdr Format, Switch Tx; Egress (NPUA) pipeline: Switch Rx, Key
Extract, Lookup, Rate Monitor, QM/Schd, Hdr Format, Phy Int Tx. Both NPUs
share the LC TCAM; each NPU has an XScale, one of which runs ARP. The RTM
physical interfaces sit on one side and the Switch fabric on the other.)
19
Lookup Block Requirements
  • Average Number of Packets per second required to
    handle?
  • Line Rate 10Gb/s
  • Assume an average IP Packet Size of 200 Bytes
    (1600 bits)
  • (10Gb/s)/(1600 bits/pkt) = 6.25 Mpkts/s
  • Ethernet Header of 14 Bytes
  • Average Frame Size of 214 Bytes (1712 bits)
  • (10Gb/s)/(1712 bits/pkt) = 5.841 Mpkts/s
  • Ethernet Inter-Frame Spacing 96 bits
  • Average Frame Size with Inter-Frame Spacing 1808
    bits
  • (10Gb/s)/(1808 bits/pkt) = 5.53 Mpkts/s
  • I'll use 6.25 Mpkts/s as a target.
  • Minimum Pkt size
  • Line Rate 10Gb/s
  • Minimum Ethernet Frame Size 64 Bytes (512 bits)
  • Ethernet Inter-Frame Spacing 96 bits (512 + 96 =
    608 bits)
  • (10Gb/s)/(608 bits/pkt) = 16.45 Mpkts/s
  • Max Core rate for LC Ingress 50 M Lookups/s
  • 16.45/50 = 32.9% of the core rate
  • Max Core rate for LC Egress 25 M Lookups/s
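The packet-rate arithmetic above, as a checkable sketch (illustrative Python):

```python
LINE_RATE_BPS = 10e9   # 10Gb/s line rate

def mpkts_per_sec(frame_bits):
    """Packets/sec (in M) for back-to-back frames of the given on-wire size."""
    return LINE_RATE_BPS / frame_bits / 1e6

print(round(mpkts_per_sec(200 * 8), 2))        # 6.25  (200B average IP packet)
print(round(mpkts_per_sec(214 * 8), 3))        # 5.841 (+14B Ethernet header)
print(round(mpkts_per_sec(214 * 8 + 96), 2))   # 5.53  (+96b inter-frame gap)
print(round(mpkts_per_sec(64 * 8 + 96), 2))    # 16.45 (64B min frame + gap)
```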

20
Lookup Block Requirements
  • LC Number of Lookups per second required
  • 1 Ingress and 1 Egress lookup required per packet
  • If we assume 6.25 MPkts/sec then we need 12.5 M
    Lookups/sec.
  • MR/NPU Number of Lookups per second required
  • 5 Gb/s per MR/NPU → 3.125 M Lookups/sec
  • Total of 6.25 M Lookups/sec
  • Total Number of Lookup Entries to be supported?
  • Dependent on Size of Entries
  • Size of Entries and Keys?
  • Dependent on type of Lookup MR, LC-Ingress,
    LC-Egress
  • Size of Results?
  • Dependent on type of Lookup MR, LC-Ingress,
    LC-Egress

21
Keys and Results for Ingress LC and Egress LC
  • Keys
  • Ingress (Link → Router)
  • What fields in the External Frame Formats
    uniquely identify the MetaLink?
  • First we have to identify the Substrate Link Type
  • Then we can Identify the Substrate Link and
    MetaLink
  • Egress (Router → Link)
  • What fields in the Internal Frame Format uniquely
    identify the MetaLink?
  • Results
  • We need to identify what fields are needed to
    build the appropriate frame headers.
  • The fields needed may consist of several parts
  • Constant fields
  • Ethertype in most cases
  • Calculated fields
  • Things like Checksums
  • Statically configured Fields that can be stored
    in Local Memory
  • Things like per physical interface or Blade
    Ethernet Src Addresses
  • ARP results for Ethernet DAddr on a Multi-Access
    link
  • Lookup Result from TCAM
  • Everything else

22
Field Sizes in Keys and Results
  • Field and Identifier sizes
  • MR id 16 bits (64K Meta Routers per Substrate
    Router)
  • MR ID = VLAN (defined locally on a Substrate
    Router)
  • Note We can probably shorten this to 12 bits
    since our switch only supports 4K VLANs which is
    12 bits.
  • MI id 16 bits (64K Meta Interfaces per Meta
    Router)
  • This seems like a lot. What level of flexibility
    do we need to support?
  • MLI 16 bits (64K Meta Links per Substrate
    Link)
  • This seems safe and should not be changed.
  • Port 4 bits (16 Physical Interfaces per Line
    Card)
  • Note I originally had this defined as 8 bits but
    since the RTM only supports 10 physical
    interfaces, 4 bits is enough. There were some
    places where the extra 4 bits pushed us to a
    larger size.
  • QID 20 bits (QM_ID concatenated with Queue_ID)
  • Queue_ID 17 bits (128K Queues per Queue
    Manager)
  • QM_ID 3 bits (8 Queue Managers per LC
    or PE.)
  • We probably can only support 4 QMs, which could
    be encoded in 2 bits.
  • (64 Q-Array Entries) / (16 CAM entries) = 4 QMs
    per SRAM Controller.
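The 20-bit QID above is a concatenation of QM_ID and Queue_ID; a packing sketch (my own illustration; placing QM_ID in the high bits is an assumption):

```python
QM_ID_BITS = 3       # 8 Queue Managers per LC or PE
QUEUE_ID_BITS = 17   # 128K Queues per Queue Manager

def make_qid(qm_id, queue_id):
    """Pack QM_ID into the high 3 bits and Queue_ID into the low 17 bits."""
    assert 0 <= qm_id < (1 << QM_ID_BITS)
    assert 0 <= queue_id < (1 << QUEUE_ID_BITS)
    return (qm_id << QUEUE_ID_BITS) | queue_id

def split_qid(qid):
    """Recover (QM_ID, Queue_ID) from a packed 20-bit QID."""
    return qid >> QUEUE_ID_BITS, qid & ((1 << QUEUE_ID_BITS) - 1)

qid = make_qid(5, 0x1ABCD)
assert split_qid(qid) == (5, 0x1ABCD)
```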

23
LC Internal Frame Formats
(Diagram: the internal frame leaving the Ingress LC and the internal frame
arriving at the Egress LC, for a packet arriving on Port N and leaving on
Port M; the path runs LC → Switch → MR (IXP PE) → Switch → LC.)
24
LC External Frame Formats
(Diagram: external frame formats for P2P-DC (Configured), Multi-Access, and
Legacy Substrate Links.)
25
LC TCAM Lookup Keys
(Diagram: the internal frame leaving the Ingress LC and arriving at the
Egress LC, with key fields highlighted for the SL types P2P-DC (Configured),
P2P-VLAN0, P2P-Tunnel, Legacy, and Multi-Access.)
  • Blue Shading: fields that determine SL Type
  • Black Outline: key fields from pkt
26
LC TCAM Lookup Keys on Ingress
  • Key sizes by SL Type: P2P-DC 24 bits; IPv4
    Tunnel 72 bits; Legacy 24 bits; P2P-VLAN0 24
    bits; MA 72 bits
  • Blue Shading: fields that determine SL Type
  • Black Outline: key fields from pkt
(Diagram: P2P-Tunnel frame over a P2P-DC (Configured) link: DstAddr (6B),
SrcAddr (6B), Type = 802.1Q (2B), TCI = VLAN0 (2B), Type = IP (2B),
Ver/HLen/Tos/Len (4B), ID/Flags/FragOff (4B), TTL (1B), Protocol = Substrate
(1B), Hdr Cksum (2B), Src Addr (4B), Dst Addr (4B), MLI (2B), LEN (2B), Meta
Frame, PAD (nB), CRC (4B).)
28
LC TCAM Lookup Results on Ingress
  • We need the Ethernet Header fields to get the
    frame to the blade that is to process it next.
  • We also need a QID and RxMI
  • Ethernet header fields that are constants can be
    configured and do not need to be in the TCAM
    Lookup Result.
  • Ethernet Header fields
  • DAddr: depends on MetaLink
  • SAddr: can be constant and configured per LC
  • EtherType1: can be a constant (802.1Q)
  • VLAN (TCI): different for each MR
  • EtherType2: can be a constant (Substrate)
  • TCAM Lookup Result (76b)
  • VLAN (16b)
  • RxMI (16b)
  • DAddr (8b)
  • We can control the MAC Addresses of the Blades,
    so let's say that 40 of the 48 bits of DAddr are
    constant across all blades and 8 bits are
    assigned and stored in the Lookup Result.
  • Will 8 bits be enough to support multiple
    chassis?
  • We could go up to 12 bits and still use 64bit
    Associated Data
  • QID (20b)
  • Stats Index(16b)
  • What about Ingress → Egress Pass Thru MetaLinks?

29
LC TCAM Lookup Results on Ingress
  • TCAM Lookup Result (76b)
  • VLAN (16b)
  • RxMI (16b)
  • DAddr (8b)
  • We can control the MAC Addresses of the Blades,
    so let's say that 40 of the 48 bits of DAddr are
    constant across all blades and 8 bits are
    assigned and stored in the Lookup Result.
  • Will 8 bits be enough to support multiple
    chassis?
  • We could go up to 12 bits and still use 64bit
    Associated Data
  • QID (20b)
  • Stats Index(16b)

Result layout (three 32-bit words):
W0: VLAN (16b) | RxMI (16b)
W1: QID (20b) | Rsv (12b)
W2: DA (8b) | Stats (16b) | Rsv (8b)
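The 76-bit Ingress Result above packs into three 32-bit words; a packing sketch (my own illustration, with field placement as drawn on the slide):

```python
def pack_ingress_result(vlan, rxmi, qid, da, stats):
    """Pack the 76-bit Ingress Lookup Result into three 32-bit words:
    W0 = VLAN(16) | RxMI(16)
    W1 = QID(20)  | Rsv(12)
    W2 = DA(8)    | Stats(16) | Rsv(8)"""
    w0 = (vlan & 0xFFFF) << 16 | (rxmi & 0xFFFF)
    w1 = (qid & 0xFFFFF) << 12                        # low 12 bits reserved
    w2 = (da & 0xFF) << 24 | (stats & 0xFFFF) << 8    # low 8 bits reserved
    return w0, w1, w2

w0, w1, w2 = pack_ingress_result(vlan=0x0123, rxmi=0x0004,
                                 qid=0x12345, da=0xAB, stats=0x00FF)
```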
30
Pass Thru MetaLinks and Multi-Access SLs
  • When going MR → LC-Egress the MR may provide a
    Next Hop MN Address for the LC to use to map to a
    MAC address.
  • This is particularly used when the destination
    Substrate Link is Multi-Access and there may be
    multiple MAC addresses used on the same
    Multi-Access MetaLink.
  • When going LC-Ingress → LC-Egress for a pass
    through MetaLink, do we need to do something
    similar?
  • This could arise when a MetaNet has hosts on a
    multi-access network but the first Substrate
    Router that these hosts have access to does not
    have a MR for that MN.
  • However, I contend that if there is no MR on that
    access SR, then there is nothing there to
    discriminate between the multiple MN addresses on
    the single MA MetaLink and hence it cannot be
    supported.

31
Pass Thru MetaLinks and Multi-Access SLs
(Diagram: hosts Host1 ... HostN sit on a MA Network attached by a MA SL to
Substrate Router1, which has no MR; a P2P SL connects Substrate Router1 to
Substrate Router2, where the MR, its ML, and ARP live. Caption: no way to
communicate Next Hop addresses from the MR to the distant LC.)
  • Implications
  • We will not extend MA links across Substrate
    Routers and other Substrate Links.
  • MetaNets must place a MR in the substrate router
    that terminates a MA Substrate Link on which they
    want to support hosts.

32
LC TCAM Lookups on Egress
  • Key
  • VLAN(16b)
  • TxMI(16b)
  • Result
  • The Lookup Result for Egress will consist of
    several parts
  • Lookup Result
  • Constant fields
  • Calculated fields
  • Fields that can be stored in Local Memory
  • Some of these are common across all SL Types
  • Other fields are specific to each SL Type
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b) (Physical Interface 1-10 on LC RTM)
  • MLI(16b)
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)

33
LC TCAM Lookups on Egress
  • Key
  • VLAN(16b)
  • TxMI(16b)
  • Result
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)
  • Eth Hdr SA (48b) tied to physical interface (10
    entry table in Egress Hdr Format)
  • SL Type Specific Headers
  • P2P-DC Hdr (64b)
  • Constant (16b) In Egress Hdr Format
  • EtherType (16b) Substrate
  • Calculated (0b)
  • From Result (48b)

34
LC TCAM Lookups on Egress
  • Key
  • VLAN(16b)
  • TxMI(16b)
  • Result
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)
  • Eth Hdr SA (48b) tied to physical interface (10
    entry table in Egress Hdr Format)
  • SL Type Specific Headers
  • MA Hdr (64b)
  • Constant (16b) In Egress Hdr Format
  • EtherType (16b) Substrate
  • Calculated (0b)
  • ARP Lookup on NhAddr (Is ARP cache another
    database in TCAM?) (48b)

35
LC TCAM Lookups on Egress
  • Key
  • VLAN(16b)
  • TxMI(16b)
  • Result
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)
  • Eth Hdr SA (48b) tied to physical interface (10
    entry table in Egress Hdr Format)
  • SL Type Specific Headers
  • MA with VLAN Hdr (96b)
  • Constant (32b) In Egress Hdr Format
  • EtherType1 (16b) 802.1Q
  • EtherType2 (16b) Substrate
  • Calculated (0b)

36
LC TCAM Lookups on Egress
  • Key
  • VLAN(16b)
  • TxMI(16b)
  • Result
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)
  • Eth Hdr SA (48b) tied to physical interface (10
    entry table in Egress Hdr Format)
  • SL Type Specific Headers
  • P2P-VLAN0 Hdr (96b)
  • Constant (32b) In Egress Hdr Format
  • EtherType1 (16b) 802.1Q
  • EtherType2 (16b) Substrate
  • Calculated (0b)

37
LC TCAM Lookups on Egress
  • Result (continued)
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)
  • Eth Hdr SA (48b) tied to physical interface (10
    entry tbl in Egress Hdr Format)
  • SL Type Specific Headers
  • P2P-Tunnel Hdr for IPv4 Tunnel without VLANs
    (224b)
  • Constant (48b) In Egress Hdr Format
  • Eth Hdr EtherType (16b) 0x0800
  • IPHdr Version(4b)/HLen(4b)/Tos(8b) (16b) All can
    be constant?
  • IP Hdr TTL (8b) Initialized to a constant when
    sending.
  • IP Hdr Proto (8b) Substrate
  • Calculated (64b) By Egress Hdr Format
  • IP Pkt Len(16b) Calculated for each packet.

38
LC TCAM Lookups on Egress
  • Result (continued)
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)
  • Eth Hdr SA (48b) tied to physical interface (10
    entry tbl in Egress Hdr Format)
  • SL Type Specific Headers
  • P2P-Tunnel Hdr for IPv4 Tunnel with VLANs (256b)
  • Constant (64b) In Egress Hdr Format
  • First Eth Hdr EtherType (16b) 802.1Q
  • Second Eth Hdr EtherType (16b) 0x0800
  • IPHdr Version(4b)/HLen(4b)/Tos(8b) (16b) All can
    be constant?
  • IP Hdr TTL (8b) Initialized to a constant when
    sending.
  • IP Hdr Proto (8b) Substrate
  • Calculated (64b) By Egress Hdr Format

39
LC TCAM Lookups on Egress
  • Key
  • VLAN(16b)
  • TxMI(16b)
  • Result
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • Ignored for Legacy Traffic
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)
  • Eth Hdr SA (48b) tied to physical interface (10
    entry tbl in Egress Hdr Format)
  • SL Type Specific Headers
  • Legacy (IPv4) with VLAN Hdr (96b)
  • IP Header provided by MR!
  • Constant (16b) In Egress Hdr Format
  • EtherType1 (16b) 802.1Q

40
LC TCAM Lookups on Egress
  • Key
  • VLAN(16b)
  • TxMI(16b)
  • Result
  • Common across all SL Types (108b)
  • From Result (60b)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • Ignored for Legacy Traffic
  • QID (20b)
  • Stats Index (16b)
  • Local Memory (48b)
  • Eth Hdr SA (48b) tied to physical interface (10
    entry table in Egress Hdr Format)
  • SL Type Specific Headers
  • Legacy (IPv4) without VLAN Hdr (64b)
  • IP Header provided by MR!
  • Constant (0b)
  • Calculated (0b)

41
LC Lookup Block Parameters
  • All lookups will be Exact Match.
  • Ingress
  • Databases 1
  • 4 bits in Key identify the SL Type
  • 0000 DC
  • 0001 IPv4 Tunnel
  • 0010 Legacy (non-substrate) with or without VLAN
  • 0011 VLAN0
  • 0100 MA (with or without VLAN)
  • Core Size 72b
  • Key Size 24b - 72b
  • AD Result Size 64b, of which we'll use 60 bits
  • Egress
  • Databases 1
  • Core Size 36b
  • Key Size 32b
  • AD Result Size 128b, of which we'll use different
    amounts per SL Type
  • With one problem still to work out.

42
SUMMARY LC TCAM Lookups
| | DC | Tunnel w/ vlan | Tunnel w/o vlan | VLAN0 | MA w/ vlan | MA w/o vlan | Legacy w/ vlan | Legacy w/o vlan |
|---|---|---|---|---|---|---|---|---|
| Ingress Key | 24 | 72 | 72 | 72 | 24 | 24 | 24 | 24 |
| Ingress Result | 76 | 76 | 76 | 76 | 76 | 76 | 76 | 76 |
| Egress Key | 32 | 32 | 32 | 32 | 32 | 32 | 32 | 32 |
| Egress Result | 108 | 156 | 140 | 124 | 76 | 60 | 92 | 76 |
  • Ingress Key Size 24 bits or 72 bits
  • Ingress Result Size 76 bits
  • Egress Key Size 32 bits
  • Egress Result Size 60-156 bits
  • The IP Tunnel with VLANs Substrate Link option is
    a problem.
  • Discussion of ways to handle them is on the next
    slide
  • We also need to watch out for the Egress Result
    for Tunnels w/o VLANs. If we introduce anything
    else we want in there then we go beyond the 128
    bits supportable through the TCAM's Associated
    Data memory.

43
Handling IP Tunnel SL with VLANs
  • Result Fields (156 bits)
  • SL Type(4b)
  • Port(4b)
  • MLI(16b)
  • QID (20b)
  • Stats Index (16b)
  • Eth Hdr DA (48b)
  • IP Hdr Dst Addr (32b)
  • VLAN (16b)
  • 128 bits is max size of a Result stored in TCAM
    Associated Data SRAM
  • Options for handling this Result
  • Not allow this type of SL
  • Might be ok for short term but almost certainly
    not ok for long term.
  • Find 28 bits we don't really need in Result
  • Do a second lookup when we find a SL like this.
  • Do a Multi-Hit lookup and put two entries in for
    these SLs and only one entry for all others.
  • Then concatenate the two results when we get
    them.
  • Only allow a small fixed number of this type of
    SL
  • Store an index in the 4 bits we have left

44
MR Lookup Block
(Block diagram: each NPU, MR (NPUA) and MR (NPUB), runs the pipeline blocks
Rx, DeMux, Parse, Lookup, Header Format, and Tx; the NPUs share the TCAM and
each has an XScale control processor.)
45
Common Router Framework (CRF) Functional Blocks
(Diagram: CRF pipeline blocks Rx, DeMux, Parse, Lookup, Header Format, and
Tx, serving Meta Routers MR-1 . . . MR-n.)
  • Lookup
  • Function
  • Perform lookup in TCAM based on MR Id and lookup
    key
  • Result
  • Output MI
  • QID
  • Stats index
  • MR-specific Lookup Result (flags, etc. ?)
  • How wide can/should this be?

46
MR Lookup Block Requirements
  • Shared NP Lookup Engine specific
  • Number of Lookups per second required
  • 1 lookup required per packet
  • 5Gb/s per NP on a blade
  • Average sized packet 200Bytes, 1600 bits
  • If we assume 6.25 MPkts/sec for 10Gb/s, then for
    5Gb/s it would be 3.125 MPkt/s
  • We would want 3.125 M Lookups/sec per LA-1
    Interface, total of 6.25 M Lookups/sec for the
    TCAM Core.
  • Minimum Sized Packet 76Bytes, 608 bits
  • If we assume 16.45 MPkts/sec for 10Gb/s, then for
    5Gb/s it would be 8.225 MPkt/s
  • We would want 8.225 M Lookups/sec per LA-1
    Interface, total of 16.45 M Lookups/sec for the
    TCAM Core.
  • Number of MRs to be supported?
  • Will each get its own database? No. This would
    limit it to 16 which is not enough.
  • How many keys will each MR be limited to?
  • How much of Result can be MR-specific?
  • How much of Key can be MR-specific?
  • How are masks to be supported?
  • Mask core is same size as Data core. One mask per
    Entry
  • Global Mask Registers also available for masking
    key to match size of Entry during Multi Database
    Lookups where the multiple databases have
    different sizes.
  • How will multiple hits across databases be
    supported?

47
IPv4 MR Lookup Entry Examples
  • Route Lookup Longest Prefix Match
  • Entry (64b)
  • MR ID (16b)
  • MI (16b)
  • DAddr (32b)
  • Mask (32b): prefix-length high-order bits set to
    1
  • GM Match Lookup
  • Entry (148b)
  • MR ID (16b)
  • MI (16b)
  • If we could shorten the MR/MI fields by a total
    of 4 bits this would fit in a 144 bit core size.
  • SAddr (32b)
  • DAddr (32b)
  • Protocol (8b)
  • Sport(16b)
  • Dport(16b)
  • TCP_Flags (12b)
  • Mask Completely general, user defined.
  • EM Match Lookup

48
IPv4 MR Lookup Databases
  • How many databases to use?
  • Three Options
  • 3: a separate DB for each
  • 2: one DB for GM and one for RL and EM
  • 1: RL, GM and EM all in one DB
  • Assumptions
  • We want to be able to easily change priorities of
    Filters
  • We want Routes to use strict Longest Prefix
    Match: the longest prefix is the best match.
  • A Filter, either Exact Match or Range/Best Match,
    always takes precedence over a Route
  • EM is generally higher priority than Range/Best
    Match, but not always.
  • We still want the best highest priority match of
    each and then compare them.
  • We may not want to pay the overhead penalty of
    shuffling filter entries when we change
    priorities.
  • Currently unknown what the penalty will be.

49
IPv4 MR Lookup Databases
  • 3 Databases
  • Means we would use Multi Database Lookup (MDL)
    command
  • More efficient use of CAM core entries as each DB
    could be sized closer to its Entry size
  • Guaranteed at least one Result from each Database
    if an existing match existed in each database.
  • 2 Databases
  • We could use MDL command
  • Guaranteed one Result from GM and one from either
    EM or RL but not both!
  • Order is important
  • EM filters would all go first in EM/RL DB, with
    full masks.
  • At most one entry would match
  • EM filters would always be higher priority than
    Routes.
  • If no EM filter match, we would get the best RL
    match.
  • RL entries would be sorted by prefix length so
    first match was the longest.
  • We could use two separate commands: Lookup or MHL
    for GM, and MHL for EM/RL
  • Guaranteed at least one Result from each of GM,
    EM, RL if an existing match existed in each.
  • Price: two lookups per packet.
  • 1 Database
  • Use Multi Hit Lookup (MHL) command
  • Efficient use of the CAM core entries is a
    potential problem.

50
IPv4 MR Lookup Example 3 DBs
  • Order matters
  • Same Key will be applied to all Databases(MDL)
  • Multi-Database Lookup (MDL)
  • Each Database will use the number of bits it was
    configured for, starting at the MSB.
  • DAddr field needs to be first
  • TCP_Flags field needs to be last
  • Route Lookup Longest Prefix Match
  • Key (64b)
  • MR ID (16b)
  • MI (16b)
  • DAddr (32b)
  • GM Match Lookup Best/Range Match
  • Key (148b)
  • MR ID (16b)
  • MI (16b)
  • DAddr (32b)
  • SAddr (32b)
  • Protocol (8b)
  • Sport(16b)

51
IPv4 MR Lookup Example 3 DBs
Lookup Key: 148 bits, carried in 5 32-bit words (W1-W5) transmitted with the
MDL command. Fields: MR ID (16b), MI (16b), DAddr (32b), SAddr (32b), Proto
(8b), SPort (16b), DPort (16b), TCP_Flags (12b), Pad (12b); a GMR is applied
per database.
Core Entries for GM DB: 148 bits; Core Size: 288 bits.
GMR = 0xFFFFFFFFF 0xFFFFFFFFF 0xFFFFFFFFF 0xFFFFFFFFF 0xF00000000
0x000000000 0x000000000 0x000000000
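The GMR constant above (148 high-order ones across the 288-bit core, written as eight 36-bit chunks) can be checked with a short sketch (illustrative Python):

```python
def core_gmr(key_bits, core_bits):
    """Mask with key_bits high-order ones across a core_bits-wide entry."""
    return ((1 << key_bits) - 1) << (core_bits - key_bits)

# 148-bit GM key in a 288-bit core, shown as eight 36-bit chunks as on the slide
gmr = core_gmr(148, 288)
chunks = [(gmr >> (36 * i)) & ((1 << 36) - 1) for i in reversed(range(8))]
print(" ".join(f"0x{c:09X}" for c in chunks))
# 0xFFFFFFFFF x4, then 0xF00000000, then 0x000000000 x3
```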
52
IPv4 MR Lookup Result Examples
  • Result
  • QID(20b)
  • Output MI (16b)
  • Priority(8b) range 0-255
  • Drop(1b)
  • Port(8b)
  • Stats Index (8b) 256 indices for stats
  • Total of 61 bits
  • Each Database will have 64 bits of associated
    data, of which we will use the low order 61 bits.
  • For MDL lookups, only 61 of the 64 bits of
    Associated Data are returned.
  • RTN = 1
  • ADSP = 1
  • AD WIDTH = 01
  • Results Mailbox
  • D: Done (1b), set to 1 when ALL searches are
    completed.
  • H: Hit (1b), set to 1 if the search was
    successful and the result is valid, 0 otherwise.
  • MH: MHit (1b), set to 1 if the search was
    successful and there were additional hits in the
    database.
  • R: Reserved bits.
  • AD: Associated Data, 61 of the 64 bits of
    Associated Data from the Associated Data ZBT SRAM
    attached to the TCAM.

Results Mailbox layout (32-bit words):
W1: D | MH | H | AD[60:32] (1st Search)
W2: AD[31:0] (1st Search)
W3: R | MH | H | AD[60:32] (2nd Search)
W4: AD[31:0] (2nd Search)
W5: R | MH | H | AD[60:32] (3rd Search)
W6: AD[31:0] (3rd Search)
W7, W8: Not Used
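Decoding one search's pair of mailbox words might look like the sketch below. The exact bit positions of the flag bits are an assumption (the slides list D/MH/H alongside AD[60:32] but do not spell out positions); here they are placed in the top three bits of the high word.

```python
def parse_mdl_result(word_hi, word_lo):
    # Assumed layout for one search's mailbox pair:
    #   word_hi: [31]=Done, [30]=MHit, [29]=Hit, [28:0]=AD[60:32]
    #   word_lo: AD[31:0]
    done = (word_hi >> 31) & 1
    mhit = (word_hi >> 30) & 1
    hit  = (word_hi >> 29) & 1
    ad   = ((word_hi & 0x1FFFFFFF) << 32) | word_lo
    return done, hit, mhit, ad
```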
53
IPv4 MR Database Core Sizes
  • Route Database
  • Core Size 72b
  • Entries per Segment 8K
  • Number of Entries needed per route 1
  • Number of Routes per Segment 8K
  • GM Database
  • Core Size 288b
  • Entries per Segment 2K
  • Number of Entries needed per filter dependent on
    filter
  • Number of Filters per Segment < 2K
  • EM Database
  • Core Size 144b
  • Entries per Segment 4K
  • Number of Entries needed per filter 1
  • Number of Filters per Segment 4K
  • Configuration used in our FPX-based Router
  • 32 GM filters
  • 10K Ingress EM Filters
  • 10K Egress EM Filters

54
IPv4 MR Database AD Usage
  • Each Segment can be configured with a Base
    Address and a result size for calculating an
    address into the Associated Data.
  • The Associated Data is stored in a 512K x 36 bit
    ZBT SRAM
  • Using 64-bit Results gives us 256K slots in
    the AD SRAM.
  • 48K Route DB Entries
  • < 2K GM DB Entries
  • 20K EM DB Entries
  • Max Total of 70K Results needed.
  • Plenty of room in the AD for the IPv4 MR Results
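The slot arithmetic above can be checked with a small sketch. The per-segment Base Address and the two-SRAM-words-per-result layout are assumptions consistent with the stated 512K x 36 SRAM and 64-bit results.

```python
SRAM_WORDS = 512 * 1024  # 512K x 36-bit ZBT SRAM

def ad_address(segment_base, hit_index, result_words=2):
    # Each 64-bit result occupies two 36-bit SRAM words, so the
    # 512K-word SRAM holds 256K result slots.  The address is the
    # segment's configured base plus the hit index scaled by the
    # configured result size.
    addr = segment_base + hit_index * result_words
    assert addr < SRAM_WORDS
    return addr
```

With 48K + <2K + 20K = ~70K results needed, this comfortably fits the 256K slots.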

55
MPLS Lookup
  • MPLS uses a 20 bit Label
  • Key (52 bits)
  • MR ID (16b)
  • MI (16b)
  • MPLS_Label (20b)
  • Use an Exact Match Database
  • MPLS Label Database
  • Core Size 72b
  • Entries per Segment 8K
  • Number of Entries needed per label 1
  • Number of Labels per Segment 8K
  • Drop Bit
  • Does MPLS need a Drop Bit?
  • Perhaps it would treat a Miss the same as a
    Drop. That is, a label not entered in the
    Database indicates that frames using that label
    should be dropped.
  • But if we explicitly have a Drop bit, then Hits
    on those Entries could be counted separately from
    Misses.
  • What will MPLS Label Lookup Result look like?
  • New Label (20 bits)
  • QID(20b)
  • Output MI (16b)

56
MPLS Lookup Result Examples
  • Result
  • Reserved (3b): don't use these; they will not
    show up in the Results Mailbox.
  • New Label (20b)
  • QID(20b)
  • Output MI (16b)
  • Stats Index(16b)
  • Drop bit (1b)
  • Total of 73 bits (76 counting reserved bits)
  • DB will use 128 bits of associated data and will
    return the Associated Data followed by the
    Absolute Index.
  • We don't need the Absolute Index, and we don't
    need the top 3 bits of the AD. With this ordering
    we only have to read the first 4 words of the
    Results Mailbox instead of 5.
  • RTN = 1
  • ADSP = 1
  • AD WIDTH = 10
  • Results Mailbox
  • D: Done (1b), set to 1 when the search is
    completed.
  • H: Hit (1b), set to 1 if the search was
    successful and the result is valid, 0 otherwise.
  • MH: MHit (1b), set to 1 if the search was
    successful and there were additional hits in the
    database.
  • Absolute Index: index offset from the beginning
    of the TCAM array.
  • Associated Data: 128 bits of Associated Data
    from the Associated Data ZBT SRAM attached to the
    TCAM.

Results Mailbox layout (32-bit words):
W1: D | MH | H | Associated Data[124:96]
W2: Associated Data[95:64]
W3: Associated Data[63:32]
W4: Associated Data[31:0]
W5: Reserved[31:22] | Absolute Index[21:0]
W6-W8: Not Used
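Unpacking the 76-bit MPLS result (73 used bits plus 3 reserved) from the 128-bit Associated Data can be sketched as below; the MSB-first packing order is assumed from the field listing on the slide.

```python
def parse_mpls_result(ad):
    # ad: 128-bit Associated Data value.  Assumed packing, MSB down:
    # Reserved(3) | New Label(20) | QID(20) | Output MI(16) |
    # Stats Index(16) | Drop(1); the low-order 52 bits are unused.
    fields = {}
    pos = 128 - 3  # skip the 3 reserved bits at the top
    for name, width in (("new_label", 20), ("qid", 20),
                        ("output_mi", 16), ("stats_index", 16),
                        ("drop", 1)):
        pos -= width
        fields[name] = (ad >> pos) & ((1 << width) - 1)
    return fields
```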
57
Lookup Block Implementation Plan
  • Investigate impact of shortening MR_ID to 12 bits
  • How much shifting, masking and ANDing will this
    take?
  • How costly in cycles will that be?
  • Phase 0 Implement a generic Lookup Block with 1
    Database
  • With the right #ifdefs to generalize it, this
    should work for
  • LC-Ingress
  • LC-Egress
  • IPv4 MR with 1 combined DB
  • MPLS MR
  • This may be what we run for the November demo.
  • Phase 1 Implement a 2 or 3 DB IPv4 MR Lookup
    Block
  • I believe that for flexibility and ease of
    management this will be what we really want for
    this project and for ONL.
  • Phase 2 Shared NPU Lookup Block

58
Lookup Block
[Diagram: eight contexts CTX-0 through CTX-7 read KEYs from the In NN
Ring, issue them through the QDR SRAM NSE Interface / SRAM Controller
to the TCAM (NSE), and write Results to the Out NN Ring.]
59
Lookup Block
  • In NN !Empty
  • Input NN Ring is not empty, something for us to
    read.
  • Out NN !Full
  • Output NN Ring is not full, space for us to write
    to it.
  • NSE Result Read Done
  • Our Read of Results Mailbox has completed.
  • Next_Ctx Start
  • Our turn to read from the In NN Ring.
  • Next_Ctx Done
  • Our turn to write to the Out NN Ring.

[Diagram: CTX-x receives Next_Ctx Start and Next_Ctx Done from the
previous context, monitors In NN !Empty, Out NN !Full and NSE Result
Read Done, and asserts Next_Ctx Start and Next_Ctx Done to the next
context.]
60
Ingress LC Lookup Block Pseudocode
  • Initialization Phase
  • Start
  • Wait on ((Next_Ctx Start signal) and (In NN Ring
    !Empty signal))
  • Phase 1
  • Assert Next_Ctx Start signal
  • Read In NN Ring(buf_handle, Key, SL_Type)
  • Extract Key of correct size based on SL_Type
  • Build Lookup command (IDT Macro)
  • Send Lookup command to NSE (sram write
    instruction)
  • Calculate Delay Time and Wait (IDT Macro)
  • Phase 2
  • Issue Command to Read Result from Results Mailbox
    (IDT Macro)
  • Macro waits for the Result, checks the Done bit,
    and continues reading until the Done bit is set.
  • Wait for ((Next_Ctx Done signal) and (Out NN Ring
    !Full signal))
  • Phase 3
  • Assert Next_Ctx Done signal
  • Send (buf_handle, Result) to Out NN Ring
  • Wait on ((Next_Ctx Start signal) and (In NN Ring
    !Empty signal))
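The phase structure of this pseudocode can be modeled, very loosely, as a single-threaded loop. In this software sketch the inter-context ordering signals (Next_Ctx Start/Done) and NN ring flow control collapse away, and the NSE command/mailbox exchange is reduced to a callback; none of this models microengine timing.

```python
from collections import deque

def lookup_block(in_ring, out_ring, nse_lookup):
    # Per-context loop: Phase 1 reads (buf_handle, key) from the In
    # NN ring and issues the lookup; Phase 2 reads the Results
    # Mailbox (modeled as the nse_lookup callback returning a
    # result); Phase 3 writes (buf_handle, result) to the Out NN
    # ring.  Ring ordering among contexts is implicit here because
    # the model is single-threaded.
    while in_ring:
        buf_handle, key = in_ring.popleft()    # Phase 1
        result = nse_lookup(key)               # Phase 2
        out_ring.append((buf_handle, result))  # Phase 3
```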

61
Egress LC Lookup Block Pseudocode
  • Initialization Phase
  • Start
  • Wait on ((Next_Ctx Start signal) and (In NN Ring
    !Empty signal))
  • Phase 1
  • Assert Next_Ctx Start signal
  • Read In NN Ring(buf_handle, Offset, Key)
  • Extract Key from VLAN and TxMI
  • Build Lookup command (IDT Macro)
  • Send Lookup command to NSE (sram write
    instruction)
  • Calculate Delay Time and Wait (IDT Macro)
  • Phase 2
  • Issue command to Read Result from Results Mailbox
    (IDT Macro)
  • Macro waits for the Result, checks the Done bit,
    and continues reading until the Done bit is set.
  • Wait for ((Next_Ctx Done signal) and (Out NN Ring
    !Full signal))
  • Phase 3
  • Assert Next_Ctx Done signal
  • Send (buf_handle, Offset, Result) to Out NN Ring

62
IPv4 MR 3 DB Lookup Block Pseudocode
  • Initialization Phase
  • Initialize GMR_GM, GMR_EM, GMR_RL for each
    type/size of lookup database/key
  • Start
  • Wait on ((Next_Ctx Start signal) and (In NN Ring
    !Empty signal))
  • Phase 1
  • Assert Next_Ctx Start signal
  • Read In NN Ring(buf_handle, dram_ptr(?), Offset,
    MR_Id, Input_MI, MR_Mem_Ptr, Key)
  • Extract Key of correct number of bits
  • Build Multi Database Lookup (MDL) command using
    Key and GMR_GM, GMR_EM, GMR_RL
  • IDT Macro
  • Send MDL command to NSE (sram write
    instruction)
  • Calculate Delay Time and Wait (IDT Macro)
  • Phase 2
  • Issue command to Read Result from Results Mailbox
    (IDT Macro)
  • Macro waits for the Result, checks the Done bit,
    and continues reading until the Done bit is set.
  • If no hits, zero Out_Result and then set Miss bit
  • Else compare priority of hits and select highest
    priority and write into Out_Result
  • Wait for ((Next_Ctx Done signal) and (Out NN Ring
    !Full signal))
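The hit-selection step at the end of Phase 2 might look like this in software. Whether a numerically larger or smaller Priority value wins is not stated on the slides; this sketch assumes larger wins, and models the "zero Out_Result and set Miss bit" path as a sentinel dict.

```python
MISS = {"miss": 1}  # stand-in for a zeroed Out_Result with Miss set

def select_result(hits):
    # hits: one (hit_bit, priority, result) tuple per database
    # searched by the MDL, in database order.  Assumes a larger
    # Priority value is higher priority (convention is an
    # assumption, not specified on the slides).
    best = None
    for hit, prio, result in hits:
        if hit and (best is None or prio > best[0]):
            best = (prio, result)
    return best[1] if best else MISS
```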

63
Extra
  • The next set of slides are for templates or extra
    information if needed

64
Text Slide Template
65
Image Slide Template
66
IPv4 MR Lookup Result Examples
  • PROBLEM: with the MDL we do not get an index
    back with our results.
  • Hence we will not be able to easily increment a
    counter based on the lookup result.
  • Multi Database Lookup (MDL) cmd returns one and
    only one of the following per database searched
  • Absolute Index
  • Translated Index
  • Associated Data
  • Lookup cmd returns
  • Absolute Index followed by Associated Data
  • Associated Data followed by Absolute Index
  • Translated Index
  • Option 1
  • Three back-to-back Lookup cmds, one for each
    database.
  • Each result would provide us with the
  • result data
  • Index
  • This requires 3 times the number of lookups in
    the TCAM.
  • Option 2
  • Use MDL, but have the result include the index
    and not the data.
  • We would then keep the result data in a separate
    memory that we would read afterwards.

67
MPLS Lookup Result Examples
  • Note: No Stats Index is included in the Results
  • Option 1
  • Use the Absolute Index returned with the
    Associated Data to locate a counter to increment
  • Option 2
  • Increase the result size to 128 bits.
  • This would also allow us to put the Drop bit in.
  • Result
  • New Label (20b)
  • QID(20b)
  • Output MI (16b)
  • Stats Index(16b)
  • Drop bit (1b)
  • Total of 73 bits
  • Let's go with Option 2.

68
TCAM Latency Data
  • IDT App Note AN-459 IDT75K72234 Instruction
    Latency
  • Provides data and examples for latency
    calculations
  • Assumptions
  • The NSE has no instructions in the pipeline
  • The measured instruction is the only instruction
    issued
  • The other NSE interfaces are idle.

69
TCAM Latency Data
  • Example from IDT App Note AN-459
  • 288-bit Lookup (288/32 = 9 QDR cycles to
    transfer the 288-bit key)
  • 32-bits of Associated ZBT SRAM data is returned
  • QDR clock frequency is 200 MHz
  • System clock frequency is 200 MHz

Description        Clock Domain  Freq (MHz)  # of clocks  Time (ns)
QDR xfer time      QDR           200         9            45
Instruction FIFO   QDR           200         2            10
Synchronizer       System        200         3            15
Execution Latency  System        200         32           160
Re-Synchronizer    QDR           200         1            5
Total Time                                   47           235
  • Execution Latency numbers are from Table 1 of
    AN-459

70
TCAM Latency Data
  • Parameters for our LC Ingress Lookup
  • 128-bit Lookup (128/32 = 4 QDR cycles to
    transfer the 128-bit key)
  • 128-bits of Associated ZBT SRAM data is returned
  • QDR clock frequency is 200 MHz
  • System clock frequency is 200 MHz
  • Core Blocking (CB) Delay 8 cycles
  • Backend Latency 14 cycles

Description        Clock Domain  Freq (MHz)  # of clocks  Time (ns)
QDR xfer time      QDR           200         4            20
Instruction FIFO