1
  • The Ins and Outs of Layer 4 Switching

Dr. Shirish Sathaye (ssathae@alteon.com)
Vice President of Engineering
2
Is Layer 4 Switching Meaningful?
  • You can't switch at Layer 4, BUT you can use
    Layer 4 information to make switching decisions!
  • The term "Layer 4 Switching" is too confusing.
    It usually means one of two things:
  • 1. Layer 4 information is used to prioritize and
    queue traffic (routers have done this for years)
  • 2. Layer 4 information is used to direct
    application sessions to different servers (next
    generation load balancing)
  • Though the term may be meaningless, the idea and
    value of L4 switching are valid

3
Packet-by-Packet Traffic Management Is Insufficient
  • L-2 Switches and Routers
  • Increasing Hardware Integration
  • High performance
  • Optimized for packet-by-packet forwarding under
    normal conditions
  • Expensive exception handling
  • Hop-by-Hop Traffic Management
  • Stateless protocols: RSVP, IGMP, 802.1z,
    802.1p/Q, ...
  • Requires every device along path to collaborate
  • No built-in end-system feedback
  • Only useful for WAN and LAN/WAN boundary

4
Session-Based Traffic Management Required
  • Session-Aware Devices
  • Firewalls, traffic directors, packet shapers
  • End-to-End Traffic Management
  • ATM, TCP, HTTP, FTP, ...
  • Maintain session states
  • Built-in end-station feedback
  • Precise control over service quality,
    availability and performance
  • Per session handling is protocol and application
    specific
  • Requires session-specific software and massive
    processing power

5
How L4-Aware Systems Work
  • By making intelligent switching decisions to
    forward frames based on TCP/UDP port information
    and IP source/destination addresses (see the
    sketch after this list)
  • L4 switching = Session Switching
  • examines client requests directed at the L4
    switch
  • multiplexes client requests across any server
    available to handle those requests
  • passively measures application health and
    responsiveness to determine server availability
  • stateful processing
  • By combining the benefits of L4 software on a
    high-speed L2 switching platform
  • By using this information to establish policy
    controls for how traffic is to be managed
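To make the stateful, session-by-session behavior described above concrete,
here is a minimal Python sketch of L4 session switching. The class name,
server names, and VIP are hypothetical illustrations, not a vendor
implementation.

  # Minimal sketch: identify sessions by TCP/UDP 5-tuple, pick a server for
  # each new session, and keep forwarding that session to the same server.
  from dataclasses import dataclass
  from itertools import cycle

  @dataclass(frozen=True)
  class FlowKey:
      """TCP/UDP 5-tuple identifying a client session."""
      src_ip: str
      dst_ip: str
      protocol: str      # "tcp" or "udp"
      src_port: int
      dst_port: int

  class SessionSwitch:
      def __init__(self, virtual_ip, servers):
          self.virtual_ip = virtual_ip        # VIP that clients connect to
          self.pool = cycle(servers)          # hypothetical real-server pool
          self.sessions = {}                  # FlowKey -> chosen real server

      def forward(self, key):
          """Return the real server this packet should be sent to."""
          if key.dst_ip != self.virtual_ip or key.dst_port != 80:
              return "route-normally"         # not traffic for the VIP
          if key not in self.sessions:        # new session: pick a server
              self.sessions[key] = next(self.pool)
          return self.sessions[key]           # stateful: same server per session

  switch = SessionSwitch("10.0.0.1", ["srv1", "srv2", "srv3"])
  pkt = FlowKey("192.168.1.5", "10.0.0.1", "tcp", 40321, 80)
  print(switch.forward(pkt))                  # -> srv1 for this whole session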

6
Why is L4-switching important?
7
Emergence of L4-Aware Devices: Session Management
and Packet-Switched Devices

[Diagram: LAN clients behind firewalls reach an internal server farm, an
external server farm, a proxy cache, and a QoS manager; session-management
devices sit alongside packet-switching devices at the Internet and intranet
boundaries.]
8
Integrating L4 Switching
  • Single-function devices subsumed by routers and
    server switches
  • L4 switch functions
  • Multi-speed server connectivity
  • Reduce network overhead on servers
  • Monitor individual server/application
  • Application session management
  • Server load-balancing
  • Web cache redirection
  • High availability
  • Session-by-session QoS

[Diagram: L4 server switch connecting the Internet and intranet to
application servers, web servers, an NFS server, cache servers, and a
backup server.]
9
Traffic Management Required for New Global
Applications
Example: incremental delay experienced by a 64-byte packet queued behind
10 x 1,500-byte packets
  • T1 (1.544 Mbps): 80-100 millisec
  • Fast Ethernet (100 Mbps): 1-2 millisec
  • GbE (1 Gbps): 100-200 microsec
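These figures are dominated by serialization delay; a quick sketch to
reproduce them, assuming nominal link rates and counting only the time to
clock the queued bytes onto the wire:

  # Back-of-the-envelope check of the delay figures above (serialization only).
  def incremental_delay_ms(queued_packets, packet_bytes, link_bps):
      """Delay added while the queued packets are clocked onto the link."""
      return queued_packets * packet_bytes * 8 / link_bps * 1000.0

  for name, bps in [("T1", 1.544e6), ("Fast Ethernet", 100e6), ("GbE", 1e9)]:
      print(f"{name}: {incremental_delay_ms(10, 1500, bps):.2f} ms")
  # T1: ~77.72 ms, Fast Ethernet: ~1.20 ms, GbE: ~0.12 ms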
10
Key Layer 4-based Applications
  • 1. Local/Global Server load balancing
  • 2. High availability applications
  • 3. Web Cache Redirection
  • 4. DNS redirection
  • 5. Firewall Load Balancing
  • 6. URL-based redirection, switching

11
Local Server Load Balancing
  • Scalable application processing capacity
  • Add servers on-demand
  • High availability
  • Server/application health monitoring
  • Backup and overflow servers
  • Hot-standby switch configurations
  • Tiers-of-service by servers
  • Priority users/applications can be directed to
    premium servers
  • Integrated switch and load balancer
  • Flexibility
  • Scalability
  • Economy of scale
  • Performance

[Diagram: clients sending HTTP, FTP, DNS, and database queries through the
integrated L4 switch/load balancer to the server farm.]
12
Basic Configuration
13
Separate Real Server Groups
14
Multiple VIPs
15
Back-Up Servers
  • Real Servers can be configured as Back-Up Servers
    for other Real Servers or specified Virtual
    Services.
  • When backing up a Real Server, the Back-Up Server
    will come into service if the Real Server fails.
  • When backing up a Virtual Service, the Back-Up
    Server will come into service if all Real Servers
    which are part of the Virtual Service group fail.
  • Support for Back-Up Servers alone might be a
    compelling reason for customers to invest in L4
    Switching (a minimal failover sketch follows).
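A minimal sketch of the failover rule above; the server names and the health
set are hypothetical, not a vendor configuration:

  # Back-up selection: a backup comes into service only when the real
  # server(s) it backs up have failed.
  def pick_server(real_servers, backups, healthy):
      """Return the server to use for a virtual service, or None if all are down."""
      live = [s for s in real_servers if s in healthy]
      if live:
          return live[0]                      # normal case: use a real server
      live_backups = [b for b in backups if b in healthy]
      if live_backups:
          return live_backups[0]              # every real server failed: use backup
      return None                             # virtual service is unavailable

  print(pick_server(["rs1", "rs2"], ["bk1"], healthy={"bk1"}))   # -> bk1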

16
Load Balancing Algorithms
  • Round Robin
  • LeastConns
  • Load Based
  • Server Feedback Based (the first two are sketched below)
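A minimal sketch of the two simplest policies, Round Robin and LeastConns;
Load Based and Server Feedback Based policies would additionally weigh
reported server load or agent feedback (details assumed, not specified here):

  # Round Robin rotates through the pool; LeastConns picks the server with
  # the fewest active sessions (illustrative Python, not switch firmware).
  import itertools

  class RoundRobin:
      def __init__(self, servers):
          self._order = itertools.cycle(servers)
      def pick(self, active_conns=None):      # connection counts are ignored
          return next(self._order)

  class LeastConns:
      def pick(self, active_conns):
          # choose the server currently holding the fewest active sessions
          return min(active_conns, key=active_conns.get)

  conns = {"srv1": 12, "srv2": 3, "srv3": 7}
  print(RoundRobin(list(conns)).pick())       # -> srv1, then srv2, srv3, ...
  print(LeastConns().pick(conns))             # -> srv2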

17
Session ID Substitution: Client to Server
18
Session ID Substitution: Server to Client
19
Global Server Load Balancing: Issues
  • Increase application availability in event of
    entire site failure or overload
  • Scale application performance by load balancing
    traffic across multiple sites
  • Need for more granularity and control in
    directing Web traffic
  • More flexibility in building and managing
    Internet infrastructures

20
Distributed Content Sites Today
  • Mostly static content on Web (HTTP, FTP, NNTP..)
    servers
  • Load and site distribution through Round Robin
    DNS
  • No Site Health Awareness
  • No Site Performance Awareness
  • No Geographic Awareness
  • Cached DNS requests for servers that are down
    produce "failure to connect" messages

[Diagram: round-robin DNS rotating requests across www1.company.com,
www2.company.com, and www3.company.com.]
21
How L4 GSLB Works
1. Client's DNS request for www.foo.com is sent to
   the local DNS
2. Local DNS queries the upstream DNS
3. The switch at site C receives the DNS request and
   determines that sites B and C are closest to the
   user. Acting as Authoritative Name Server, the
   switch selects the best site (B) and returns site
   B's IP to the client's local DNS
4. The local DNS server responds to the client with
   site B's VIP
5. The client opens the application session to
   205.178.2.2 (site B)

[Diagram: sites A (162.113.25.20), B (205.178.2.2), and C (172.168.13.10)
each answer for www.foo.com and each holds a rank/traffic table, e.g.
rank 1: B (70-80), rank 2: C (15-20), rank 3: A (5-10). Site health,
response time, and throughput are exchanged between switches on a periodic
or event-driven basis using encoded DSSP updates.]
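A minimal sketch of the authoritative-DNS handoff in steps 3-4; the site
names, VIPs, and ranking come from the example above, while the function
itself is an illustration, not the switch's actual logic:

  # Acting as authoritative name server, return the VIP of the best-ranked
  # site that is currently known and healthy.
  def answer_dns_query(hostname, site_vips, handoff_ranking):
      for site in handoff_ranking:            # sites ordered best-first
          if site in site_vips:               # site is known and healthy
              return site_vips[site]
      raise RuntimeError("no healthy site for " + hostname)

  site_vips = {"A": "162.113.25.20", "B": "205.178.2.2", "C": "172.168.13.10"}
  print(answer_dns_query("www.foo.com", site_vips, ["B", "C", "A"]))
  # -> 205.178.2.2 (site B's VIP, returned to the client's local DNS)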
22
Distributed Site State Protocol
  • Lightweight, encoded protocol runs over HTTP
  • Used to exchange health, load, throughput
    information
  • Periodic Updates
  • Peer site performance behavior (one site's view
    of all other sites)
  • Local site status information (server health,
    current connections, etc)
  • Periodic Updates result in each switch building
    an Ordered Handoff Table
  • Triggered Updates
  • If a site observes that another site is
    unresponsive, it will Trigger all other sites to
    check the questionable site
  • If a site experiences a connection spike
    (reaching MaxConns) it will trigger an update to
    all other sites to stop Site Handoff (see the
    sketch below)
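A rough sketch of how a switch might consume these periodic and triggered
updates; the message fields and handling are assumptions, since DSSP itself
is a proprietary, encoded protocol carried over HTTP:

  from dataclasses import dataclass, field

  @dataclass
  class SiteUpdate:
      site: str
      healthy: bool
      current_conns: int
      max_conns: int
      peer_view: dict = field(default_factory=dict)  # this site's ranking of peers

  def handle_update(update, site_table, sites_to_recheck):
      site_table[update.site] = update                # periodic update: record state
      if not update.healthy:
          sites_to_recheck.add(update.site)           # triggered: re-check the site
      if update.current_conns >= update.max_conns:
          update.healthy = False                      # connection spike: stop handoff

  site_table, recheck = {}, set()
  handle_update(SiteUpdate("B", True, 950, 1000, {"C": 1, "A": 2}), site_table, recheck)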

23
Dynamic, Global Site Performance Knowledge
  • Sites ranked based on statistical site
    performance data
  • Test each remote site's (VIP) health, throughput,
    response, load and available capacity
  • Build Site Table based on time-averaged test
    results
  • Sites ranked based on global view of top sites
  • Periodically exchange Site Table with all peer
    sites
  • Computes a Weighted Handoff Table based on how
    frequently each site is ranked top performing by
    peers (sketched below)
  • Dynamic site ranking with triggered updates
  • If a site finds a peer site unresponsive, it will
    trigger all other sites to check the questionable
    site
  • If a site experiences a connection spike
    (reaching MaxConns) it will trigger an update
    to all other sites

[Diagram: peer sites A-D with sample measurements. Site D: 5 health checks,
25 MB/900 ms, 1000 active sessions, 1000 available sessions. Site C: 5
health checks, 25 MB/1800 ms, 2000 active sessions, 400 available sessions.
Site A: 5 health checks, 25 MB/1200 ms, 1200 active sessions, 600 available
sessions.]
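A sketch of the weighted-handoff computation referenced above: weight each
site by how often it is ranked top-performing in the peers' ordered lists.
The exact weighting scheme is an assumption made for illustration:

  from collections import Counter

  def weighted_handoff(peer_rankings):
      """peer_rankings: {peer: [sites, best first]} -> handoff weight per site."""
      top_votes = Counter(ranking[0] for ranking in peer_rankings.values())
      total = sum(top_votes.values())
      return {site: votes / total for site, votes in top_votes.items()}

  rankings = {"A": ["B", "C", "D"], "C": ["B", "A", "D"], "D": ["C", "B", "A"]}
  print(weighted_handoff(rankings))   # -> {'B': 0.67, 'C': 0.33} (approximately)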
24
Global Server Load Balancing: Advantages
  • No connection delay
  • Client geographic awareness based on DNS request
    origination
  • Distributed site performance awareness
  • Fair site selection
  • Statistical site performance measurements
    minimize impact of traffic spikes
  • Best performing sites get fair proportion of
    traffic but are not overwhelmed
  • Protection against best site failure
  • HTTP Redirect or IP Proxy as last resort
  • Straight-forward configuration
  • All IP protocols supported

25
Global Server Load Balancing: Site Performance
Awareness
  • Each site performs health and performance tests
    on all peer sites
  • Server switch views a peer VIP in a remote site
    as a remote server
  • Switch performs periodic health/performance
    checks on all remote servers
  • Switch builds ordered site handoff sequence per
    remote server
  • Dynamic site ranking based on global, statistical
    site performance data
  • Switch periodically exchanges site handoff
    sequence with all other peer sites
  • Switch recomputes site handoff sequence based on
    each peer site's ranking by all other peer sites

[Diagram: Peer Site 1 advertises VIP-1 for www.company.com and appears as a
remote server to Site 2; Peer Site 2 advertises VIP-2 for www.company.com
and appears as a remote server to Site 1.]
26
Web Cache Deployment Options
  • Proxy caching
  • Browser sends requests for web pages to cache
    instead of origin server
  • Transparent proxy caching
  • Browser sends requests for web pages to origin
    server
  • Cache sits in data path, examines all packets
    bound for the Internet, intercepts web traffic
    and processes web requests
  • Transparent proxy caching with web cache
    redirection
  • Browser sends requests for web pages to origin
    server
  • LAN switch sits in data path, examines all
    packets bound for the Internet, and redirects web
    traffic to cache(s), as sketched below
  • Cache(s) attached to web cache redirector
    processes web requests
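A minimal sketch of web cache redirection at the L4 switch: traffic to TCP
port 80 is steered to a cache, and hashing on the origin server's address
keeps requests for the same server on the same cache. The hash-based
selection and the names used here are illustrative assumptions:

  import zlib

  def redirect(dst_ip, dst_port, caches):
      if dst_port != 80:                        # non-web traffic passes through
          return "forward-to-internet"
      index = zlib.crc32(dst_ip.encode()) % len(caches)
      return caches[index]                      # same origin server -> same cache

  caches = ["cache1", "cache2", "cache3"]
  print(redirect("93.184.216.34", 80, caches))  # redirected to one of the caches
  print(redirect("93.184.216.34", 25, caches))  # SMTP is forwarded unchanged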

27
Transparent Proxy Caching with Web Cache
Redirection
  • Pro: Limited impact on non-Web traffic
  • Pro: No browser or cache administration required
  • Pro: Each client hits multiple caches
  • Takes advantage of data stored in all local
    caches, raising hit rate
  • Higher hit rates mean less user delay and less
    unnecessary WAN traffic
  • If any cache is down, traffic directed to other
    caches
  • Con: Must purchase and deploy web cache
    redirection hardware/software

[Diagram: hosts A, B, and C send HTTP requests through the L4 switch, which
redirects them across the attached cache servers.]
28
High Availability: Hot Standby Set-Up
29
Link Failure Detection and Failover
Single Link Failure
Combined Network/Server Failure
30
DNS Redirection
31
Firewall Load Balancing
32
Beyond Layer 4
33
Conclusion