Transcript and Presenter's Notes

Title: GPN Disaster Recovery Seminar September 25-26, 2007


1
GPN Disaster Recovery Seminar September 25-26,
2007
  • Scott Silverstein
  • Director, Channel and Enterprise sales
  • ssilvers@ciena.com
  • 303-302-3450

2
Agenda Day 1
  • 1:00 - 2:00  BC/DR / COOP - Making sure you are
    prepared. Listen as an outside consultant
    shares the common MythConceptions of BC/DR
  • 2:00 - 3:00  SAN 101 - An overview of storage
    technologies and protocols, and their differences
  • 3:00 - 3:15  Break
  • 3:15 - 4:15  GPN Panel Discussion - Campus Issues,
    Capital Issues, Data Center Constraints
  • 5:30 - 7:30  Share refreshments with other GPN
    members and our sponsoring vendors
  • Sullivan's Steakhouse Wine Cellar

3
Agenda Day 2
  • 9:00 - 10:00  A wide selection of products and
    services - How do you choose? IBM will
    detail their product offerings and BC/DR
    solutions
  • 10:00 - 11:00  Consolidating multiple protocols
    and applications - Intelligence on the edge
    of your network
  • 11:00 - 12:00  Business Protection Services -
    Qwest will detail how to protect your business
    in the six most critical areas
  • 12:00 - 1:00  Break for lunch
  • 1:00 - 2:30  Panel w/ round table discussion
  • 2:45  Conference Adjourns

4
Who is Ciena? The network specialist
Serving the world's largest and most advanced
service provider, government, and enterprise
networks
  • Specialty: Practical innovation for creating
    software-defined, service-selectable networks for
    highly flexible, adaptable, manageable and
    assured networking
  • Key offerings
  • Optical, data and access networking platforms
  • Network and service management systems
  • Global network services and professional
    services
  • Foundation for critical networks worldwide
  • Two-thirds of the world's 25 largest service
    providers
  • Global 2000 enterprises in finance, retail and
    healthcare
  • Federal, state and local government agencies
  • Founded 1992
  • HQ in Maryland, USA
  • Cumulative global market leader in long-haul DWDM
    and Intelligent Optical Switching
  • Over 250 patents held, with 100 pending
  • Numerous R&D centers worldwide
  • Service and support across the Americas, EMEA and
    Asia

5
Differentiating BC & DR
  • Business Continuity
  • A concept or plan that proactively seeks to put
    in place process, policy, systems and evaluation
    methods to avoid a down condition.
  • Disaster Recovery
  • A subset of BC that deals with the actual
    reactive process in the event of a down
    condition.

6
Business Continuity Requires Geographic Diversity
Power grid diversity
7
Can it happen to you?
  • During the WTC attack, Pace University was hit
    hard. According to Frank J. Monaco, CIO of the
    university, Pace lost dozens of people during the
    attack. Its World Trade Institute, which occupied
    the entire 55th floor of WTC Tower One, was
    completely destroyed. The main New York City
    campus, which is less than three blocks from the
    WTC site, was totally paralyzed, with no public
    phones, dorm phones, or Internet connectivity.
    The lives of over 14,000 students and 2,500
    staff and faculty were seriously interrupted, with
    no academic activities or school services for about
    7 days.
  • The LSU Hurricane Public Health Center (HPHC),
    also known as the Center for the Study of Public
    Health Impacts of Hurricanes, began
    multidisciplinary research on public health
    aspects of hurricanes and major floods in 2002.
    Comprised of a growing list of over forty
    researchers and advisory board members, in fields
    ranging from engineering to computer modeling to
    medicine, the Center's initial pilot study
    was to assess and mitigate the public health
    impacts of a major hurricane strike in New
    Orleans. This catastrophic event was in many
    ways realized in the fourth year of the five-year
    project by Hurricane Katrina in August
    2005. Prior to hurricane season 2005, however,
    many of the Center's research initiatives were
    applied to pre-Katrina hurricane
    planning and mitigation, as well as to later
    response and recovery efforts

8
A New Budget Market
Emergency Management is a New Budget Market.
Homeland Security funding is now being allocated
to all hazards, not exclusively to terrorist
threats.
New in FY2006: $1.8 billion in funding to 35
identified high-threat regions.
  • Hurricanes
  • Pandemics
  • WMDs
  • Earthquakes
  • Firestorms
  • Landslides
  • Tornados
  • Flooding
  • Hazmats

Source: Center for Digital Government, DHS FY06
9
Current Threats
  • Coastal storms account for 71% of recent US
    disaster losses annually.
  • Annual damage from tornadoes, hurricanes, and
    floods is $11.4 billion a year.
  • Drought results in average annual losses of between
    $6-8 billion across all sectors.
  • Over 10,000 dams in the US are classified as
    high-hazard - 10% of all dams in the country.
  • Source: FEMA, NOAA Magazine, National Center for
    Atmospheric Research (NCAR), H. John Heinz III
    Center for Science, Economics and the Environment

10
Emergency Preparedness & Response - Have you
thought of everything?
Natural Disasters: Severe Weather, Earthquakes,
Extreme Heat, Floods, Hurricanes, Landslides,
Mudslides, Tornadoes, Tsunamis, Volcanoes,
Wildfires, Winter Weather
Bioterrorism: Anthrax, Plague, Q Fever, Tularemia
Recent Outbreaks and Incidents: Bridge
Collapse, Anthrax, Salmonella, Formaldehyde, Asbestos,
Botulism, Blast Injuries, Salmonella Wandsworth
Outbreak, XDR Tuberculosis, Acanthamoeba
Infection, Salmonella Outbreak, Avian Flu
http://emergency.cdc.gov/
11
I'm 99% Sure We've Thought of Everything!
  • How good is 99%?
  • 120 newborns would be given to the wrong parents
    each day
  • 30,560 copies of tomorrow's Wall Street Journal
    would be missing one of the three sections
  • 200,000 incorrect drug prescriptions would be
    written each year
  • 8,800,000 credit cards would have incorrect
    cardholder information on their magnetic strips

12
BC/DR - Common MythConceptions
  • Bill Bedsole
  • President
  • William Travis Group
  • bbedsole@williamtravisgroup.com
  • 847-303-0055

13
Storage 101 - Bringing Up SAN
  • Garry Moreau, Senior Staff Alliance Consultant
  • Ciena Communications
  • gmoreau@ciena.com
  • (763) 421-1449

14
DAS - Direct Attached Storage
  • DAS
  • Storage is captive behind the server CPU
  • Data access is file-system and platform dependent
  • Server CPU must handle user I/O requests
  • Costly to scale, complex to manage

15
NAS - Network Attached Storage
  • NAS
  • File Oriented Access
  • Multiple Clients
  • Shared Access to Data

16
SAN - Storage Area Network
  • SAN
  • Block Level I/O Access
  • Interconnects different types of storage devices
  • SANs can incorporate subnetworks with
    network-attached storage (NAS) systems
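To make the file-level vs. block-level distinction concrete, here is a minimal Python sketch; the NFS mount point and block-device path are hypothetical examples, not part of the presentation.

```python
import os

# File-level access (NAS): the host reads through a network file system mount;
# the filer owns the file system. Path is a hypothetical example.
with open("/mnt/nas_share/reports/q3.csv", "rb") as f:
    file_data = f.read(4096)

# Block-level access (SAN): the LUN appears as a raw block device and the
# host's own file system or database manages the blocks. Device is hypothetical.
BLOCK_SIZE = 512
fd = os.open("/dev/sdb", os.O_RDONLY)
try:
    block_42 = os.pread(fd, BLOCK_SIZE, 42 * BLOCK_SIZE)  # read block number 42
finally:
    os.close(fd)
```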

17
Determining BC/DR Methods
  • Recovery Time Objective (RTO)
  • Time needed to recover from a disaster
  • How long can you afford to be without your
    systems?
  • Recovery Point Objective (RPO)
  • Age of the data you want the ability to restore
    in the event of a disaster
  • Any data created or modified inside your recovery
    point objective will be either lost or must be
    recreated during a recovery
  • Network Recovery Objective (NRO)
  • Time required to recover or fail over network
    operations
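As a rough illustration of how these objectives turn into pass/fail requirements for a design, here is a small sketch with hypothetical numbers (all in minutes):

```python
from dataclasses import dataclass

@dataclass
class RecoveryObjectives:
    rpo_minutes: float   # max tolerable data loss (age of last recoverable copy)
    rto_minutes: float   # max tolerable time to restore systems
    nro_minutes: float   # max tolerable time to fail over the network

@dataclass
class DesignEstimate:
    replication_lag_minutes: float    # worst-case lag of the remote copy
    restore_minutes: float            # detect fault + recover data + restart apps
    network_failover_minutes: float

def meets_objectives(obj: RecoveryObjectives, d: DesignEstimate) -> dict:
    return {
        "RPO": d.replication_lag_minutes <= obj.rpo_minutes,
        "RTO": d.restore_minutes <= obj.rto_minutes,
        "NRO": d.network_failover_minutes <= obj.nro_minutes,
    }

# Example: asynchronous mirroring with a 5-minute lag against a 15-minute RPO.
print(meets_objectives(RecoveryObjectives(15, 240, 30), DesignEstimate(5, 180, 20)))
```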

18
Disaster Recovery Concepts
Recovery Point
Recovery Time
  • Recovery Point Objective (RPO)
  • Point in time to which application data must be
    recovered to resume business transactions
  • Recovery Time Objective (RTO)
  • Maximum elapsed time required to complete
    recovery of application data

19
Recovery Point Objective (RPO)
  • Business needs drive the technology choice

Recovery Point
Recovery Time
20
Recovery Time Objective (RTO)
  • Business needs drive the technology choice

Recovery Point
Recovery Time
  • Recovery Time includes
  • Fault detection
  • Recovering data
  • Bringing apps back online

21
Synchronous Mirroring
[Diagram: source and target arrays mirrored across a MAN/WAN over DS3, OC-3/12/48, or STM-1/4/16 circuits]
22
Asynchronous Mirroring
[Diagram: source and target arrays mirrored asynchronously across a MAN/WAN through a CN 2000, over DS3, OC-3/12/48, or STM-1/4/16 circuits]
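The essential difference between the two modes is when the application receives its write acknowledgement. A simplified sketch (not any vendor's replication API) of the two behaviours, assuming a 10 ms WAN round trip:

```python
import queue
import threading
import time

ROUND_TRIP_S = 0.010            # assumed MAN/WAN round-trip time
remote_copy = []                # stands in for the target array
replication_q = queue.Queue()   # background queue used by the asynchronous path

def remote_write(block):
    time.sleep(ROUND_TRIP_S)    # model the trip to the remote site and back
    remote_copy.append(block)

def synchronous_write(block, local):
    local.append(block)
    remote_write(block)         # application waits for the remote acknowledgement
    return "ack"                # RPO ~ 0, but every write pays the WAN round trip

def asynchronous_write(block, local):
    local.append(block)
    replication_q.put(block)    # replicated later by the background thread
    return "ack"                # low write latency; RPO equals the replication lag

def replicate_forever():
    while True:                 # drains the replication queue in the background
        remote_write(replication_q.get())

threading.Thread(target=replicate_forever, daemon=True).start()
```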
23
GPN Panel Discussion - What are the issues?
Delmar R. Johnson, South Dakota State University
  • Greg Monaco, Ph.D., Executive Director
  • Great Plains Network Consortium

24
Business Continuity & Recovery Services - Keep your
business up and running with affordable solutions
  • Karen Watzel
  • Sales Program Manager, Business Continuity and
    Resiliency Services
  • IBM
  • Klwatzel@us.ibm.com
  • 719-325-7664

25
Consolidating Multiple Applications &
Protocols - Intelligence on the edge of your
network - MAN/WAN Infrastructure for BC/DR
  • Garry Moreau, Senior Staff Alliance Consultant
  • Ciena Communications
  • gmoreau@ciena.com
  • (763) 421-1449

26
AGENDA
  • Ciena Deployment Overview
  • Networking Options
  • Performance Considerations
  • Deployment Examples
  • Summary

27
Application Solution Qualifications
28
Ciena Data Center and Optical WAN
Solutions - Enterprise, R&E & Government
Applications
Regional and National Backbones - Optical
switching & transport for government agencies and
enterprises
29
Ciena Solutions for IBM Mainframe Environments
  • Lowest networking costs for GDPS / STP
  • Only IBM-certified system capable of multiplexing
    STP channels onto a single wavelength
  • This in turn drives highest scalability
  • Unprecedented flexibility, with programmable
    approach to enable evolution to advanced protocol
    support (IB)

[Diagram: primary data center linked to remote data centers - WDM over dark fiber for GDPS/STP, Metro Mirror, Global Mirror, and z/OS Global Mirror up to 100s of km; long-distance storage and LAN extension for Metro Mirror, Global Mirror, z/OS Global Mirror, Metro/Global Mirror, SVC, and PtP-VTS up to 1,000s of km]
30
Ciena's Enterprise Product Portfolio
ON-Center Manager
CN 2000
CN 4200 MC
CN 4200
31
AGENDA
  • Overview
  • Networking Options
  • Performance Considerations
  • Deployment Examples
  • Summary

32
Choices for Connecting Data Centers
  • Deploy Fiber between buildings
  • Build your own network (likely using WDM
    equipment)
  • Lease a connection from a carrier or service
    provider
  • Likely a SONET service, wavelength service, or
    an IP service
  • Manage your own network, and the gateways
  • Lease a Managed Storage Service
  • Combination of the gateway solution and the
    connectivity
  • Customized SLAs and service features focused on
    extension

33
Wide Area Network Considerations
  • Cost
  • Must meet Budget Constraints
  • Security
  • Guaranteed Isolation of Sensitive Data
  • Guaranteed Data Delivery
  • Performance
  • Minimal Impact on the Application with a High
    Throughput, Low Latency, and Rapid Restore Times
  • Capacity
  • Intelligent utilization of network resources
  • Manageability
  • Ability to Monitor/Report/Protect to Maximize
    Performance and Perform Rapid Fault Isolation.
  • Flexibility
  • Support for all Data Types (Storage, Voice, Data,
    Video) and Applications

34
MAN / WAN Options
Available Transports
  • WDM / Dark Fiber - Multiple services on a fiber,
    high capacity, campus or metro
  • SONET/SDH - Secure bandwidth, ubiquitous service
    access
  • GbE - QoS point-to-point GbE private line services
  • IP/PoS - Storage mapped into IP packets, usually
    through TCP/IP
  • ATM/PoS - Used by legacy channel extension
    technology, costly

35
MAN/WAN Options
36
IP Routed Network
[Diagram: at each site, FC-attached storage and GbE LANs connect through an FC-over-IP switch, an FC switch, and a router into the IP routed network]
  • Storage extension over IP networks (e.g. a leased
    GbE service) is only feasible if very high QoS
    SLAs can be guaranteed
  • Latency less than 5 ms
  • Packet Delivery Ratio (PDR) of 99.99% or higher
  • TCP can drive a maximum of about 100 Mbps over a
    network with a latency of 5 ms and a PDR of 99.99%
    (see the throughput sketch below)
  • IP networks must be dedicated to SAN extension in
    order to maximize PDR and minimize latency, and
    even then they cannot support synchronous
    applications due to their high latency
  • Bandwidth must be over-provisioned by up to 50%
    to account for dropped packets and retransmissions
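The ~100 Mbps ceiling quoted above can be reproduced with the standard Mathis approximation for TCP throughput, roughly MSS / (RTT × √loss). A sketch, assuming a 1460-byte MSS and treating the 5 ms figure as one-way latency:

```python
from math import sqrt

def tcp_throughput_mbps(mss_bytes: int = 1460,
                        one_way_latency_s: float = 0.005,
                        pdr: float = 0.9999) -> float:
    """Mathis approximation: achievable TCP rate ~ MSS / (RTT * sqrt(loss))."""
    rtt = 2 * one_way_latency_s
    loss = 1.0 - pdr
    return mss_bytes * 8 / (rtt * sqrt(loss)) / 1e6

# 5 ms one-way latency and a 99.99% packet delivery ratio:
print(f"~{tcp_throughput_mbps():.0f} Mbps")   # ~117 Mbps, in line with the figure above
```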

37
ATM Dedicated Network
[Diagram: FC switches at each site connect through channel extenders and ATM switches over an OC-3c/OC-12c ATM network]
  • Channel extenders represent a legacy approach to
    storage extension
  • They support ESCON/FC/FICON but neglect the overall
    data center extension requirement, which likely
    includes GbE
  • ATM mapping adds significant overhead, resulting
    in bandwidth inefficiency
  • Extremely high pricing
  • Adds significant latency to applications not
    requiring host emulation, resulting in decreased
    application performance

38
Gigabit Ethernet Private Line Services
  • GbE over wavelengths
  • Mapping directly into wavelength
  • Dark fiber distances

[Diagram: GbE mapped directly over DWDM on dark fiber]
  • GbE over SONET/SDH
  • Mapping directly into TDM
  • 43% of Ethernet Services in NA are based on SONET

[Diagram: GbE mapped over a SONET/SDH network]
  • GbE over switched service
  • Over shared service
  • Similar performance to IP VPN service

[Diagram: GbE or 100BASE-T over an L2 switched Ethernet network, or an L2 Ethernet VLAN over an L3 network]
Carriers' GbE services can have high performance,
BUT you still need to consider the switching/routing
equipment within the enterprise
39
Storage over WDM
[Table: protocol rate, service (fiber or wavelength), characteristics for storage, and today's applications]
  • Fiber relief
  • Native protocol carriage (GbE, Fibre
    Channel, ESCON)
  • Virtual Private Network
  • Bandwidth Leasing
  • Large capacity (80G in Metro)
  • Reach limited to metro distances
  • DWDM is cost effective for large numbers of services
    and volumes of data (> 10 Gbps)
  • CWDM provides a lower cost solution

40
Storage over SONET / SDH
  • What is SONET / SDH?
  • Self-monitored high performance networking
    technology
  • Ubiquitous network with over 150,000 installed
    carrier rings
  • Uses Time Division Multiplexing (TDM) to
    aggregate multiple signals together
  • Standardized rates from 50 Mbps to 40 Gbps
  • Why SONET / SDH?
  • Guaranteed, high bandwidth
  • Low latency
  • Deterministic
  • Secure, 99.999% availability network
  • Metro and Long Haul networks
  • National and International

SONET/SDH Based Services Perfectly Match the
Requirements of Business Continuance Applications
41
SONET as the Backbone
Native Services are Transported Worldwide over
SONET
  • 99% of all data traffic goes across the SONET
    network, including IP
  • Native protocols are mapped directly into SONET
    as soon as they leave the building or campus
    where they originated

42
AGENDA
  • Overview
  • Networking Options
  • Performance Considerations
  • Deployment Examples
  • Summary

43
Performance Considerations
  • Latency
  • Delay introduced by the intermediate equipment
    (i.e. switches, transport, speed of light in
    fiber, packet loss, jitter) which slows down the
    response time of applications
  • Bandwidth
  • Dedicate enough bandwidth to ensure optimum
    application performance, but only allocate the
    bandwidth required, to minimize MAN/WAN costs
  • Choose a storage networking technology that
    efficiently makes use of expensive MAN / WAN
    resources
  • Protocol Flow Control
  • Not having sufficient protocol extension
    capabilities will leave the application waiting

Latency, Bandwidth and Protocol Flow Control can
significantly impact application performance
44
Performance Considerations - Network Latency
Element Switching
  • Time of Flight
  • Time it takes light to traverse the network
  • 5 µsec per km
  • 1 ms per 125 miles

[Diagram: time of flight between two data centers across the network]
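Propagation delay alone sets a floor under the round-trip time of any synchronous application: at roughly 5 µs per km of fiber, the route length translates directly into milliseconds. A quick calculator:

```python
US_PER_KM = 5.0   # ~5 microseconds per km (speed of light in glass is ~2/3 c)

def round_trip_ms(route_km: float) -> float:
    """A write acknowledgement needs the data to go out and the ack to come back."""
    return 2 * route_km * US_PER_KM / 1000.0

for km in (50, 100, 200, 1000):
    print(f"{km:>5} km fiber route -> {round_trip_ms(km):.1f} ms round trip")
# ~200 km (about 125 miles) is ~1 ms one way, matching the rule of thumb above.
```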
45
TCP/IP Flow Control
[Diagram: a lost packet between Enterprise Location 1 and Enterprise Location 2 triggers the request "A packet is missing, please resend the last 20 packets"]
  • TCP enables guaranteed services across lossy IP
    networks
  • TCP has a mechanism to request retransmission of
    lost packets
  • Bandwidth wasted increases with high data rates
    and latency
  • When extending high-end applications over a lossy
    metropolitan or wide area network, severe
    performance limitations emerge
  • Need to consider
  • Packet Delivery Ratio
  • Packet Error Ratio
  • # of TCP Retransmits

Sources of Inefficiencies
  • Time to recognize that a packet is lost
  • Time for lost packet message to be sent
  • Amount of data resent

Storage-over-IP devices typically offload this
retransmission burden from the storage arrays, but
this does not eliminate the end-to-end latency
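The amount of data resent scales with the bandwidth-delay product: everything in flight between the loss and the arrival of the retransmission request is a candidate for resending. A rough back-of-the-envelope sketch with illustrative numbers, not a measurement:

```python
def resent_bytes_per_loss(link_mbps: float, rtt_ms: float,
                          extra_detect_ms: float = 0.0) -> float:
    """Data in flight between the loss and the retransmit request's arrival is a
    lower bound on what gets resent with go-back-N style recovery."""
    window_s = (rtt_ms + extra_detect_ms) / 1000.0
    return link_mbps * 1e6 / 8 * window_s

# A 1 Gbps leased GbE service with a 10 ms round trip:
print(f"~{resent_bytes_per_loss(1000, 10) / 1e6:.2f} MB resent per lost packet")
```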
46
Packet / Cell Loss Contributions
[Chart: average data throughput (Mbps, 0-1200) vs. packet/cell delivery ratio (99% to 99.999%) on an IP network, showing the typical and required operating regions relative to the provisioned bandwidth, with throughput drops at lost packets and timeouts]
  • Lost packet results in cutting throughput in half
  • Throughput slowly recovers over time
  • Similar effects with other IP and ATM Storage
    Solutions

Due to lost packets and retransmissions, goodput
is artificially limited
Performance of Storage Across IP and ATM
Infrastructures is Significantly Affected by Loss
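The halving-and-slow-recovery behaviour is TCP's additive-increase/multiplicative-decrease congestion control. A toy model of average goodput as a function of how often losses occur (illustrative parameters only):

```python
def average_goodput_mbps(link_mbps: float, loss_events_per_s: float,
                         increase_mbps_per_s: float = 10.0,
                         seconds: float = 300.0) -> float:
    """Toy AIMD model: between losses the rate grows linearly (capped at the
    provisioned bandwidth); each loss cuts the sending rate in half."""
    interval = 1.0 / loss_events_per_s
    rate, carried, t = link_mbps, 0.0, 0.0
    while t < seconds:
        end_rate = min(link_mbps, rate + increase_mbps_per_s * interval)
        carried += (rate + end_rate) / 2 * interval   # area under the rate curve
        rate = end_rate / 2                           # loss event: throughput halved
        t += interval
    return carried / seconds

# On a 1 Gbps provisioned link, even infrequent losses pull the average below line rate.
for losses in (0.01, 0.1, 1.0):
    print(f"{losses:>5} losses/s -> ~{average_goodput_mbps(1000, losses):.0f} Mbps average")
```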
47
Performance Considerations - Bandwidth: Tradeoff
Between Cost and Application Performance
  • Cost
  • MAN/WAN networking can be greater than 60% of the
    BC/DR application cost
  • Minimize the bandwidth used through
  • Data Compression
  • Efficient Mapping
  • Application Consolidation over a single transport
  • Performance
  • Need to consider the actual throughput of data
  • Need to guarantee application isolation and
    efficient delivery of data
  • If not enough bandwidth is allocated, or there is
    too much contention in the network, the application
    will hit a bottleneck

Optimum Solution: Bandwidth Allocated = Application
Requirement
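One way to approach "bandwidth allocated = application requirement" is to size the link from the data change rate, the replication window, and the expected compression ratio. A rough sizing sketch; all inputs are hypothetical:

```python
def required_wan_mbps(changed_gb_per_day: float,
                      replication_window_hours: float,
                      compression_ratio: float = 2.0,
                      headroom: float = 1.2) -> float:
    """Bandwidth needed to replicate a day's changed data within the window,
    after compression, with headroom for protocol overhead and bursts."""
    gb_on_wire = changed_gb_per_day / compression_ratio
    seconds = replication_window_hours * 3600
    return gb_on_wire * 8000 / seconds * headroom   # GB -> megabits, then per second

# Example: 500 GB of changed data per day, a 10-hour replication window, 2:1 compression.
print(f"Allocate roughly {required_wan_mbps(500, 10):.0f} Mbps of WAN bandwidth")
```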
48
Typical Deployment Comparison
  • Option 1: Storage over IP Routed Network

[Diagram, Option 1: at each site, disk and an FC switch connect through an FCIP gateway, an optional switch, a LAN switch, and a router (which may be consolidated into a single device) to the MAN/WAN over DWDM/OC-3 or GbE]
Multiple G/Ws, switches and routers add
latency, dropped packets, lower throughput and
complexity (Both Sides)
Option 2: Storage over DWDM/SONET/GbE Network
[Diagram, Option 2: at each site, disk, an FC switch (point-to-point or fabric mode), GbE LAN and router traffic connect directly to a CN 2000 carrying storage over SONET/DS3, DWDM/OC-n, or GbE across the MAN/WAN]
Guarantees Layer 1 isolation, yet aggregates and
compresses all traffic (FC and LAN) before
connecting to the WAN
49
Data Compression
[Diagram: a 1 Gbps protocol line rate carries a varying application rate of about 300 Mbps, reduced to under 150 Mbps of compressed data (dependent upon application content) by active multiplexing - IDLE frame removal - and content compression of data frames]
  • Each port has dedicated compression chips,
    ensuring wirespeed performance and maximum
    compression
  • Uses industry standard LZS (Lempel Ziv Stac)
    algorithm
  • Automatically disabled on a per-packet basis when
    data not compressible
  • Available on all protocols and all interfaces
  • FC/FICON/ESCON/GbE
  • DS3, OC-3/12/48, STM-1/4/16, DWDM, Dark Fiber

Per Port Hardware Compression Dramatically
Reduces WAN Bandwidth Costs
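The per-packet fallback can be illustrated with any LZ-family compressor; the sketch below uses zlib's deflate as a software stand-in for the LZS hardware described above, passing a frame through uncompressed whenever compression would not shrink it:

```python
import os
import zlib

def encode_frame(frame: bytes) -> tuple[bytes, bool]:
    """Compress a frame, but fall back to the raw frame when the payload is
    incompressible - mirroring the per-packet disable described above."""
    compressed = zlib.compress(frame, level=6)
    if len(compressed) < len(frame):
        return compressed, True
    return frame, False

idle_heavy_frame = b"IDLE " * 300        # repetitive payload: compresses well
random_frame = os.urandom(1500)          # already-random payload: passes through
for name, frame in (("repetitive", idle_heavy_frame), ("random", random_frame)):
    out, used = encode_frame(frame)
    print(f"{name}: {len(frame)} -> {len(out)} bytes, compression {'on' if used else 'off'}")
```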
50
Reducing Bandwidth - Dynamic Bandwidth Sharing
[Diagram: traffic aggregated toward the remote data center - Fibre Channel disk mirroring (65%), GbE tape backup (20%), and GbE LAN traffic (15%)]
  • Each port is assigned a guaranteed minimum
    percentage of the bandwidth
  • The percentage can be assigned to any value from
    0% to 100%
  • When only one port has traffic to send, it has
    access to the entire link
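A simple way to model the sharing policy: each active port first gets its guaranteed minimum, then any unused capacity is offered to ports that still have traffic queued. A sketch using the percentages from the example above, assuming a 1 Gbps aggregate link:

```python
def allocate_bandwidth(link_mbps: float, guarantees: dict, demand: dict) -> dict:
    """Give each port up to its guaranteed share first, then redistribute unused
    capacity to ports that still have traffic queued, weighted by their guarantees."""
    alloc = {p: min(demand.get(p, 0.0), link_mbps * share)
             for p, share in guarantees.items()}
    leftover = link_mbps - sum(alloc.values())
    hungry = {p: s for p, s in guarantees.items() if demand.get(p, 0.0) > alloc[p]}
    weight = sum(hungry.values())
    for p, s in hungry.items():
        alloc[p] += min(leftover * s / weight, demand[p] - alloc[p])
    return alloc

shares = {"fc_mirror": 0.65, "tape_gbe": 0.20, "lan_gbe": 0.15}
# Only the Fibre Channel mirror has traffic, so it gets the whole 1 Gbps link:
print(allocate_bandwidth(1000, shares, {"fc_mirror": 1000, "tape_gbe": 0, "lan_gbe": 0}))
```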

51
Performance Considerations - Standards & Protocol
Efficiency
52
AGENDA
  • Overview
  • Networking Options
  • Performance Considerations
  • Deployment Examples
  • Summary

53
Certification Test Results - IBM Global Mirroring
in IBM Tucson
[Chart: Global Mirror establish bandwidth (MB/sec, 0-240) vs. PPRC distance (0-12,000 miles), comparing Ciena with a competitor]
  • Data was extracted from an IBM white paper
    titled
  • IBM TotalStorage ESS Global Mirror
  • Asynchronous Peer-to-Peer Remote Copy Performance
    Perspectives - Aug 2004


54
Deployment Example - Storage over SONET
Why the Customer Chose Storage over SONET
  • > 50% capital cost savings
  • 2X improvement in WAN utilization
  • Detailed performance monitoring of services and
    WAN

55
Deployment Example - Fortune 1000 Financial Services
[Diagram: Atlanta, GA and Sterling Forest, NJ sites connected over a carrier SONET service, with optional GbE LAN traffic]
56
Deployment Example - DWDM and Storage over SONET
Why the customer chose DWDM and Storage over SONET
  • DWDM provided extremely low latency and
    significant capacity
  • 2.5 Gbps today scaling to 80 Gbps
  • Leased a SONET OC-3 service as a backup

57
AGENDA
  • Overview
  • Networking Options
  • Performance Considerations
  • Deployment Examples
  • Summary

58
Summary
  • Costs due to WAN extension dominate the business
    case for BC/DR solutions
  • Solutions vary greatly based on cost,
    performance, security and data availability
    targets - there is no single answer
  • Carriers and Service providers must be leveraged
    for solutions beyond fiber only
  • New products are available that both simplify the
    deployment of, and help drive down the costs
    associated with distance solutions for storage
    extension

59
Planning for the Worst - Network Design
Considerations
  • Rick Spencer
  • Staff of the CTO
  • Qwest Communications
  • rick.spencer@qwest.com
  • 720-578-9198