Title: CETS 2000 Session 36: OpenVMS Fibre Channel Update
1. OpenVMS Storage/SAN Technical Directions
Keith Parris, Systems/Software Engineer
HP Services, Multivendor Systems Engineering
Budapest, Hungary, 23 May 2003
Presentation slides on this topic courtesy of Christian Moser of OpenVMS Engineering
2. hp OpenVMS Storage Technology
- Review of V7.3-1 Storage Features
- New Storage Products
- New SAN Technology
- Post V7.3-1 Projects
- Itanium Based Systems IO Plans
- Longer Term Storage Interconnects
3. hp OpenVMS V7.3-1 Storage Features
- Disk Failover to the MSCP Served Path
- Static Multipath Path Balancing
- Multipath Poller Enhancements
- Multipath Tape Support
- Fibre Channel IOLOCK8 Hold Time Reduction
- Fibre Channel Interrupt Coalescing
- Distributed Interrupts
- KZPEA Fastpath
- Smartarray Support
4. hp OpenVMS V7.3-1 Storage Features
Disk Failover to the MSCP Served Path
- V7.2-1 through V7.3 support failover among direct paths (Fibre Channel and SCSI)
- V7.3-1 allows failover to an MSCP served path if all direct paths are down
- Automatic failback when a direct path is restored
- No failback on a manual path switch to the MSCP path
- Supported for multihost Fibre Channel and SCSI connections
- Multipath failover is enabled via MPDEV_ENABLE = 1 (default)
- MSCP path failover is enabled via MPDEV_REMOTE = 1 (default)
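The failover behavior above is governed by the two SYSGEN parameters named on this slide; a minimal DCL sketch of checking them and confirming the served path is visible (the device name is illustrative):

```
$ ! Both multipath parameters default to 1 (enabled)
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SHOW MPDEV_ENABLE
SYSGEN> SHOW MPDEV_REMOTE
SYSGEN> EXIT
$ ! List all paths for a disk, including any MSCP served path
$ SHOW DEVICE $1$DGA100 /MULTIPATH
```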
5. Typical Fibre Channel Configuration
[Diagram: two 4-CPU hosts joined by a LAN / cluster interconnect, each with dual PCI-attached FC adapters (FGA/FGB), cross-connected through redundant pairs of FC switches (up to 100 km apart) to two dual-ported HSG controllers (ports A and B).]
6. hp OpenVMS V7.3-1 Storage Features
Multipath Poller Enhancements
- To monitor path availability, a poller checks the status of all paths every 60 seconds
- Only one disk connected to a given path is selected for polling
- If the polled disk goes offline, other disks connected to the same path are polled
- If all disks on a given path go offline, the poller interval is decreased to 30 seconds
- SET DEVICE $1$DGA100 /NOPOLL disables polling to a specific device
- Poller is enabled via MPDEV_POLLER = 1 (default)
- The poller is used to drive failback of MSCP served paths
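The per-device poller control mentioned above can be sketched in DCL (the device name is illustrative):

```
$ ! Disable multipath polling for one device ...
$ SET DEVICE $1$DGA100 /NOPOLL
$ ! ... and re-enable it later
$ SET DEVICE $1$DGA100 /POLL
$ ! Polling as a whole is governed by the SYSGEN parameter MPDEV_POLLER
```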
7. hp OpenVMS V7.3-1 Storage Features
Multipath Poller Enhancements
- The poller reports broken paths in 3 ways:
- OPCOM messages (the only notification prior to V7.3-1)
- SHOW DEVICE /FULL
    Path PGB0.5000-1FE1-0015-2C5C (CLETA), not responding.
    Error count 0    Operations completed 1
  ( PIPE SHOW DEVICE /FULL | SEARCH SYS$INPUT responding )
- SHOW DEVICE /MULTIPATH
    Device               Device   Error          Current
     Name                Status   Count  Paths   Path
    $1$DGA4001 (CLETA)   Mounted      0   7/ 9   PGB0.5000-1FE1-0015-2C58
8. hp OpenVMS V7.3-1 Storage Features
Static Multipath Path Balancing
- Prior to V7.3-1, multipath selected the first path it found as the primary and current path (often causing disks to switch ports of the storage controller)
- In V7.3-1, multipath still selects the first path found as primary, but it may now switch the current path in an attempt to balance disks across online paths
- Initial path balancing occurs at startup
- At mount time (and when mount verification occurs), the online path with the fewest active connections is selected as current
- Path balancing only occurs within a given node (not cluster aware)
- Path balancing will not select offline paths (if possible)
- There is no failback capability if paths are later switched due to error conditions
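Because balancing is static and there is no automatic failback after an error-induced switch, an operator can move the current path back by hand; a hedged DCL example (the device and path names are illustrative):

```
$ ! List the paths known for the device
$ SHOW DEVICE $1$DGA4001 /FULL
$ ! Manually switch the current path to a specific controller port
$ SET DEVICE $1$DGA4001 /SWITCH /PATH=PGA0.5000-1FE1-0015-2C58
```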
9. hp OpenVMS V7.3-1 Storage Features
Multipath Tape Support
- Basic Fibre Channel tape support via MDR was introduced in V7.3 and backported to V7.2-2
- This featured connection via a single FC path
- Tapes could be configured across the 2 possible MDR FC paths using MDR SSP
- Tapes served to non-FC nodes via TMSCP
- V7.3-1 supports full tape multipath capability
- Automatic failover on path error
- Static path balancing via SET DEVICE $2$MGAx /SWITCH /PATH=xxx
- Failover between direct paths and TMSCP paths is not supported
- MDR/NSR and the tape drive are still single points of failure
- Tape robot failover doesn't work yet and will be fixed via a TIMA kit
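The static tape path balancing above uses the same DCL switch mechanism as disks; a sketch (device name and path identifier are illustrative):

```
$ ! Switch a multipath tape device to a named path
$ SET DEVICE $2$MGA0 /SWITCH /PATH=PGA0.5005-1FE1-0015-2C58
$ ! Confirm which path is now current
$ SHOW DEVICE $2$MGA0 /MULTIPATH
```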
10. Typical Fibre Channel Tape Configuration
[Diagram: two 4-CPU hosts, each with four PCI-attached FC adapters (FGA through FGD), connected through a pair of FC switches to dual-ported HSG controllers (ports A and B) and to an MDR/NSR Fibre Channel-to-SCSI bridge for the tape path.]
11. hp OpenVMS V7.3-1 Storage Features
Fibre Channel IOLOCK8 Hold Time Reduction
- OpenVMS currently synchronizes all IO activity with the systemwide SCS/IOLOCK8 spinlock
- This can become a significant bottleneck on an SMP system
- Current Alpha systems use 13-18us of IOLOCK8 per Fibre Channel disk IO
- Max system IO rate of 30K IO/sec (if all you do is disk IO and have a really good tailwind)
- Fibre Channel driver optimization reduces IOLOCK8 hold time by 3-6us per IO
- This optimization combined with interrupt coalescing cuts IOLOCK8 time by 50% and allows a 2x increase in maximum IO/sec
12. hp OpenVMS V7.3-1 Storage Features
Fibre Channel Interrupt Coalescing
- Aggregates IO completion interrupts in the host bus adapter
- Saves passes through the interrupt handler and reduces IOLOCK8 hold time
- Initial tests show a 25% reduction of IOLOCK8 hold time (3-4us per IO), resulting in a direct 25% increase in maximum IO/sec for high-IO workloads
- Controlled with SYS$ETC:FC$CP
- Default of OFF in V7.3-1
- Suggested setting: 8 IOs or 1 ms before interrupt
- Only effective at around 5K IO/sec or more on a single KGPSA
- Can be controlled per KGPSA
- This feature may negatively impact performance of applications dependent on high-speed single-stream IO
13. hp OpenVMS V7.3-1 Storage Features
Distributed Interrupts
- Allows hardware interrupts to be directly targeted to the preferred fastpath CPU
- Frees up CPU cycles on the primary processor
- Avoids IP interrupt overhead to redirect interrupts to the preferred fastpath CPU
- Automatically enabled on all fastpath devices
- You can only disable distributed interrupts by turning off fastpath (FAST_PATH = 0)
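Since distributed interrupts ride on fastpath, the only way to turn them off is to disable fastpath itself via SYSGEN; a minimal sketch (FAST_PATH is not a dynamic parameter, so the change takes effect at the next reboot):

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE CURRENT
SYSGEN> SET FAST_PATH 0
SYSGEN> WRITE CURRENT
SYSGEN> EXIT
$ ! Reboot for the new setting to take effect
```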
14. hp OpenVMS V7.3-1 Storage Features
KZPEA Fastpath
- Fastpath capability now available for KZPEA
- Reduces IOLOCK8 by 50%
- Allows secondary processors to run much of the IO stack
- Enabled via bit 2 in FASTPATH_PORTS (enabled by default)
- No plans to support SCSI clusters with KZPEA
15. hp OpenVMS V7.3-1 Storage Features
Smart Array Support (Latent)
- SmartArray 5300
- Backplane RAID adapter
- 2/4 Ultra3 SCSI channels
- Up to 56 drives
- 15K IO/sec, 200MB/sec
- Configured with console utility or host Web-based GUI
- Available Q3 2002
16. hp OpenVMS V7.3-1 Storage Features
Dynamic DKDRIVER Logging
- Dynamic per-UCB command logging
- Controlled by SDA
- Data display/analysis via SDA
- Intended for use by engineering to solve complicated field issues
- No data is logged or analyzed by default
17. New Storage Products
18. New Storage Products
EVA5000 (VCS V3)
- Enables 2Gb front-end FC ports
- 200MB/sec to a single volume (from VMS)
- 400MB/sec across the whole array (from VMS)
- Enables multi-level snapshots
- SSSU for host-based control
- Supports 15Krpm 36GB disks
- Requires V2 of the EVA Element Manager
- Supports CA/DRM for OpenVMS
19. Enterprise Virtual Array 3000
Delivering enterprise-class functionality to the mid-range market
- Provides enterprise-class availability and reliability features
- Nearly 2x the performance of our nearest competitors
- Supports space-saving vSnaps and immediately accessible Snapclones
- Scales up to 56 disk drives or 8.1TB
- Lower TCO by significantly reducing the amount of time needed to manage storage with eva3000's simplified, powerful, virtualized management
- Offering a solution at an unbeatable price
20. New Storage Products
XP128 / XP1024
- VMS support claimed by HP prior to the merger
- Current support is for V7.2-2
- Expanded support as required
- Qualification is very expensive
- Performance vis-à-vis EVA is totally unknown
- Significant performance is possible
- Huge cache (32GB)
- Up to 24 FC ports
- Up to 1024 spindles
- Very nice performance tools
- OpenVMS working with Storage to qualify larger cluster configurations
21. New Storage Products
Modular Storage Array 1000
- 2Gb Fibre Channel front end
- 4 U160 SCSI back-end ports
- 4U rackmount with 14 drives
- 28 additional drives with 2 external storage shelves
- Works in existing SANs
- Low-cost 2-node clusters with embedded 3-port FC-AL hub (V7.3-1 only)
- Supported with V7.2-2, V7.3, V7.3-1
- Available Q2 CY2003
22. MSA1000 SAN Solution
FC connection options:
- Single 2Gb port for external switch
- 3-port 2Gb hub
- 8-port 2Gb switch (Brocade)
23. New Storage Products
Modular Storage Array 1000
- High-performance architecture
- 200MB/sec throughput
- 25,000 IO per second
- Redundant controller support
- Active/Standby in initial product
- Active/Active in future
- RAID 0, 1, 0+1, 5 and ADG
- LUN masking (SSP)
- 2Gb/1Gb auto-sense host ports
- Dual cache modules
- Upgradeable to 512MB (per controller)
- Serial-line config and management
24. Compaq StorageWorks Enclosures
Expanding Storage with a Dual Bus Enclosure
Expanding Storage with Single Bus Enclosures
25. Low End FC-AL Based Cluster
[Diagram: two 2-CPU hosts on a LAN / cluster interconnect, each with dual PCI-attached FC adapters (FGA/FGB), connected directly via FC-AL to a dual-controller MSA (ports A and B).]
26. New Storage Products
Smart Array 5300
- 2/4 U160 SCSI channels
- Up to 56 drives (4TB)
- RAID 0/1/5/ADG
- Up to 256MB cache
- Supported on V7.3-1
- 300MB/sec
- 20K IO/sec
- Available now
- Doesn't support forced-error commands, so full shadowing support is an issue. Shadowing works fine, but a member will be ejected if an unrecoverable disk error occurs on one member and an error cannot be forced on the shadow copy
27. HSG / MSA / EVA Volume Construction
[Diagram: how logical volumes map to physical storage on each array family. HSG80 (XP also): RAID 0, RAID 1, and RAID 5 volumes built directly on physical disks. MSA1000 / Smart Array 5300: RAID 0, RAID 1, RAID 5, and ADG volumes. EVA: logical Vraid volumes layered over virtualized physical storage.]
28. Storage Positioning
(Spectrum: Direct Attached - Small Clusters - Homogeneous SAN - Heterogeneous SAN)
- Data Center: multiplatform and operating system support; zero downtime (for example, Secure Path or DRM); storage consolidation
  - Enterprise Virtual Array; EMA16000, EMA12000, and MA8000
- Dept/Branch Office: cluster solutions; limited technical pool; more price-sensitive; remote management
  - MSA1000
- Workgroups: lower entry price; simple deployment; mostly SCSI-based
  - Smart Array
29. New Storage Products
Network Storage Router M2402
- 1U Fibre Channel-to-SCSI router
- 2Gb FC support
- 4 module slots
- 2x 2Gb Fibre Channel
- 4x LVD/SE SCSI
- 4x HVD SCSI
- Web-based management
- Embedded product for tape libraries (E2400)
- Supported in V7.3-1
- TIMA kit for V7.3 (FIBRE_SCSI V300)
- TIMA kit for V7.2-2 (FIBRE_SCSI V300)
30. New Storage Products
Network Storage Router N1200
- 1U Fibre Channel-to-SCSI router
- High performance - 2Gb FC support
- 200MB/sec of information throughput
- 1 2Gb Fibre Channel port
- 2 U160 LVD SCSI ports
- Web-based management
- Embedded product for tape libraries (E1200)
- Supported in V7.3-1
- TIMA kit for V7.3 (FIBRE_SCSI V300)
- TIMA kit for V7.2-2 (FIBRE_SCSI V300)
31. New Storage Products
Tape Libraries
- ESL 9595
- Up to 16 SDLT / LTO drives
- Up to 4 2Gb FC ports
- Up to 595 cartridges
- 95TB of uncompressed data
- 900GB/hour uncompressed backup rate (SDLT 160/320)
- 1.7TB/hour uncompressed backup rate (LTO-2)
- MSL 5052
- 4 SDLT / LTO drives
- 52 cartridges
- MSL 5026
- 2 SDLT / LTO drives
- 26 cartridges
32. New Storage Products
LTO Tape Drives
- VMS will never support Ultrium 1 drives
- No support for transfers of an odd number of bytes
- VMS supports the Ultrium 2 (LTO460) drives
- Testing completed on all current AlphaServer systems
- Supported in both direct-attach SCSI and behind the FC bridges (NSR, MDR)
- Currently testing these drives with ESL/MSL libraries; a support statement on those is coming soon
33. Recent Backup Performance Measurements
34. New SAN Technology
35. New SAN Features
2Gb SANs
- Full family of 2Gb switches
- Brocade
- SAN Switch 2/8, 2/16, 2/32, 2/64
- 2Gb firmware 3.0.2f will interoperate with the installed base of the 1Gb SAN Switch family running v2.6.0c firmware
- Does not interoperate with the original 1Gb switch DSGGA-AA/AB
- McData
- Edge Switch 2/16, 2/24, 2/32
- SAN Director 2/64, 2/140
- McData qualified on VMS, but most testing occurs on Brocade
- Interoperability of Brocade/McData is not supported
36. New SAN Features
Bigger SAN Switches
- SAN Switch 2/32
- 32-port monolithic switch
- 4-port 8Gb/sec ISL trunking available
- Core Switch 2/64
- 2 x 64-port blade-type switch
- 16 ports per blade
- Total of 128 2Gb ports
- 4-port 8Gb/sec ISL trunking available
- Redundant everything
- Highest performance, highest-end SAN option
37. New SAN Features
LP9802
- Emulex LP9802 will be the next-generation Fibre Channel HBA
- PCI-X capable
- 1 2Gb port
- 35K IO/sec
- Mid-2003 availability
- Positioned as replacement for LP9002
- V7.2-2, V7.3, V7.3-1 support
38. Post V7.3-1 Projects
39. Post V7.3-1 Projects
Dynamic Volume Expansion
- Storage arrays today can dynamically expand volumes (HSG/HSV/MSA)
- OpenVMS cannot utilize the expanded volume without re-initializing
- OpenVMS dynamic volume expansion will allow allocation of extra bitmap space at INIT time, and then later expansion of the volume size while the device is mounted:
- INIT /LIMIT $1$DGA100 ! Allocates a 1TB bitmap
- Change the volume size using storage-subsystem commands
- SET VOLUME $1$DGA100 /SIZE=xxxxxx
- For volumes initialized prior to Opal, there will be the capability to dismount the volume, expand the size, and re-mount (without re-initializing the volume)
- This same project will also enable shadowing of different-sized volumes
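The planned expansion workflow above, sketched end-to-end in DCL (volume label and size are illustrative; the /LIMIT and SET VOLUME /SIZE syntax is as proposed on this slide):

```
$ ! At INIT time, reserve bitmap space for future growth
$ INITIALIZE /LIMIT $1$DGA100 DATAVOL
$ MOUNT /SYSTEM $1$DGA100 DATAVOL
$ ! ... later, grow the underlying unit with the array's own tools,
$ ! then tell OpenVMS the new size while the volume stays mounted
$ SET VOLUME $1$DGA100 /SIZE=419430400   ! new size in blocks
```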
40. Post V7.3-1 Projects
Quieter Mount Verification
- In a SAN there are many reasons for "normal" mount verifications:
- Path switch by another cluster node
- Dropped FC packets (not the norm, but it does happen)
- Rezone of a SAN (causes in-flight IO to be dropped)
- These result in mount verification messages that alarm the users
- Quiet mount verification will allow infrequent, immediately recovered mount verifications to be suppressed from operator logs
- A sysgen parameter will allow current behavior
41. Post V7.3-1 Projects
Host-Based MiniMerge
- Recent cancellation of the HSG80 Write History Logging project redirected the MiniMerge plans
- Working on design/prototype of a host-based MiniMerge solution, based on Write Bit Map technology (from MiniCopy)
- Will support ALL FC types (HSG, HSV, MSA, XP)
- Does NOT require any storage firmware assists
- Schedule to be published in June 2003
42. Post V7.3-1 Projects
Fibre Channel as a Cluster Interconnect
- Development project underway to prototype SCS traffic over Fibre Channel
- Provides LAN over FC
- Uses PEDRIVER to provide SCS communication
- Goal is to provide stretch clusters without requiring an additional cluster interconnect
- May also have cleaner failure characteristics because SCS and storage will fail as a single unit
- CI/MC-class latency is not a goal (and not possible)
- It is a non-goal to provide general TCP/IP or DECnet over FC links (but we'll probably get it for free)
- Prototype working since Sept 2002
- Release goal is an Opal TIMA kit
43. Itanium Based Systems IO Plans
44. OpenVMS Itanium Based Systems IO Plans
- SCSI
- U160 controller (LVD only)
- U320 controller (LVD only)
- No plans for multi-host SCSI
- Host Bus Adapter RAID
- SmartArray family
- Fibre Channel
- 2Gb adapter
- Storage Cluster Interconnect
- Storage Arrays
- HSG / EVA / MSA / XP
45. Longer Term Storage Interconnects
46. Long Term Storage Interconnects
- 10Gb Fibre Channel
- 2004???
- Very expensive infrastructure costs at first
- iSCSI
- Industry has stagnated somewhat in 2002
- Has some promise as a low-cost way to connect PCs to a SAN
- Host performance overhead is the main issue today
- Next Generation SCS Cluster Interconnect
- InfiniBand and iWARP are being investigated
- Looking at interaction of SCS with storage in these new interconnects
- Target for these is only Itanium-based OpenVMS systems