Transcript and Presenter's Notes

Title: Hardening Linux for Enterprise Applications


1
(No Transcript)
2
Hardening Linux for Enterprise Applications
Session id: 40274
  • Peter Knaggs & Xiaoping Li
  • Oracle Corporation
  • Sunil Mahale
  • Network Appliance

3
Agenda
  • Hardening Linux - Using NIC Failover for HA
  • Understanding network bonding driver
  • System Requirements & Configuration
  • Test Procedure & Observation
  • Status & Statistics information
  • Summary
  • Q & A

4
Hardening Linux - Using NIC Failover for HA
  • Redundant data paths to networked storage
  • Ability to tolerate failures of NICs
  • Active/Active Load balancing or failover
  • Achieving HA in Oracle environments with NAS

5
Understanding network bonding driver
  • Linux bonding driver to accomplish NIC failover
  • Included in 2.4 kernel
  • Bonds multiple network interfaces
  • Configured as a loadable kernel module
  • Understanding functionality of NIC failover in
    Oracle

6
System Configuration
  • Hardware
  • Linux Systems
  • 2 Intel white boxes with 4 CPUs and 3GB RAM
  • 3 Intel Pro1000 Gigabit Ethernet NICs per
    system
  • Storage
  • 3 Network Appliance F880 filers
  • Total of 18 Disk Shelves with 3TB usable storage
  • Total of 5 Gigabit Ethernet NICs
  • Switch
  • Cisco 6509 Gigabit Ethernet Switch

7
System Requirements
  • Software
  • Linux Systems
  • Red Hat Advanced Server 2.1, kernel 2.4.9-e.12
  • Intel Pro1000 Ethernet driver (e1000_4412k1)
  • Oracle 9i Release 2 database
  • Storage
  • NetApp Filer F880 running Data ONTAP 6.4.1

8
NIC Failover environment
(Diagram: the server's bond0 interface carries the redo log I/O path to filer LOG1 and the data file I/O paths to filers DATA1 and DATA2, all through the Gigabit Ethernet switch)
9
Setup & Configuration
  • Servers
  • Setup the server with Red Hat Advanced Server
    2.1, kernel 2.4.9-e.12
  • Use the e1000_4412k1 module for the Intel GigE
    NICs
  • Configure the GigE NICs in a private network
  • Ensure the GigE NICs are connected to the Cisco
    switch

10
Setup & Configuration
  • Servers (cont)
  • Bonding Driver/module
  • Check if the bonding driver is loaded (lsmod)
  • Check to see if there is a module to load
    (bonding.o)
  • Load the bonding module into the kernel
    (modprobe)
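A minimal sketch of these checks as shell commands (module paths assume the stock Red Hat 2.4 kernel layout):
    # Check whether the bonding driver is already loaded
    lsmod | grep bonding
    # Check whether the bonding.o module is available to load
    find /lib/modules/`uname -r` -name bonding.o
    # Load the bonding module into the kernel
    modprobe bonding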

11
Setup & Configuration
  • Servers (cont)
  • Configure two GigE network interfaces as eth3 and
    eth4
  • Use the e1000_4412k1 module for eth3 and eth4
  • Bring down all the interfaces using the e1000
    module
  • Unload the default e1000 module (rmmod e1000)
  • Load the new e1000 module (modprobe e1000_4412k1)
  • Bring up all the network interfaces
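As a sketch, the driver swap could be done as follows (eth3/eth4 are the interface names used in this setup; any other interfaces bound to e1000 would need the same treatment):
    # Bring down the interfaces currently using the default e1000 module
    ifdown eth3
    ifdown eth4
    # Unload the default driver and load the Intel e1000_4412k1 build
    rmmod e1000
    modprobe e1000_4412k1
    # Bring the interfaces back up
    ifup eth3
    ifup eth4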

12
Setup & Configuration
  • Servers (cont)
  • Configuring the bond0 virtual interface
  • Add the alias for bond0 interface to
    /etc/modules.conf
  • alias bond0 bonding
  • Create the configuration file for bond0 interface
  • /etc/sysconfig/network-scripts/ifcfg-bond0
  • DEVICE=bond0
  • IPADDR=10.1.3.101
  • NETMASK=255.255.255.0
  • NETWORK=10.1.3.0
  • BROADCAST=10.1.3.255
  • BOOTPROTO=none
  • ONBOOT=yes
  • GATEWAY=130.35.148.1
  • USERCTL=no
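The matching /etc/modules.conf entries might look like this; the miimon and mode values are illustrative, not taken from the original slides:
    alias bond0 bonding
    # Hypothetical example: poll link state every 100 ms, round-robin mode
    options bond0 miimon=100 mode=0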

13
Setup & Configuration
  • Servers (cont)
  • Bring down the eth3 and eth4 interfaces to be used
    for bond0
  • Unmount any file systems or volumes currently
    mounted by eth3 and eth4
  • Delete the configuration files for eth3 and eth4
  • Remove the ifcfg-eth3 and ifcfg-eth4 from
    /etc/sysconfig/network-scripts
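A hedged sketch of the teardown (the NFS mount point is hypothetical; the config file paths follow the Red Hat conventions above):
    # Unmount anything currently reached through eth3 or eth4 (example path)
    umount /mnt/oradata1
    # Bring the interfaces down
    ifconfig eth3 down
    ifconfig eth4 down
    # Remove their per-interface configuration files
    rm /etc/sysconfig/network-scripts/ifcfg-eth3
    rm /etc/sysconfig/network-scripts/ifcfg-eth4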

14
Setup & Configuration
  • Servers (cont)
  • Create the bond0 virtual interface
  • modprobe bonding
  • ifconfig bond0 netmask 255.255.255.0 broadcast
    10.1.3.255
  • ifconfig bond0 10.1.3.101
  • ifenslave bond0 eth3
  • ifenslave bond0 eth4
  • ifconfig bond0 up
  • Check that bond0, eth3 and eth4 have the same
    MAC address (see the check below)
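One way to verify the shared MAC address, assuming the 2.4-era net-tools ifconfig output format:
    ifconfig bond0 | grep HWaddr
    ifconfig eth3 | grep HWaddr
    ifconfig eth4 | grep HWaddr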

15
Setup & Configuration
  • Storage
  • Configure the 3 NetApp filers
  • 2 filers are used for storing Oracle datafiles, 1
    for Oracle log files (DATA1, DATA2 and LOG1)
  • DATA1 and DATA2 each have 2 GigE NICs configured
  • Filer LOG1 has 1 GigE NIC configured
  • Filers DATA1 and DATA2 each have 4 logical volumes
  • Filer LOG1 has 1 logical volume
  • All the GigE NICs are connected to the Cisco
    switch
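The filer volumes are then mounted on the Linux servers over the bonded interface. A hedged example of one such NFS mount (the mount options, hostname and paths are illustrative, not from the original slides):
    # Mount an Oracle data volume from filer DATA1 over NFSv3/TCP
    mount -t nfs -o rw,hard,intr,tcp,vers=3,rsize=32768,wsize=32768 \
        data1:/vol/oradata1 /mnt/oradata1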

16
Setup & Configuration
  • Switch
  • Enable channel trunking or port trunking
  • Interfaces eth3 and eth4 from the server are
    connected to 2 ports of the switch
  • Create a port channel for these ports
  • Console> (enable) set port channel 4/1-2 on
  • Where
  • eth3 & eth4 are connected to ports 4/1-2
  • Enable portfast for the ports (set spantree
    portfast), as sketched below
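On a CatOS-based Catalyst 6509 the two steps might look like this; the port range 4/1-2 comes from the slide, while the exact portfast syntax is an assumption:
    Console> (enable) set port channel 4/1-2 on
    Console> (enable) set spantree portfast 4/1-2 enable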

17
Test Procedure & Observation
  • Non Database Tests
  • Oracle Database Tests

18
Test Procedure & Observation
  • Non Database Tests
  • Copy a large file over the bond0 interface to
    the NetApp filer
  • Simulate NIC failure
  • Bring down the eth3 interface of bond0
  • ifconfig eth3 down
  • Bring up the eth3 interface
  • ifconfig eth3 up
  • Pull out the network cable on the enslaved
    interface, eth3
  • Observations
  • I/O load was distributed over eth3 and eth4 of
    bond0
  • I/O load switched to the remaining interface,
    eth4
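A rough sketch of the non-database test sequence (the file name and NFS mount point are hypothetical):
    # Generate load over bond0 with a large copy to a filer mount
    cp bigfile.dbf /mnt/oradata1/ &
    # Watch per-interface traffic while the copy runs
    sar -n DEV 3 10
    # Simulate a NIC failure, then recovery, on the enslaved interface
    ifconfig eth3 down
    ifconfig eth3 up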

19
Test Procedure & Observation
  • Database Tests
  • Create a very large database
  • Create a large Oracle 9i OLTP database (1TB) on
    Filers
  • Run the OLTP workload with 55 users, around 6500
    tpmC
  • The workload was run for about 30min
  • Simulated NIC failure by pulling network cable
  • Observation
  • Average load on the bond0 interface was about
    10MB/s
  • The network traffic on eth3 and eth4 was evenly
    spread
  • The effect of simulated NIC failure on throughput
    was < 10%

20
Test Procedure & Observation
  • Testing with new bonding driver
  • The new bonding driver at HP's website
  • http://h18007.www1.hp.com/support/files/networking/nics
  • Has been running in Oracle data centers with good
    stability
  • Download the RPMs, build and install the driver
  • Remove the default module and load the new one
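The build and install might look roughly like this; the RPM file names and build paths are placeholders, not the actual package names:
    # Rebuild the downloaded source RPM and install the result
    rpm --rebuild bonding-<version>.src.rpm
    rpm -ivh /usr/src/redhat/RPMS/i386/bonding-<version>.i386.rpm
    # Take the bond down, then swap the loaded module for the new build
    ifconfig bond0 down
    rmmod bonding
    modprobe bonding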

21
Test Procedure & Observation
  • Testing with active/passive mode with new bonding
    driver
  • Load the new module with mode=1
  • modprobe bonding mode=1
  • The I/O load will be only on the first slave NIC
  • The other slave will act as a backup
  • When the active slave fails, the backup will take
    over
  • You must have portfast enabled on the switch
    for the ports
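To make the active/passive setting persistent across reboots, the mode can go in /etc/modules.conf; a sketch (the miimon value is illustrative):
    alias bond0 bonding
    # mode=1 selects active-backup; miimon enables link monitoring
    options bond0 mode=1 miimon=100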

22
Status & Statistics information
  • Advantages of the new bonding driver
  • Clear status information in the proc file system
  • cat /proc/net/bond0/info
  • Bonding Mode: active-backup
  • Currently Active Slave: eth3
  • MII Status: up
  • MII Polling Interval (ms): 100
  • Up Delay (ms): 0
  • Down Delay (ms): 0

  • Slave Interface: eth4
  • MII Status: up
  • Link Failure Count: 7
  • Slave Interface: eth3
  • MII Status: up
  • Link Failure Count: 8

23
Status & Statistics information
  • Advantages of the new bonding driver
  • Clear status information from the dmesg log file
  • modprobe bonding miimon=100
  • dmesg
  • bonding.c:v1.0.1-2HP
  • bond0 registered with MII link monitoring set to
    100 ms, in bonding mode.
  • bond0 registered without ARP monitoring

24
Status & Statistics information
  • Advantages of the new bonding driver
  • Clear status information from the sar report
  • I/O load on the bond interface bond0 is
    consistent with its slaves
  • In load balancing mode, the I/O activity shown on
    bond0 is the sum of its slaves
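A per-interface report like the one on the next slide can be generated with sysstat's sar network-device statistics, for example sampling every 3 seconds (interval and count are arbitrary):
    sar -n DEV 3 20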

25
Status & Statistics information
  • sar activity report

    11:07:33 AM      IFACE   rxpck/s   txpck/s      rxbyt/s
    11:07:36 AM       eth3   5935.88   2853.82   8454566.78
    11:07:36 AM       eth4   4564.45   2835.22   6491304.32
    11:07:36 AM      bond0  10500.33   5689.04  14945871.10

26
Status & Statistics information
  • Advantages of the new bonding driver
  • Clear status information in the rpm database
  • rpm -qil bonding
  • Useful man pages

27
Summary
  • The bonding driver can be used for NIC failover
  • Provides redundant data paths for networked
    storage
  • The default bonding driver only supports load
    balancing
  • The new driver supports Active/Passive or load
    balancing
  • The effect of simulated NIC failures on throughput
    was < 10%
  • Achieves HA in Oracle environments with NAS

28
Q & A