
How to Configure Solaris Link-based IPMP for Oracle VIP [ID 730732.1]

  Modified 23-NOV-2011     Type REFERENCE     Status PUBLISHED

In this Document
  Purpose
  Scope
  How to Configure Solaris Link-based IPMP for Oracle VIP
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.2.0.2 - Release: 10.2 to 11.2
Oracle Solaris on SPARC (64-bit)
Oracle Solaris on x86-64 (64-bit)

Purpose

This note gives a sample configuration of the Link-based failure detection mode for IPMP, which was introduced on the Sun Solaris 10 platform.

Before Sun Solaris 10, only Probe-based failure detection was available for IPMP; an example of that configuration can be found in Note 283107.1.

The main differences between Probe-based IPMP and Link-based IPMP are:
- In Probe-based IPMP, besides the host's physical IP address you also need to assign a test IP address to each NIC, plus at least one target system (normally the default gateway) that the multipathing daemon probes with ICMP messages.

- In Link-based IPMP, only the host's physical IP address is required. A sketch of the two /etc/hostname.* styles follows this list.
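
To make the difference concrete, here is a minimal sketch of the /etc/hostname.ce0 entry in each mode. The probe-based example assumes a test address of 130.35.100.200; only the link-based form is used in the rest of this note.

Probe-based /etc/hostname.ce0 (assumed test address 130.35.100.200):
130.35.100.123 netmask + broadcast + group racpub up \
addif 130.35.100.200 deprecated -failover netmask + broadcast + up

Link-based /etc/hostname.ce0:
130.35.100.123 netmask + broadcast + group racpub up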

Scope

By default, link-based failure detection is always enabled in Solaris 10, provided that the driver for the interface supports this type of failure detection. The following Sun network drivers are supported in the current release:


hme
eri
ce
ge
bge
qfe
dmfe
e1000g
ixgb
nge
nxge
rge
xge
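
Before relying on link-based detection, it is worth confirming that the interface driver actually reports link state. On Solaris 10 this can be checked with dladm; the output below is only illustrative, not captured from the example system.

# dladm show-dev
ce0             link: up        speed: 1000  Mbps       duplex: full
ce1             link: up        speed: 1000  Mbps       duplex: full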



Network Requirement
--------------------------------
There is no difference between Probe-based and Link-based IPMP in terms of hardware requirements.

Only one physical IP address is required per cluster node. The following NIC cards and IP addresses are used in the example below.
- Public Interface: ce0 and ce1
- Physical IP: 130.35.100.123
- Oracle RAC VIP: 130.35.100.124
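
Both addresses should resolve to host names on every cluster node. A minimal /etc/hosts sketch for this example, assuming the node name tsrac1 (used later with srvctl) and an assumed VIP name of tsrac1-vip, would be:

130.35.100.123   tsrac1
130.35.100.124   tsrac1-vip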

How to Configure Solaris Link-based IPMP for Oracle VIP

IPMP Configuration
-----------------------------
1. ifconfig ce0 group racpub
2. ifconfig ce0 addif 130.35.100.123 netmask + broadcast + up
3. ifconfig ce1 group racpub

To preserve the IPMP configuration across reboots, you need to update the /etc/hostname.* files as follows:
1. The entry of /etc/hostname.ce0 file
130.35.100.123 netmask + broadcast + group racpub up

2. The entry of /etc/hostname.ce1 file
group racpub up

Before the CRS installation, the 'ifconfig -a' output will be:

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255
ether 0:19:b9:3f:87:11
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 130.35.100.123 netmask ffffff00 broadcast 130.35.100.255
groupname racpub
ether 0:14:d1:13:7b:7e
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname racpub
ether 0:18:e7:8:c5:8b


Since no test IP is assigned to the public interfaces, ce0 carries the physical IP address and ce1 shows 0.0.0.0.
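
At this point the IPMP group itself can be exercised with if_mpadm before CRS is installed. This is only an optional sanity check, and it assumes if_mpadm can offline ce0 in this link-based group:

# if_mpadm -d ce0    (detach ce0; 130.35.100.123 should move to a ce1 logical interface)
# ifconfig -a        (verify the address is now on ce1)
# if_mpadm -r ce0    (reattach ce0)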

CRS / VIPCA configuration
----------------------------------------
After root.sh completes successfully during the CRS installation, vipca will configure only the primary interface as the public interface. If you start the vipca application manually, the second screen (VIP Configuration Assistant, 1 of 2) will list only ce0 as the available public interface.

After that, you need to update CRS with the second NIC (ce1) information using the srvctl command:

# srvctl modify nodeapps -n tsrac1 -A 130.35.100.124/255.255.255.0/ce0\|ce1
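
The interface list stored for the VIP can then be checked with the following command (the node name tsrac1 matches the modify command above); its output should show the VIP 130.35.100.124 with both ce0 and ce1 as its interfaces:

# srvctl config nodeapps -n tsrac1 -a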

After CRS is installed and the Oracle VIP is running, the 'ifconfig -a' output will be:

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255
ether 0:19:b9:3f:87:11
ce0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 130.35.100.123 netmask ffffff00 broadcast 130.35.100.255
groupname racpub
ether 0:14:d1:13:7b:7e
ce0:1: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 3
inet 130.35.100.124 netmask ffffff00 broadcast 130.35.100.255
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname racpub
ether 0:18:e7:8:c5:8b


When the primary interface on the public network fails, whether because of a faulty NIC or a broken LAN cable, the Oracle VIP follows the physical IP address and fails over to the standby interface, as the following 'ifconfig -a' output shows:

lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
hme0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.1.1 netmask ffffff00 broadcast 192.168.1.255
ether 0:19:b9:3f:87:11
ce0: flags=19000802<BROADCAST,MULTICAST,IPv4,NOFAILOVER,FAILED> mtu 0 index 3
inet 0.0.0.0 netmask 0
groupname racpub
ether 0:14:d1:13:7b:7e
ce1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask ff000000 broadcast 0.255.255.255
groupname racpub
ether 0:18:e7:8:c5:8b
ce1:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 130.35.100.123 netmask ffffff00 broadcast 130.35.100.255
ce1:2: flags=1040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4> mtu 1500 index 4
inet 130.35.100.124 netmask ffffff00 broadcast 130.35.100.255
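
Failback behaviour after the primary interface is repaired is controlled by in.mpathd through /etc/default/mpathd. With the default settings shown below (as shipped in Solaris 10), the physical IP and the Oracle VIP should move back to ce0 automatically once its link is restored:

# cat /etc/default/mpathd
FAILURE_DETECTION_TIME=10000
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes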

References

NOTE:283107.1 - Configuring Solaris IP Multipathing (IPMP) for the Oracle 10g VIP
NOTE:368464.1 - How to Setup IPMP as Cluster Interconnect
docs.oracle.com/cd/E19253-01/816-4554/mpoverview/index.html



Products
  • Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Enterprise Edition
Keywords
CLUSTERWARE; CONFIGURATION; SOLARIS; VIP

Posted by [PineTree]