ORACLE/RAC · 2023. 3. 6. 16:31

When the Oracle DB resource listing shows instances 1 and 2 in swapped order after a database failure,
removing the database resource and re-registering it puts them back in order.

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.racdb.db
      1        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             _1,STABLE
      2        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             _1,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
This confirms that the DB instances are listed in swapped order (instance 1 on rac2, instance 2 on rac1).

Stop the database:
srvctl stop database -d racdb
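Instead of reading the full crsctl listing, the placement can also be checked directly with srvctl before and after the stop. A minimal sketch, assuming the unique name racdb used in this post (requires a live cluster):

```shell
# Show which node each instance is running on; the output is one line per
# instance (exact wording varies slightly by version)
srvctl status database -d racdb
```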

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.racdb.db
      1        ONLINE  OFFLINE                               Instance Shutdown,STABLE
      2        ONLINE  OFFLINE                               Instance Shutdown,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Shutdown confirmed.

Remove the resource:
[racdb1:/home/oracle]> srvctl remove database -d racdb
Remove the database racdb? (y/[n]) y
Removal complete.
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
The DB resource no longer appears.

Register the DB resource:
[racdb1:/home/oracle]> srvctl add database -d racdb -oraclehome /u01/app/oracle/product/19c/db_1

Register the instance resources:
[racdb1:/home/oracle]> srvctl add instance -db racdb -instance racdb1 -node rac1
[racdb1:/home/oracle]> srvctl add instance -db racdb -instance racdb2 -node rac2
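Before starting, it is worth confirming what was just registered. A sketch, assuming the same names as above (requires a live cluster):

```shell
# Dump the registered configuration for the database resource
srvctl config database -d racdb

# Show the resource and its instance placement in tabular form
crsctl stat res ora.racdb.db -t
```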

[racdb1:/home/oracle]> srvctl start database -d racdb
PRCR-1079 : Failed to start resource ora.racdb.db.
CRS-5017: The resource action "ora.racdb.db start" encountered the following error: 
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/db_1/dbs/initracdb1.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/rac1/crs/trace/crsd_oraagent_oracle.trc".

CRS-5017: The resource action "ora.racdb.db start" encountered the following error: 
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/19c/db_1/dbs/initracdb2.ora'
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/rac2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.racdb.db' on 'rac2' failed
CRS-2632: There are no more servers to try to place resource 'ora.racdb.db' on that would satisfy its placement policy
CRS-2674: Start of 'ora.racdb.db' on 'rac1' failed

If startup fails like this, create an init file under $ORACLE_HOME/dbs on each of nodes 1 and 2 that points to the spfile.
The error occurs because the instance cannot find its parameter file.
[racdb1:/u01/app/oracle/product/19c/db_1/dbs]> vi initracdb1.ora 
spfile='+data/racdb/parameterfile/spfile.274.1093539721'

Node 2:
[racdb2:/u01/app/oracle/product/19c/db_1/dbs]> vi initracdb2.ora 
spfile='+data/racdb/parameterfile/spfile.274.1093539721'
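The spfile path above is specific to this cluster. If you do not know yours, it can be located in ASM and then written into a one-line pfile. A sketch, with /tmp standing in for $ORACLE_HOME/dbs and the path taken from this post:

```shell
# Locate the spfile in ASM first (run as the grid user), e.g.:
#   asmcmd find --type PARAMETERFILE +DATA "*"

# Write a one-line pfile that points at it; in real use the file goes to
# $ORACLE_HOME/dbs/init<SID>.ora on each node.
SPFILE="+data/racdb/parameterfile/spfile.274.1093539721"   # path from this post
PFILE=/tmp/initracdb1.ora                                  # stand-in for $ORACLE_HOME/dbs
printf "spfile='%s'\n" "$SPFILE" > "$PFILE"
cat "$PFILE"
```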

Startup completed normally:
[racdb1:/home/oracle]> srvctl start database -d racdb
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac1                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.racdb.db
      1        ONLINE  ONLINE       rac1                     Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             _1,STABLE
      2        ONLINE  ONLINE       rac2                     Open,HOME=/u01/app/o
                                                             racle/product/19c/db
                                                             _1,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac1                     STABLE
--------------------------------------------------------------------------------

Normal operation confirmed.

Posted by [PineTree]
ORACLE/RAC · 2023. 3. 6. 15:34

After stopping the DB instances in RAC, delete the ORACLE RAC DATABASE RESOURCE:

[racdb1:/home/oracle]> srvctl remove database -d racdb
Remove the database racdb? (y/[n]) y

Status before removal, with the instances stopped:

--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.racdb.db
      1        OFFLINE OFFLINE                               Instance Shutdown,STABLE
      2        OFFLINE OFFLINE                               Instance Shutdown,STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------

After deletion:


--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.chad
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.net1.network
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
ora.ons
               ONLINE  ONLINE       rac1                     STABLE
               ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        ONLINE  OFFLINE                               STABLE
ora.CRS.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.DATA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.FRA.dg(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       rac1                     STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       rac2                     STABLE
ora.asm(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     Started,STABLE
      2        ONLINE  ONLINE       rac2                     Started,STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.asmnet1.asmnetwork(ora.asmgroup)
      1        ONLINE  ONLINE       rac1                     STABLE
      2        ONLINE  ONLINE       rac2                     STABLE
      3        OFFLINE OFFLINE                               STABLE
ora.cvu
      1        ONLINE  ONLINE       rac2                     STABLE
ora.qosmserver
      1        ONLINE  ONLINE       rac2                     STABLE
ora.rac1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.rac2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       rac1                     STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       rac2                     STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------

Confirmed that the racdb resource has been removed.

ORACLE/RAC · 2023. 3. 6. 15:26

The old command:

crs_stat -p 

 

From 12c onward, the command changed:

[root@rac2 bin]# crsctl status resource
NAME=ora.ASMNET1LSNR_ASM.lsnr(ora.asmgroup)
TYPE=ora.asm_listener.type
TARGET=ONLINE        , ONLINE        , ONLINE
STATE=ONLINE on rac1, ONLINE on rac2, OFFLINE

NAME=ora.CRS.dg(ora.asmgroup)
TYPE=ora.diskgroup.type
TARGET=ONLINE        , ONLINE        , OFFLINE
STATE=ONLINE on rac1, ONLINE on rac2, OFFLINE

NAME=ora.DATA.dg(ora.asmgroup)
TYPE=ora.diskgroup.type
TARGET=ONLINE        , ONLINE        , OFFLINE
STATE=ONLINE on rac1, ONLINE on rac2, OFFLINE

NAME=ora.FRA.dg(ora.asmgroup)
TYPE=ora.diskgroup.type
TARGET=ONLINE        , ONLINE        , OFFLINE
STATE=ONLINE on rac1, ONLINE on rac2, OFFLINE

NAME=ora.LISTENER.lsnr
TYPE=ora.listener.type
TARGET=ONLINE        , ONLINE
STATE=ONLINE on rac1, ONLINE on rac2

NAME=ora.LISTENER_SCAN1.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.LISTENER_SCAN2.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.LISTENER_SCAN3.lsnr
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.asm(ora.asmgroup)
TYPE=ora.asm.type
TARGET=ONLINE        , ONLINE        , OFFLINE
STATE=ONLINE on rac1, ONLINE on rac2, OFFLINE

NAME=ora.asmnet1.asmnetwork(ora.asmgroup)
TYPE=ora.asm_network.type
TARGET=ONLINE        , ONLINE        , OFFLINE
STATE=ONLINE on rac1, ONLINE on rac2, OFFLINE

NAME=ora.chad
TYPE=ora.chad.type
TARGET=ONLINE        , ONLINE
STATE=ONLINE on rac1, ONLINE on rac2

NAME=ora.cvu
TYPE=ora.cvu.type
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.net1.network
TYPE=ora.network.type
TARGET=ONLINE        , ONLINE
STATE=ONLINE on rac1, ONLINE on rac2

NAME=ora.ons
TYPE=ora.ons.type
TARGET=ONLINE        , ONLINE
STATE=ONLINE on rac1, ONLINE on rac2

NAME=ora.qosmserver
TYPE=ora.qosmserver.type
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.rac1.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.rac2.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.racdb.db
TYPE=ora.database.type
TARGET=ONLINE , ONLINE
STATE=OFFLINE, OFFLINE

NAME=ora.scan1.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on rac1

NAME=ora.scan2.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on rac2

NAME=ora.scan3.vip
TYPE=ora.scan_vip.type
TARGET=ONLINE
STATE=ONLINE on rac2
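The full listing can also be narrowed down to a single resource. A sketch of two common variants (requires a live cluster; the `-w` flag takes a filter expression):

```shell
# Show only the database resource, in tabular form
crsctl stat res ora.racdb.db -t

# Or select resources by attribute with a -w filter expression
crsctl stat res -w "TYPE = ora.database.type" -t
```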

ORACLE/RAC · 2019. 10. 8. 11:03

How to re-register the DB RESOURCE.

Even with the database up, deleting and re-registering only the CRS RESOURCE does not affect the running instances.

1. Check the existing configuration

srvctl config database -d unique_name

2. Remove the existing DB resource

srvctl remove database -d unique_name -f
→ With -f, the resource is removed even while the database is running; the currently running instances are not affected.

3. Re-register the resources

a. DB RESOURCE: srvctl add database -d unique_name -o oracle_home_path -spfile spfile_path -pwfile pwfile_path
→ Specify the values you saw earlier with srvctl config database.

b. Instance resources: srvctl add instance -d unique_name -i instance_name -n node_name (for nodes 1 and 2)

4. Confirm the resource registration with crsctl stat res -t

5. srvctl start database -d unique_name
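Put together as one pass, the steps above look like this. This is a sketch only — the unique name, Oracle home, instance and node names below are example values; substitute the ones from your own `srvctl config database` output:

```shell
DB=racdb
OH=/u01/app/oracle/product/19c/db_1

srvctl config database -d $DB                   # 1. record the existing settings first
srvctl remove database -d $DB -f                # 2. remove (-f: even while running)
srvctl add database -d $DB -o $OH               # 3a. re-register the DB resource
srvctl add instance -d $DB -i ${DB}1 -n rac1    # 3b. instance resources
srvctl add instance -d $DB -i ${DB}2 -n rac2
crsctl stat res -t                              # 4. confirm registration
srvctl start database -d $DB                    # 5. start the database
```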

 

Thank you.


ORACLE/RAC · 2016. 6. 10. 13:29


When bundling the engine off to another server to bring it up as a single instance, I tarred it up, copied it over, followed the steps below only up to the relink step, then created the database with dbca, and it worked fine.


My last post about turning RAC on/off is basically about converting/migrating Oracle RAC to a single instance. I tried to find a way to do this on Google, but there seem to be no results on it. You can find thousands of articles on converting a single instance to Oracle RAC, but not the other way around. Why?

You'll find a document on Metalink with step-by-step instructions on "How to convert a
single instance Database to RAC", but not the other way round.
The analyst's response from Oracle is: "We cannot convert a RAC database into a single
instance database; that is the reason why you didn't find any steps.
It is not supported."

Why does Oracle say it is not supported? It makes no sense that you cannot convert from a RAC to a non-RAC environment, unless Oracle does not want its customers to move away from RAC.
Anyway… it's all about $$$$$.
So, based on my knowledge of how Oracle RAC works, I've successfully done it.


This is based on Oracle 10G Release 2 and assumes:

1. Oracle RAC running with cluster file system
2. You have basic knowledge about Oracle RAC


Test Server:
OS : Red Hat Enterprise Linux Server release 5.4
Database Version : 10.2.0.4
File system: OCFS2


1. Stop database and CRS on both node

$ srvctl stop database -d mydb
# crsctl stop crs

2. Turn Off RAC

With CRS down, startup fails while the oracle binary is still linked with the RAC option:

SQL> startup
ORA-29702: error occurred in Cluster Group Service operation


Relink with the RAC OFF.
$ cd $ORACLE_HOME/rdbms/lib
$ /usr/ccs/bin/make -f ins_rdbms.mk rac_off
Relinking oracle
$ make -f ins_rdbms.mk ioracle
## OR, both work fine
$ cd $ORACLE_HOME/bin
$ relink oracle


If an ASM instance exists, run the commands below as root
# /oracle/product/10.2.0/db/bin/localconfig delete
# /oracle/product/10.2.0/db/bin/localconfig add

3. Parameter (pfile/spfile) & database changes

SQL> startup
SQL> alter database disable thread 2;
SQL> alter system set remote_listener='';


3a. Remove unwanted logfile
SQL> select thread#, group# from v$log;
SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;
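Groups 3 and 4 are just this example's thread-2 groups; before dropping, confirm which groups actually belong to thread 2. A sketch via sqlplus (requires a running instance):

```shell
# List redo log groups per thread; drop only the thread-2 groups
sqlplus -S / as sysdba <<'SQL'
select thread#, group#, status from v$log order by thread#, group#;
SQL
```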


3b. Remove unwanted tablespace
SQL> drop tablespace UNDOTBS2 including contents and datafiles;


3c. Rename instance name.
SQL> alter system set instance_name=<new_name> scope=spfile;
SQL> shutdown immediate
SQL> startup
- Change your ORACLE_SID environment variable to the new name
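For example (mydb is a placeholder for whatever instance name was set in step 3c):

```shell
# Point the shell environment at the renamed single-instance SID
export ORACLE_SID=mydb
echo "ORACLE_SID is now $ORACLE_SID"

# Also update the /etc/oratab entry so oraenv/dbstart pick it up, e.g.:
#   mydb:/oracle/product/10.2.0/db:Y
```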


4. Run $ORA_CRS_HOME/install/rootdelete.sh on both nodes
- This stops and removes all CRS startup-related files

5. Remove the $ORA_CRS_HOME binaries using the Clusterware OUI installer
- Ignore any errors if the 2nd node is already down
- rm -rf $ORA_CRS_HOME


6. Modify listener file
$ vi $ORACLE_HOME/network/admin/listener.ora


6a. Modify tnsname file
$ vi $ORACLE_HOME/network/admin/tnsnames.ora

ORACLE/RAC · 2015. 9. 6. 18:35


In this Document

Purpose
Scope
Details

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.2.0.1 to 12.1.0.1 [Release 10.2 to 12.1]
IBM AIX on POWER Systems (64-bit)

PURPOSE

AIX GPFS certification information on RAC

SCOPE

IBM AIX on POWER Systems (64-bit)
IBM AIX Based Systems (64-bit)
Oracle Database Server - Enterprise Edition - Version: 10gR2, 11gR2, 12cR1
AIX 7.1, 6.1 and 5.3 Systems (64-bit)
GPFS Version 4.1, 3.5, 3.4 and 3.3.
IBM Spectrum Scale 4.1.1.
This document contains information specific to GPFS and IBM Spectrum Scale on AIX with Oracle RAC. For general
AIX information, refer to the My Oracle Support, AIX note (282036.1).

 

DETAILS

NEW
Oracle certification of GPFS 4.1 includes IBM Spectrum Scale 4.1.1+ with APAR IV76383

CURRENT
For Oracle RAC 12cR1, IBM Spectrum Scale 4.1.1 is certified with AIX 7.1 and AIX 6.1.
Status of GPFS 4.1 certifications with Oracle RAC:
For Oracle RAC 11gR2, GPFS 4.1 is certified with AIX 7.1 and AIX 6.1.
Status of GPFS 3.5 certifications with Oracle RAC:
For Oracle RAC 12cR1: GPFS 3.5 is certified with AIX 7.1 and AIX 6.1.
For Oracle RAC 11gR2: GPFS 3.5 is certified with AIX 7.1 and AIX 6.1.
Status of GPFS 3.4 certifications with Oracle RAC:
For Oracle RAC 12cR1: GPFS 3.4 is certified with AIX 7.1 and 6.1.
For Oracle RAC 11gR2: GPFS 3.4 is certified with AIX 7.1, 6.1, and 5.3.
For Oracle RAC 10gR2: GPFS 3.4 is certified with AIX 5.3 and AIX 6.1.
Status of GPFS 3.3 certifications with Oracle RAC:
For Oracle RAC 11gR2, 11gR1, and 10gR2: GPFS 3.3 is certified with AIX 5.3 and 6.1.
For Oracle RAC 11gR2: GPFS 3.3 is certified with AIX 7.1.
Status of GPFS 3.2 certifications with Oracle RAC:
For Oracle RAC 11gR2, 11gR1, and 10gR2: GPFS 3.2 is out of support and has been removed from this document.

 

Please see “Software Requirements” section of the attachment to determine the minimum certified levels of the software products involved.

 

Database - RAC/Scalability Community
To discuss this topic further with Oracle experts and industry peers, we encourage you to review, join or start a discussion in the My Oracle Support Database - RAC/Scalability Community

 


Posted by [PineTree]
ORACLE/RAC2014. 8. 15. 13:29

Source: http://fdsblog.tistory.com/entry/Convert-Oracle-RAC-to-single-instance

Looking through Oracle resources, there is plenty of well-organized material on converting a single instance to RAC, but material on converting from RAC back to a single instance is hard to find.
The article below sums it up well.


My last post about turning RAC on/off was essentially about converting/migrating Oracle RAC to a single instance. I tried to find a way to do this on Google, but there seem to be no results on the subject. You can find thousands of articles on converting a single instance to Oracle RAC, but not the other way around. Why?

You'll find a document on Metalink with step-by-step instructions on "How to convert a
single instance Database to RAC", but not the other way round.
The analyst's response from Oracle is: "We cannot convert a RAC database into a single
instance database; that is the reason why you didn't find any steps.
It is not supported."


Why does Oracle say it is not supported? It makes no sense that you cannot convert from a RAC to a non-RAC environment, unless Oracle does not want its customers to move away from RAC.
Anyway… it's all about $$$$$.
So, based on my knowledge of how Oracle RAC works, I've successfully done it.


This is based on Oracle 10g Release 2 and assumes:

1. Oracle RAC running with cluster file system
2. You have basic knowledge about Oracle RAC



Test server:

OS : Red Hat Enterprise Linux Server release 5.4
Database Version : 10.2.0.4
File system: OCFS2



1. Stop the database and CRS on both nodes

$ srvctl stop database -d mydb
# crsctl stop crs


2. Turn Off RAC

SQL> startup
ORA-29702: error occurred in Cluster Group Service operation



Relink the Oracle binary with RAC off:

$ cd $ORACLE_HOME/rdbms/lib
$ /usr/ccs/bin/make -f ins_rdbms.mk rac_off
Relinking oracle
$ make -f ins_rdbms.mk ioracle
## OR , both working fine
$ cd $ORACLE_HOME/bin
$ relink oracle



If an ASM instance exists, run the commands below as root:
# /oracle/product/10.2.0/db/bin/localconfig delete
# /oracle/product/10.2.0/db/bin/localconfig add


3. Parameter (pfile/spfile) & database changes

SQL> startup
SQL> alter database disable thread 2;
SQL> alter system set remote_listener='';



3a. Remove unwanted logfiles

SQL> select thread#, group# from v$log;
SQL> alter database drop logfile group 3;
SQL> alter database drop logfile group 4;



3b. Remove unwanted tablespace

SQL> drop tablespace UNDOTBS2 including contents and datafiles;



3c. Rename the instance.

SQL> alter system set instance_name=<new_name> scope=spfile;
SQL> shutdown immediate
SQL> startup
- Update your ORACLE_SID environment variable to match
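The renaming step can be sketched at the shell level. The SID names here (mydb1 for the old RAC instance, mydb for the new single instance) are examples only — substitute your own:

```shell
# Point the environment at the renamed instance (example SID):
export ORACLE_SID=mydb

# The /etc/oratab entry must be renamed as well. Shown here against a
# sample line; on a real system you would edit /etc/oratab itself:
echo 'mydb1:/oracle/product/10.2.0/db:N' | sed 's/^mydb1:/mydb:/'
```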



4. Run $ORA_CRS_HOME/install/rootdelete.sh on both nodes
- This stops and removes all CRS startup-related files


5. Remove the $ORA_CRS_HOME binaries using the Clusterware OUI installer
- Ignore any errors if the 2nd node is already down
- rm -rf $ORA_CRS_HOME


6. Modify listener file
$ vi $ORACLE_HOME/network/admin/listener.ora


6a. Modify tnsname file
$ vi $ORACLE_HOME/network/admin/tnsnames.ora
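After the conversion, both files should reference only the local host and listener — all VIP and remote-node entries go away. A minimal single-instance sketch; the hostname myhost and SID mydb are placeholders, not values from this article:

```
# $ORACLE_HOME/network/admin/listener.ora
LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
  )

# $ORACLE_HOME/network/admin/tnsnames.ora
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb))
  )
```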

 

Posted by [PineTree]
ORACLE/RAC2014. 7. 2. 15:36
Purpose
Scope
Details
 1. Apply the latest Patchset Update (PSU)
 2. Verify UDP buffer sizes are adequate
 3. Set DIAGWAIT to 13 on all 10.2 and 11.1 clusters
 4. Implement HugePages on Linux
 5. Implement OS Watcher and Cluster Health Monitor
 6. Follow OS configuration best practices
 7. On AIX, verify the appropriate APARs are in place to avoid excessive paging/swapping issues
 8. Apply the NUMA patch
 9. Increase the noninteractive Desktop Heap on Windows
 10. Run the RACcheck utility
 11. Configure NTP with the slewing option
References

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.2.0.1 to 11.2.0.3 [Release 10.2 to 11.2]
Information in this document applies to any platform.

PURPOSE

Many RAC instability issues are attributed to a rather short list of commonly missed best practices and/or configuration issues. The goal of this document is to provide an easy-to-find listing of these commonly missed best practices and/or configuration issues, with the hope of preventing the instability they cause.

SCOPE

The information in this document applies to all environments running RAC.

DETAILS

1. Apply the latest Patchset Update (PSU)

Platforms: all platforms

Why?: PSUs were introduced in 10.2.0.4 and later to improve CPU patch planning. They are released quarterly; each contains the latest CPU plus additional fixes considered important for the stability of production environments. For new installations, always apply the latest PSU for that release. For existing systems, plan ongoing maintenance so that the latest PSU is applied regularly. Many issues raised with Oracle Support turn out to be known bugs already fixed in the latest PSU. Note that on Windows, cumulative bundle patches are released somewhat more frequently than PSUs; the content of the latest PSU is included in the Windows bundle patches released during that PSU's quarter.

Additional information: for more on PSUs, see the following documents:
Document 854428.1 Intro to Patch Set Updates (PSU)
Document 1082394.1 11.2.0.X Grid Infrastructure PSU Known Issues
Document 756671.1 Oracle Recommended Patches -- Oracle Database
Document 161549.1 Oracle Database, Networking and Grid Agent Patches for Microsoft Platforms

2. Verify UDP buffer sizes are adequate

Platforms: all platforms except Windows

Why?: The interconnect is the lifeline of a RAC database. If the buffer space allocated to the UDP send/receive buffers is inadequate, interconnect performance suffers significantly, which in turn can endanger cluster stability.

Additional information: for more on sizing UDP buffers appropriately, see the following documents:
Document 181489.1 Tuning Inter-Instance Performance in RAC and OPS
Document 563566.1 gc lost blocks diagnostics

Note: Windows clusters use TCP for Cache Fusion traffic, so UDP buffer settings do not apply on Windows.
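On Linux, for example, the kernel limits that cap the UDP buffer sizes Oracle can request are set via sysctl. The values below are common starting points, not official minimums — size them for your platform and release per Document 181489.1:

```
# /etc/sysctl.conf (Linux; apply with: sysctl -p)
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```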

 

3. Set DIAGWAIT to 13 on all 10.2 and 11.1 clusters

Platforms: all platforms except Windows

Why?: In 10gR2 (10.2.x) and 11gR1 (11.1.x), the default margin for the OPROCD daemon is only 500 milliseconds (0.5 seconds). That margin is too small on very busy systems and can lead to false reboots under heavy load. Setting diagwait to 13 raises the OPROCD margin to 10,000 milliseconds (10 seconds), giving busy systems enough headroom to avoid false reboots. In addition, the diagwait setting allows more time for additional debugging information to be written to trace files should a reboot occur. This change cannot be shipped in a patchset because it requires a full cluster outage. Setting this value to 13 is strongly recommended for all 10gR2 and 11gR1 clusters. For new installations, change diagwait immediately after the install; for existing systems, schedule downtime and apply it. The current setting can be checked with:

# $CLUSTERWARE_HOME/bin/crsctl get css diagwait

Note: this setting does not apply to Windows, nor to 11gR2 releases (11.2.0.1 and above).

Additional information: for more on DIAGWAIT, see the following documents:
Document 559365.1  Using Diagwait as a diagnostic to get more information for diagnosing Oracle Clusterware Node evictions
Document 567730.1  Changes in Oracle Clusterware on Linux with the 10.2.0.4 Patchset
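A sketch of the change itself (10.2/11.1 only), following Document 559365.1 — the clusterware must be down on all nodes, and the commands are run as root:

```
# on each node, as root, stop CRS first:
crsctl stop crs

# then, from one node, set the new value:
crsctl set css diagwait 13 -force

# restart CRS on all nodes and verify:
crsctl start crs
crsctl get css diagwait
```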

4. Implement HugePages on Linux

Platforms: all Linux 64-bit platforms

Why?: Implementing HugePages significantly improves kernel performance on Linux, especially on systems with large amounts of memory. As a rule of thumb, systems with 12 GB of RAM or more are good candidates for HugePages, and the more RAM a system has, the more it benefits, because the kernel's work of mapping and maintaining page tables grows with the amount of memory. Enabling HugePages reduces the number of pages the kernel must manage, allowing the system to operate far more efficiently. Experience shows that without HugePages, the kernel can preempt the Oracle Clusterware or Real Application Clusters daemons, causing instance or node evictions.

Note: 11g Automatic Memory Management (AMM) is incompatible with HugePages on Linux. The best practice is to disable AMM in order to use HugePages. See Document 749851.1 for more on AMM and HugePages on Linux.

Additional information:
Document 361323.1  HugePages on Linux: What It Is... and What It Is Not...
Document 401749.1  Shell Script to Calculate Values Recommended Linux HugePages / HugeTLB Configuration
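As a rough illustration of the sizing arithmetic (the script in Document 401749.1 is the supported way to compute this): with the default 2 MB hugepage size, a hypothetical 8 GB SGA needs 4096 huge pages reserved via vm.nr_hugepages:

```shell
# assumption: a single 8 GB SGA and a 2 MB hugepage size
# (verify Hugepagesize in /proc/meminfo)
sga_bytes=$((8 * 1024 * 1024 * 1024))
page_bytes=$((2 * 1024 * 1024))
# round up to a whole number of pages
echo "vm.nr_hugepages = $(( (sga_bytes + page_bytes - 1) / page_bytes ))"
```

The computed value goes into /etc/sysctl.conf; every SGA on the host must be counted, and AMM must be disabled first.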

5. Implement OS Watcher and Cluster Health Monitor

Platforms: all platforms

Why?: While not directly a stability setting, OS Watcher and Cluster Health Monitor are extremely useful tools for identifying the state of the OS and the potential root causes of node and instance evictions. Having the right diagnostic data available from the first occurrence of a problem shortens the time needed to find the root cause, which in turn helps prevent future outages. Most third-party data-collection tools of this kind sample too infrequently (e.g., every 5 minutes or more), are difficult to interpret, or do not collect the right data. OS Watcher is a very simple, lightweight tool that collects basic OS data at 30-second intervals (the default). Cluster Health Monitor, although not available on every platform, complements OS Watcher by collecting data in real time at a finer level of detail. Keeping one or both of these utilities running at all times on every cluster node is very important for faster diagnosis and debugging of any issue.

Additional information:
Document 301137.1 OS Watcher User Guide
Document 1328466.1 Cluster Health Monitor (CHM) FAQ
Document 580513.1 How To Start OSWatcher Black Box Every System Boot (Linux specific)


6. Follow OS configuration best practices

(A white paper on memory tuning for system stability, produced jointly by Oracle and IBM)

Platforms: all AIX versions

Why?: The white paper Oracle Real Application Clusters on IBM AIX: Best practices in memory tuning and configuring for system stability is an excellent compilation of jointly tested best practices based on both vendors' combined experience. In practice, most stability problems on RAC/AIX clusters have been resolved by applying the settings recommended in this paper. AIX 6.1 ships with many of these recommendations as defaults, but the settings should be verified on every RAC cluster regardless of OS or Oracle version.

Additional information:
White paper download: http://www.oracle.com/technetwork/database/clusterware/overview/rac-aix-system-stability-131022.pdf
Document 811293.1  RAC Assurance Support Team: RAC Starter Kit and Best Practices (AIX)

7. On AIX, verify the appropriate APARs are in place to avoid excessive paging/swapping issues

Platforms: all AIX versions

Why?: Experience shows this is a very common issue affecting AIX environments. Given the nature of the problem, systems susceptible to it can end up in a complete hang. In a non-RAC environment, the system stays hung until someone intervenes; in a RAC environment, the unresponsive node leads to a node eviction.

Additional information: see Document 1088076.1 Paging Space Growth May Occur Unexpectedly on AIX Systems With 64K (medium) Pages Enabled.

Note: the versions and APAR numbers listed in that document are specific to a given Technology Level (TL). The actual APAR or fix number to apply depends on your particular TL; different TLs require different APARs. Check with IBM to confirm whether the fix is appropriate and, if not, which TL or APAR is required to obtain the specific fix.

 

8. Apply the NUMA patch

Platforms: all platforms

Why?: In the 10.2.0.4 and 11.1.0.7 RDBMS patchsets, NUMA optimizations were enabled on platforms that support NUMA (OS- and hardware-dependent). With NUMA enabled in the RDBMS code on such systems, bugs have caused database performance degradation and instability. Document 759565.1 lists the full set of symptoms/issues related to the NUMA optimizations in 10.2.0.4 and 11.1.0.7. If you are running 10.2.0.4 or 11.1.0.7, applying Patch 8199533 is strongly recommended to proactively address NUMA-related issues.

9. Increase the noninteractive Desktop Heap on Windows

Platforms: Windows platforms

Why?: The default size of the noninteractive Desktop Heap has proven insufficient on Windows clusters, resulting in application connectivity issues and general cluster instability (hangs and crashes). To address this proactively, increasing the noninteractive Desktop Heap to 1 MB is recommended. Do not increase it beyond the recommended 1 MB without involving Microsoft.

Additional information: Document 744125.1 describes how to adjust the noninteractive Desktop Heap.

10. Run the RACcheck utility

Platforms: Linux (x86 and x86_64), Solaris SPARC, and AIX (bash shell environment)

Why?: RACcheck is a RAC configuration audit tool developed to examine a wide range of important configuration settings across Real Application Clusters (RAC), Oracle Clusterware (CRS), Automatic Storage Management (ASM), and Grid Infrastructure (GI). It can be used to check the best practices and success factors defined in the RAC/Oracle Clusterware Best Practices and Starter Kit documents (see Document 810394.1) maintained by the RAC Assurance development and support teams. Customers running RAC on a platform supported by RACcheck are encouraged to use the tool to identify potential configuration issues that could affect cluster stability.

Additional information: Document 1268927.1 provides more information on RACcheck and a link for downloading the utility.

11. Configure NTP with the slewing option

Platforms: all Linux and Unix platforms

Why?: Without the slew option, NTP may step the system clock forward or backward when the time difference exceeds a certain threshold (which varies by platform). A large backward step can cause the clusterware to see missed check-ins and evict a node. For this reason, configuring NTP to slew time (speed the clock up or slow it down) so that it converges gradually is strongly recommended to avoid evictions. For how to implement NTP time slewing on each platform, see the platform-specific RAC and Oracle Clusterware Best Practices and Starter Kit documents (below).
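On Linux, for instance, slewing is enabled by adding the -x flag to the ntpd options. The file path and variable name below follow the Red Hat convention — other platforms differ, so check the Starter Kit note for yours:

```
# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```

Restart ntpd after the change. On AIX the equivalent is the -x flag on xntpd; on Solaris, slewing is configured via the slewalways setting in the NTP configuration file.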

Posted by [PineTree]
ORACLE/RAC2014. 3. 12. 17:08

Source: http://www.commit.co.kr/101

 

Single_to_RAC(Converting).pdf

 

Posted by [PineTree]
ORACLE/RAC2014. 2. 22. 12:10

Steps to Remove Node from Cluster When the Node Crashes Due to OS/Hardware Failure and cannot boot up (Doc ID 466975.1)

Modified: 16-Nov-2012   Type: HOWTO



In this Document

Goal

Fix

  Summary

  Example Configuration

  Initial Stage

  Step 1 Remove oifcfg information for the failed node

  Step 2 Remove ONS information

  Step 3 Remove resources

  Step 4 Execute rootdeletenode.sh

  Step 5 Update the Inventory

References

APPLIES TO:

Oracle Server - Enterprise Edition - Version 10.2.0.1 to 11.1.0.6 [Release 10.2 to 11.1]

Oracle Server - Standard Edition - Version 10.2.0.1 to 11.1.0.6 [Release 10.2 to 11.1]

Oracle Clusterware

Information in this document applies to any platform.



GOAL


This document is intended to provide the steps to be taken to remove a node from the Oracle cluster when the node itself is unavailable due to an OS or hardware issue that prevents it from starting up. The steps below remove such a node so that it can be added back once it is fixed.


The steps to remove a node from a cluster are already documented in the Oracle documentation at


Version Documentation Link

10gR2 http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/adddelunix.htm#BEIFDCAF

11gR1 http://download.oracle.com/docs/cd/B28359_01/rac.111/b28255/adddelclusterware.htm#BEIFDCAF

This note is different because the documentation covers the scenario where the node is accessible and the removal is a planned procedure. This note covers the scenario where the Node is unable to boot up and therefore it is not possible to run the clusterware commands from this node.


For 11gR2, refer to note 1262925.1


 


FIX


Summary


Basically, all the steps documented in the Oracle Clusterware Administration and Deployment Guide must be followed. The difference here is that we skip the steps meant to be executed on the unavailable node, and run some extra commands on a surviving node to remove the resources belonging to the node that is being removed.


Example Configuration


 All steps outlined in this document were executed on a cluster with the following configuration:


Item Value

Node Names lc2n1, lc2n2, lc2n3

Operating System Oracle Enterprise Linux 5 Update 4

Oracle Clusterware Release 10.2.0.5.0

ASM & Database Release 10.2.0.5.0

Clusterware Home /u01/app/oracle/product/10.2.0/crs ($CRS_HOME)

ASM Home /u01/app/oracle/product/10.2.0/asm

Database Home /u01/app/oracle/product/10.2.0/db_1

 Cluster Name lc2

 


 Assume that node lc2n3 is down due to a hardware failure and cannot even boot up. The plan is to remove it from the clusterware, fix the issue, and then add it back to the clusterware. In this document, we will cover the steps to remove the node from the clusterware.


Please note that for better readability instead of 'crs_stat -t' the sample script 'crsstat' from 

  Doc ID 259301.1 CRS and 10g/11.1 Real Application Clusters 

was used to query the state of the CRS resources. This script is not part of a standard CRS installation.

 


Initial Stage


At this stage, the Oracle Clusterware is up and running on nodes lc2n1 & lc2n2 (good nodes) . Node lc2n3 is down and cannot be accessed. Note that the Virtual IP of lc2n3 is running on Node 1. The rest of the lc2n3 resources are OFFLINE:


[oracle@lc2n1 ~]$ crsstat

Name                                     Target     State      Host      

-------------------------------------------------------------------------------

ora.LC2DB1.LC2DB11.inst                  ONLINE     ONLINE     lc2n1     

ora.LC2DB1.LC2DB12.inst                  ONLINE     ONLINE     lc2n2     

ora.LC2DB1.LC2DB13.inst                  ONLINE     OFFLINE              

ora.LC2DB1.LC2DB1_SRV1.LC2DB11.srv       ONLINE     ONLINE     lc2n1     

ora.LC2DB1.LC2DB1_SRV1.LC2DB12.srv       ONLINE     ONLINE     lc2n2     

ora.LC2DB1.LC2DB1_SRV1.LC2DB13.srv       ONLINE     OFFLINE              

ora.LC2DB1.LC2DB1_SRV1.cs                ONLINE     ONLINE     lc2n1     

ora.LC2DB1.db                            ONLINE     ONLINE     lc2n2     

ora.lc2n1.ASM1.asm                       ONLINE     ONLINE     lc2n1     

ora.lc2n1.LISTENER_LC2N1.lsnr            ONLINE     ONLINE     lc2n1     

ora.lc2n1.gsd                            ONLINE     ONLINE     lc2n1     

ora.lc2n1.ons                            ONLINE     ONLINE     lc2n1     

ora.lc2n1.vip                            ONLINE     ONLINE     lc2n1     

ora.lc2n2.ASM2.asm                       ONLINE     ONLINE     lc2n2     

ora.lc2n2.LISTENER_LC2N2.lsnr            ONLINE     ONLINE     lc2n2     

ora.lc2n2.gsd                            ONLINE     ONLINE     lc2n2     

ora.lc2n2.ons                            ONLINE     ONLINE     lc2n2     

ora.lc2n2.vip                            ONLINE     ONLINE     lc2n2     

ora.lc2n3.ASM3.asm                       ONLINE     OFFLINE              

ora.lc2n3.LISTENER_LC2N3.lsnr            ONLINE     OFFLINE              

ora.lc2n3.gsd                            ONLINE     OFFLINE              

ora.lc2n3.ons                            ONLINE     OFFLINE              

ora.lc2n3.vip                            ONLINE     ONLINE     lc2n1     

[oracle@lc2n1 ~]$

 


Step 1 Remove oifcfg information for the failed node


Generally most installations use the global flag of the oifcfg command and therefore they can skip this step. They can confirm this using:


[oracle@lc2n1 bin]$ $CRS_HOME/bin/oifcfg getif

eth0  192.168.56.0  global  public

eth1  192.168.57.0  global  cluster_interconnect

If the output of the command returns global as shown above, you can skip the following step (executing the command below against a global definition returns an error, as shown below).


If the output of the oifcfg getif command does not return global then use the following command


[oracle@lc2n1 bin]$ $CRS_HOME/bin/oifcfg delif -node lc2n3 

PROC-4: The cluster registry key to be operated on does not exist.

PRIF-11: cluster registry error

 


Step 2 Remove ONS information


Execute the following command to find out the remote port number to be used


[oracle@lc2n1 bin]$ cat $CRS_HOME/opmn/conf/ons.config

localport=6113 

remoteport=6200 

loglevel=3

useocr=on

and remove the information pertaining to the node to be deleted using:


[oracle@lc2n1 bin]$ $CRS_HOME/bin/racgons remove_config lc2n3:6200

 


Step 3 Remove resources


In this step, the resources that were defined on this node have to be removed. These include the database instance, ASM, listener, and nodeapps resources. A list of these can be acquired by running the crsstat (crs_stat -t) command from any node.


[oracle@lc2n1 ~]$ crsstat |grep OFFLINE

ora.LC2DB1.LC2DB13.inst                  ONLINE     OFFLINE              

ora.LC2DB1.LC2DB1_SRV1.LC2DB13.srv       ONLINE     OFFLINE              

ora.lc2n3.ASM3.asm                       ONLINE     OFFLINE              

ora.lc2n3.LISTENER_LC2N3.lsnr            ONLINE     OFFLINE              

ora.lc2n3.gsd                            ONLINE     OFFLINE              

ora.lc2n3.ons                            ONLINE     OFFLINE             

 Before removing any resource it is recommended to take a backup of the OCR:


[root@lc2n1 ~]# cd $CRS_HOME/cdata/lc2

[root@lc2n1 lc2]# $CRS_HOME/bin/ocrconfig -export ocr_before_node_removal.exp

[root@lc2n1 lc2]# ls -l ocr_before_node_removal.exp

-rw-r--r-- 1 root root 151946 Nov 15 15:24 ocr_before_node_removal.exp

 Use 'srvctl' from the database home to delete the database instance on node 3:


[oracle@lc2n1 ~]$ . oraenv

ORACLE_SID = [oracle] ? LC2DB1

[oracle@lc2n1 ~]$ $ORACLE_HOME/bin/srvctl remove instance -d LC2DB1 -i LC2DB13

Remove instance LC2DB13 from the database LC2DB1? (y/[n]) y

 Use 'srvctl' from the ASM home to delete the ASM instance on node 3:


[oracle@lc2n1 ~]$ . oraenv

ORACLE_SID = [oracle] ? +ASM1

[oracle@lc2n1 ~]$ $ORACLE_HOME/bin/srvctl remove asm -n lc2n3

Next remove the listener resource.


Please note that there is no 'srvctl remove listener' subcommand prior to 11.1, so this command will not work in 10.2. Using 'netca' to delete the listener from a down node is not an option either, since netca would need to remove the listener configuration from that node's listener.ora.

10.2 only:


The only way to remove the listener resources is to use the command 'crs_unregister', please use this command only in this particular scenario:


[oracle@lc2n1 lc2]$ $CRS_HOME/bin/crs_unregister ora.lc2n3.LISTENER_LC2N3.lsnr

 11.1 only:


 Set the environment to the home from which the listener runs (ASM or database):


[oracle@lc2n1 ~]$ . oraenv

ORACLE_SID = [oracle] ? +ASM1

[oracle@lc2n1 lc2]$ $ORACLE_HOME/bin/srvctl remove listener -n lc2n3 

  As user root stop the nodeapps resources:


[root@lc2n1 oracle]# $CRS_HOME/bin/srvctl stop nodeapps -n lc2n3

[root@lc2n1 oracle]# crsstat |grep OFFLINE

ora.lc2n3.LISTENER_LC2N3.lsnr            OFFLINE    OFFLINE              

ora.lc2n3.gsd                            OFFLINE    OFFLINE              

ora.lc2n3.ons                            OFFLINE    OFFLINE              

ora.lc2n3.vip                            OFFLINE    OFFLINE        

 Now remove them:


[root@lc2n1 oracle]#  $CRS_HOME/bin/srvctl remove nodeapps -n lc2n3

Please confirm that you intend to remove the node-level applications on node lc2n3 (y/[n]) y

 At this point all resources from the bad node should be gone:


[oracle@lc2n1 ~]$ crsstat 

Name                                     Target     State      Host      

-------------------------------------------------------------------------------

ora.LC2DB1.LC2DB11.inst                  ONLINE     ONLINE     lc2n1     

ora.LC2DB1.LC2DB12.inst                  ONLINE     ONLINE     lc2n2     

ora.LC2DB1.LC2DB1_SRV1.LC2DB11.srv       ONLINE     ONLINE     lc2n1     

ora.LC2DB1.LC2DB1_SRV1.LC2DB12.srv       ONLINE     ONLINE     lc2n2     

ora.LC2DB1.LC2DB1_SRV1.cs                ONLINE     ONLINE     lc2n1     

ora.LC2DB1.db                            ONLINE     ONLINE     lc2n2     

ora.lc2n1.ASM1.asm                       ONLINE     ONLINE     lc2n1     

ora.lc2n1.LISTENER_LC2N1.lsnr            ONLINE     ONLINE     lc2n1     

ora.lc2n1.gsd                            ONLINE     ONLINE     lc2n1     

ora.lc2n1.ons                            ONLINE     ONLINE     lc2n1     

ora.lc2n1.vip                            ONLINE     ONLINE     lc2n1     

ora.lc2n2.ASM2.asm                       ONLINE     ONLINE     lc2n2     

ora.lc2n2.LISTENER_LC2N2.lsnr            ONLINE     ONLINE     lc2n2     

ora.lc2n2.gsd                            ONLINE     ONLINE     lc2n2     

ora.lc2n2.ons                            ONLINE     ONLINE     lc2n2     

ora.lc2n2.vip                            ONLINE     ONLINE     lc2n2  

 


Step 4 Execute rootdeletenode.sh


From the node that you are not deleting execute as root the following command which will help find out the node number of the node that you want to delete


[oracle@lc2n1 ~]$ $CRS_HOME//bin/olsnodes -n

lc2n1   1

lc2n2   2

lc2n3   3

this number can be passed to the rootdeletenode.sh command which is to be executed as root from any node which is going to remain in the cluster.


[root@lc2n1 ~]# cd $CRS_HOME/install

[root@lc2n1 install]# ./rootdeletenode.sh lc2n3,3

CRS-0210: Could not find resource 'ora.lc2n3.ons'.

CRS-0210: Could not find resource 'ora.lc2n3.vip'.

CRS-0210: Could not find resource 'ora.lc2n3.gsd'.

CRS-0210: Could not find resource ora.lc2n3.vip.

CRS nodeapps are deleted successfully

clscfg: EXISTING configuration version 3 detected.

clscfg: version 3 is 10G Release 2.

Successfully deleted 14 values from OCR.

Key SYSTEM.css.interfaces.nodelc2n3 marked for deletion is not there. Ignoring.

Successfully deleted 5 keys from OCR.

Node deletion operation successful.

'lc2n3,3' deleted successfully

[root@lc2n1 install]# $CRS_HOME/bin/olsnodes -n

lc2n1   1

lc2n2   2

 


Step 5 Update the Inventory


From the node which is going to remain in the cluster, run the following command as the owner of the CRS_HOME. The argument passed to CLUSTER_NODES is a comma-separated list of the node names that will remain in the cluster. This step needs to be performed once per home (Clusterware, ASM, and RDBMS homes).


[oracle@lc2n1 install]$ $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/10.2.0/crs "CLUSTER_NODES={lc2n1,lc2n2}" CRS=TRUE  

Starting Oracle Universal Installer...


No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oracle/oraInventory

'UpdateNodeList' was successful.


[oracle@lc2n1 install]$ $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/10.2.0/asm "CLUSTER_NODES={lc2n1,lc2n2}"

Starting Oracle Universal Installer...


No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oracle/oraInventory

'UpdateNodeList' was successful.

[oracle@lc2n1 install]$ $CRS_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 "CLUSTER_NODES={lc2n1,lc2n2}"

Starting Oracle Universal Installer...


No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /u01/app/oracle/oraInventory

'UpdateNodeList' was successful.

Posted by [PineTree]