A recent Oracle 11g RAC installation ran into the INS-30507 error: when the Grid Infrastructure installer reached the ASM disk group creation step, it could not find any candidate disks. I googled through countless installation guides without finding a single clue. If you run into this problem, read on.
1. Error message and explanation
SEVERE: [FATAL] [INS-30507] Empty ASM disk group.
CAUSE: No disks were selected from a managed ASM disk group.
ACTION: Select appropriate number of disks from a managed ASM disk group.
Oracle's official explanation is frustratingly terse; beyond these three lines it provides no useful information at all.
2. Installation environment
Operating system (Oracle Enterprise Linux 5.5, 32-bit)
[root@node1 ~]# cat /etc/issue
Enterprise Linux Enterprise Linux Server release 5.5 (Carthage)
Kernel \r on an \m
Oracle version
Oracle 11g R2 RAC (32-bit)
Host system
Win7 64-bit + VMware Server 2.0.2
3. ASM disk information and permissions
[grid@node1 ~]$ oracleasm listdisks
ASM_DATA
ASM_FRA
OCR_VOTE
[grid@node2 ~]$ oracleasm listdisks
ASM_DATA
ASM_FRA
OCR_VOTE
# The listing below shows that the owner, group, and permissions of the ASM disks are all correct
[grid@node1 disks]$ ls -hltr
total 0
brw-rw---- 1 grid asmadmin 8, 17 Dec 11 11:49 OCR_VOTE
brw-rw---- 1 grid asmadmin 8, 33 Dec 11 11:49 ASM_DATA
brw-rw---- 1 grid asmadmin 8, 49 Dec 11 11:49 ASM_FRA
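If there is any doubt about which block device each ASMLib label maps to, the mapping can be cross-checked with oracleasm querydisk. A minimal sketch, run as root; the -p option and the disk names simply mirror the labels listed above:
# Optional cross-check: map each ASMLib label back to its block device (run as root)
[root@node1 ~]# oracleasm querydisk -p OCR_VOTE
[root@node1 ~]# oracleasm querydisk -p ASM_DATA
[root@node1 ~]# oracleasm querydisk -p ASM_FRA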
# The cvuqdisk package is installed on both nodes
[grid@node1 ~]$ CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[grid@node1 ~]$ rpm -qa | grep cvuqdisk
cvuqdisk-1.0.7-1
[grid@node2 ~]$ CVUQDISK_GRP=oinstall; export CVUQDISK_GRP
[grid@node2 ~]$ rpm -qa | grep cvuqdisk
cvuqdisk-1.0.7-1
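For completeness, this is roughly how cvuqdisk is put in place on each node. A minimal sketch run as root; the staging path /home/grid/grid/rpm is only an assumed example, the rpm ships in the rpm/ directory of the grid installation media:
# Install cvuqdisk from the grid media (run as root on every node; the path is illustrative)
[root@node1 ~]# export CVUQDISK_GRP=oinstall
[root@node1 ~]# rpm -ivh /home/grid/grid/rpm/cvuqdisk-1.0.7-1.rpm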
4. CVU verification results
# Pre-installation verification
[grid@node1 grid]$ ./runcluvfy.sh stage -pre crsinst -fixup -n node1,node2 -verbose
Performing pre-checks for cluster services setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Comment
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed
node1 passed
Verification of the hosts config file successful
Interface information for node "node2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.7.72 192.168.7.0 0.0.0.0 192.168.7.254 00:0C:29:BE:13:D3 1500
eth1 10.10.7.72 10.10.7.0 0.0.0.0 192.168.7.254 00:0C:29:BE:13:DD 1500
Interface information for node "node1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.7.71 192.168.7.0 0.0.0.0 192.168.7.254 00:0C:29:ED:CF:A9 1500
eth1 10.10.7.71 10.10.7.0 0.0.0.0 192.168.7.254 00:0C:29:ED:CF:B3 1500
Check: Node connectivity of subnet "192.168.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth0 node1:eth0 yes
Result: Node connectivity passed for subnet "192.168.7.0" with node(s) node2,node1
Check: TCP connectivity of subnet "192.168.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:192.168.7.71 node2:192.168.7.72 passed
Result: TCP connectivity check passed for subnet "192.168.7.0"
Check: Node connectivity of subnet "10.10.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth1 node1:eth1 yes
Result: Node connectivity passed for subnet "10.10.7.0" with node(s) node2,node1
Check: TCP connectivity of subnet "10.10.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:10.10.7.71 node2:10.10.7.72 passed
Result: TCP connectivity check passed for subnet "10.10.7.0"
Interfaces found on subnet "192.168.7.0" that are likely candidates for VIP are:
node2 eth0:192.168.7.72
node1 eth0:192.168.7.71
Interfaces found on subnet "10.10.7.0" that are likely candidates for a private interconnect are:
node2 eth1:10.10.7.72
node1 eth1:10.10.7.71
Result: Node connectivity check passed
Check: Total memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 1.98GB (2075488.0KB) 1.5GB (1572864.0KB) passed
node1 1.98GB (2075488.0KB) 1.5GB (1572864.0KB) passed
Result: Total memory check passed
Check: Available memory
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 1.92GB (2010144.0KB) 50MB (51200.0KB) passed
node1 1.79GB (1874612.0KB) 50MB (51200.0KB) passed
Result: Available memory check passed
Check: Swap space
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 2.98GB (3120464.0KB) 2.97GB (3113232.0KB) passed
node1 3GB (3145040.0KB) 2.97GB (3113232.0KB) passed
Result: Swap space check passed
Check: Free disk space for "node2:/tmp"
Path Node Name Mount point Available Required Comment
---------------- ------------ ------------ ------------ ------------ ------------
/tmp node2 / 12.79GB 1GB passed
Result: Free disk space check passed for "node2:/tmp"
Check: Free disk space for "node1:/tmp"
Path Node Name Mount point Available Required Comment
---------------- ------------ ------------ ------------ ------------ ------------
/tmp node1 / 7.87GB 1GB passed
Result: Free disk space check passed for "node1:/tmp"
Check: User existence for "grid"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: User existence check passed for "grid"
Check: Group existence for "oinstall"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: Group existence check passed for "oinstall"
Check: Group existence for "dba"
Node Name Status Comment
------------ ------------------------ ------------------------
node2 exists passed
node1 exists passed
Result: Group existence check passed for "dba"
Check: Membership of user "grid" in group "oinstall" [as Primary]
Node Name User Exists Group Exists User in Group Primary Comment
---------------- ------------ ------------ ------------ ------------ ------------
node2 yes yes yes yes passed
node1 yes yes yes yes passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed
Check: Membership of user "grid" in group "dba"
Node Name User Exists Group Exists User in Group Comment
---------------- ------------ ------------ ------------ ----------------
node2 yes yes yes passed
node1 yes yes yes passed
Result: Membership check for user "grid" in group "dba" passed
Check: Run level
Node Name run level Required Comment
------------ ------------------------ ------------------------ ----------
node2 5 3,5 passed
node1 5 3,5 passed
Result: Run level check passed
Check: Hard limits for "maximum open file descriptors"
Node Name Type Available Required Comment
---------------- ------------ ------------ ------------ ----------------
node2 hard 65536 65536 passed
node1 hard 65536 65536 passed
Result: Hard limits check passed for "maximum open file descriptors"
Check: Soft limits for "maximum open file descriptors"
Node Name Type Available Required Comment
---------------- ------------ ------------ ------------ ----------------
node2 soft 1024 1024 passed
node1 soft 1024 1024 passed
Result: Soft limits check passed for "maximum open file descriptors"
Check: Hard limits for "maximum user processes"
Node Name Type Available Required Comment
---------------- ------------ ------------ ------------ ----------------
node2 hard 16384 16384 passed
node1 hard 16384 16384 passed
Result: Hard limits check passed for "maximum user processes"
Check: Soft limits for "maximum user processes"
Node Name Type Available Required Comment
---------------- ------------ ------------ ------------ ----------------
node2 soft 2047 2047 passed
node1 soft 2047 2047 passed
Result: Soft limits check passed for "maximum user processes"
Check: System architecture
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 i686 x86 passed
node1 i686 x86 passed
Result: System architecture check passed
Check: Kernel version
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 2.6.18-194.el5PAE 2.6.18 passed
node1 2.6.18-194.el5PAE 2.6.18 passed
Result: Kernel version check passed
Check: Kernel parameter for "semmsl"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 250 250 passed
node1 250 250 passed
Result: Kernel parameter check passed for "semmsl"
Check: Kernel parameter for "semmns"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 32000 32000 passed
node1 32000 32000 passed
Result: Kernel parameter check passed for "semmns"
Check: Kernel parameter for "semopm"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 100 100 passed
node1 100 100 passed
Result: Kernel parameter check passed for "semopm"
Check: Kernel parameter for "semmni"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 142 128 passed
node1 142 128 passed
Result: Kernel parameter check passed for "semmni"
Check: Kernel parameter for "shmmax"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 4294967295 536870912 passed
node1 4294967295 536870912 passed
Result: Kernel parameter check passed for "shmmax"
Check: Kernel parameter for "shmmni"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 4096 4096 passed
node1 4096 4096 passed
Result: Kernel parameter check passed for "shmmni"
Check: Kernel parameter for "shmall"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 1073741824 2097152 passed
node1 1073741824 2097152 passed
Result: Kernel parameter check passed for "shmall"
Check: Kernel parameter for "file-max"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 6815744 6815744 passed
node1 6815744 6815744 passed
Result: Kernel parameter check passed for "file-max"
Check: Kernel parameter for "ip_local_port_range"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 between 9000 & 65500 between 9000 & 65500 passed
node1 between 9000 & 65500 between 9000 & 65500 passed
Result: Kernel parameter check passed for "ip_local_port_range"
Check: Kernel parameter for "rmem_default"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 262144 262144 passed
node1 262144 262144 passed
Result: Kernel parameter check passed for "rmem_default"
Check: Kernel parameter for "rmem_max"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 4194304 4194304 passed
node1 4194304 4194304 passed
Result: Kernel parameter check passed for "rmem_max"
Check: Kernel parameter for "wmem_default"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 262144 262144 passed
node1 262144 262144 passed
Result: Kernel parameter check passed for "wmem_default"
Check: Kernel parameter for "wmem_max"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 1048576 1048576 passed
node1 1048576 1048576 passed
Result: Kernel parameter check passed for "wmem_max"
Check: Kernel parameter for "aio-max-nr"
Node Name Configured Required Comment
------------ ------------------------ ------------------------ ----------
node2 3145728 1048576 passed
node1 3145728 1048576 passed
Result: Kernel parameter check passed for "aio-max-nr"
Check: Package existence for "make-3.81"
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 make-3.81-3.el5 make-3.81 passed
node1 make-3.81-3.el5 make-3.81 passed
Result: Package existence check passed for "make-3.81"
.................# Package existence checks omitted ................
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Check: Current group ID
Result: Current group ID check passed
Checking Core file name pattern consistency...
Core file name pattern consistency check passed.
Checking to make sure user "grid" is not in "root" group
Node Name Status Comment
------------ ------------------------ ------------------------
node2 does not exist passed
node1 does not exist passed
Result: User "grid" is not part of "root" group. Check passed
Check default user file creation mask
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
node2 0022 0022 passed
node1 0022 0022 passed
Result: Default user file creation mask check passed
Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
The NTP configuration file "/etc/ntp.conf" is available on all nodes
NTP Configuration file check passed
Checking daemon liveness...
Check: Liveness for "ntpd"
Node Name Running?
------------------------------------ ------------------------
node2 yes
node1 yes
Result: Liveness check passed for "ntpd"
Checking NTP daemon command line for slewing option "-x"
Check: NTP daemon command line
Node Name Slewing Option Set?
------------------------------------ ------------------------
node2 yes
node1 yes
Result:
NTP daemon slewing option check passed
Checking NTP daemon's boot time configuration, in file "/etc/sysconfig/ntpd", for slewing option "-x"
Check: NTP daemon's boot time configuration
Node Name Slewing Option Set?
------------------------------------ ------------------------
node2 yes
node1 yes
Result:
NTP daemon's boot time configuration check for slewing option passed
NTP common Time Server Check started...
NTP Time Server ".LOCL." is common to all nodes on which the NTP daemon is running
Check of common NTP Time Server passed
Clock time offset check from NTP Time Server started...
Checking on nodes "[node2, node1]"...
Check: Clock time offset from NTP Time Server
Time Server: .LOCL.
Time Offset Limit: 1000.0 msecs
Node Name Time Offset Status
------------ ------------------------ ------------------------
node2 0.0 passed
node1 0.0 passed
Time Server ".LOCL." has time offsets that are within permissible limits for nodes "[node2, node1]".
Clock time offset check passed
Result: Clock synchronization check using Network Time Protocol(NTP) passed
Pre-check for cluster services setup was successful.
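Because INS-30507 is about candidate disks, the shared storage accessibility component check is also worth a run. Note that cluvfy's ssa check does not necessarily recognize ASMLib-labelled devices, so a warning here is not conclusive; the command below is an illustrative sketch:
# Optional: shared storage accessibility check across both nodes
[grid@node1 grid]$ ./runcluvfy.sh comp ssa -n node1,node2 -verbose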
# Post-check of hardware and operating system: all checks passed
[grid@node1 grid]$ ./runcluvfy.sh stage -post hwos -n node1,node2 -verbose
Performing post-checks for hardware and operating system setup
Checking node reachability...
Check: Node reachability from node "node1"
Destination Node Reachable?
------------------------------------ ------------------------
node1 yes
node2 yes
Result: Node reachability check passed from node "node1"
Checking user equivalence...
Check: User equivalence for user "grid"
Node Name Comment
------------------------------------ ------------------------
node2 passed
node1 passed
Result: User equivalence check passed for user "grid"
Checking node connectivity...
Checking hosts config file...
Node Name Status Comment
------------ ------------------------ ------------------------
node2 passed
node1 passed
Verification of the hosts config file successful
Interface information for node "node2"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.7.72 192.168.7.0 0.0.0.0 192.168.7.254 00:0C:29:BE:13:D3 1500
eth1 10.10.7.72 10.10.7.0 0.0.0.0 192.168.7.254 00:0C:29:BE:13:DD 1500
Interface information for node "node1"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
eth0 192.168.7.71 192.168.7.0 0.0.0.0 192.168.7.254 00:0C:29:ED:CF:A9 1500
eth1 10.10.7.71 10.10.7.0 0.0.0.0 192.168.7.254 00:0C:29:ED:CF:B3 1500
Check: Node connectivity of subnet "192.168.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth0 node1:eth0 yes
Result: Node connectivity passed for subnet "192.168.7.0" with node(s) node2,node1
Check: TCP connectivity of subnet "192.168.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:192.168.7.71 node2:192.168.7.72 passed
Result: TCP connectivity check passed for subnet "192.168.7.0"
Check: Node connectivity of subnet "10.10.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node2:eth1 node1:eth1 yes
Result: Node connectivity passed for subnet "10.10.7.0" with node(s) node2,node1
Check: TCP connectivity of subnet "10.10.7.0"
Source Destination Connected?
------------------------------ ------------------------------ ----------------
node1:10.10.7.71 node2:10.10.7.72 passed
Result: TCP connectivity check passed for subnet "10.10.7.0"
Interfaces found on subnet "192.168.7.0" that are likely candidates for VIP are:
node2 eth0:192.168.7.72
node1 eth0:192.168.7.71
Interfaces found on subnet "10.10.7.0" that are likely candidates for a private interconnect are:
node2 eth1:10.10.7.72
node1 eth1:10.10.7.71
Result: Node connectivity check passed
Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed
Post-check for hardware and operating system setup was successful.
5. Analysis and solution
a. Judging from all of the checks above, the configuration itself should be fine; if anything were wrong, at least some of the checks would have failed.
b. The owner, group, and permission settings of the ASM disk devices can likewise be ruled out.
c. None of the many 11g RAC installation guides I googled mention having to adjust the disk discovery path. Changing the discovery path to ORCL: did not help either (I recall this was needed with 10g).
d. Reconfiguring the ASM driver and recreating the disks with ASMLib also led nowhere.
e. Finally, changing the disk discovery path in the installer to /dev/oracleasm/disks/* did the trick, and the installation could proceed.
f. The SCANORDER and SCANEXCLUDE settings can also be adjusted in the file /etc/sysconfig/oracleasm (see the sketch after this list).
g. Many thanks to forum user 1x1xqq_cu for the help. Original thread: http://www.itpub.net/forum.php?mod=viewthread&tid=1747674&page=1#pid20722586
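For reference, a minimal sketch of the scan-related entries in /etc/sysconfig/oracleasm; the device prefixes used below (dm, sd) are only illustrative assumptions and must match your own SCSI/multipath layout, and a rescan is needed afterwards:
# /etc/sysconfig/oracleasm -- scan settings (values are illustrative)
ORACLEASM_SCANORDER="dm sd"        # scan device-mapper devices before plain sd devices
ORACLEASM_SCANEXCLUDE="sd"         # skip the underlying single-path sd devices
# After editing, rescan and re-list the disks on each node (as root):
[root@node1 ~]# oracleasm scandisks
[root@node1 ~]# oracleasm listdisks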