This appendix explains how to diagnose problems for Oracle Clusterware and Oracle Real Application Clusters (Oracle RAC) components. The appendix explains how to do this by using dynamic debugging, by enabling tracing, and by using trace and log file information. This appendix also includes information about the Cluster Verification Utility (CVU). The topics in this appendix are:
Both Oracle Clusterware and Oracle RDBMS components have sub-components that you can troubleshoot as described in this chapter. You can enable dynamic debugging to troubleshoot Oracle Clusterware processing. You can also enable debugging and tracing for specific components and specific Oracle Clusterware resources to focus your troubleshooting efforts. A special clusterware health check command enables you to determine the status of several Oracle Clusterware components at one time.
The trace and log files that Oracle generates for Oracle RAC are available for both the Oracle Clusterware components and for the Oracle RDBMS with Oracle RAC components. For Oracle Clusterware, Oracle stores these under a unified directory log structure. In addition to debugging and tracing tools, you can use the Cluster Verification Utility (CVU) and instance-specific alert files for troubleshooting.
This section describes the diagnosability features for Oracle Clusterware, which include log files for the Oracle Clusterware daemon, as well as log files for the Event Manager daemon (EVM) and the Oracle Notification Service (RACGONS), under the following topics:
Clusterware Log Files and the Unified Log Directory Structure
Enabling Additional Tracing for Oracle Real Application Clusters High Availability
You can use crsctl commands as the root user to enable dynamic debugging for Oracle Clusterware, the Event Manager (EVM), and the clusterware subcomponents. You can also dynamically change debugging levels using crsctl commands. Debugging information remains in the Oracle Cluster Registry (OCR) for use during the next startup. You can also enable debugging for resources. The crsctl syntax to enable debugging for Oracle Clusterware is:
crsctl debug log crs "CRSRTI:1,CRSCOMM:2"
The crsctl syntax to enable debugging for EVM is:
crsctl debug log evm "EVMCOMM:1"
The crsctl syntax to enable debugging for resources is:
crsctl debug log res "resname:1"
You can enable debugging for the CRS daemons, EVM, and their modules by setting environment variables or by running crsctl commands as follows, where module_name is the name of the module (crs, evm, or css) and debugging_level is a level from 1 to 5:
crsctl debug log module_name component:debugging_level
Run the following command to obtain component names, where module_name is the name of the module (crs, evm, or css):
crsctl lsmodules module_name
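For example, a hedged sequence that first lists the components of the crs module and then raises the CRSCOMM component (shown in the earlier example) to debug level 3 might look like this:
crsctl lsmodules crs
crsctl debug log crs "CRSCOMM:3"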
Note:
You do not have to be the root user to run the crsctl command with the lsmodules option.
You can use a crsctl command as follows to stop Oracle Clusterware and its related resources on a specific node:
crsctl stop crs
You can use a crsctl command as follows to start Oracle Clusterware and its related resources on a specific node:
crsctl start crs
Note:
You must run these crsctl commands as the root user.
When the Oracle Clusterware daemons are enabled, they start automatically when the node is started. To prevent the daemons from starting during the boot process, you can disable them using crsctl commands. You can use crsctl commands as follows to enable and disable the startup of the Oracle Clusterware daemons. Run the following command to enable startup for all of the Oracle Clusterware daemons:
crsctl enable crs
Run the following command to disable the startup of all of the Oracle Clusterware daemons:
crsctl disable crs
Notes:
You must run these crsctl commands as the root user.
Neither of these crsctl commands is supported on Windows.
Use the diagcollection.pl script to collect diagnostic information from an Oracle Clusterware installation. The diagnostics provide additional information so that Oracle Support can resolve problems. Run this script from the following location:
CRS_HOME/bin/diagcollection.pl
Note:
You must run this script as the root user.
Oracle Clusterware posts alert messages when important events occur. The following is an example of an alert from the CRSD process:
[NORMAL] CLSD-1201: CRSD started on host %s
[ERROR] CLSD-1202: CRSD aborted on host %s. Error [%s]. Details in %s.
[ERROR] CLSD-1203: Failover failed for the CRS resource %s. Details in %s.
[NORMAL] CLSD-1204: Recovering CRS resources for host %s
[ERROR] CLSD-1205: Auto-start failed for the CRS resource %s. Details in %s.
The location of this alert log on UNIX-based systems is CRS_home/log/hostname/alerthostname.log, where CRS_home is the location of the Oracle Clusterware installation. Windows-based systems use the same path structure.
The following is an example of an EVMD alert:
[NORMAL] CLSD-1401: EVMD started on node %s
[ERROR] CLSD-1402: EVMD aborted on node %s. Error [%s]. Details in %s.
You can use crsctl commands to enable resource debugging using the following syntax:
crsctl debug log res "ora.node1.vip:1"
This has the effect of setting the environment variable USER_ORA_DEBUG to 1 before running the start, stop, or check action scripts for the ora.node1.vip resource.
Note:
You must run this crsctl command as the root user.
Use the crsctl check command to determine the health of your clusterware, as in the following example:
crsctl check crs
This command displays the status of the CSSD, EVMD, and CRSD processes. Run the following command to determine the health of individual daemons, where daemon is crsd, cssd, or evmd:
crsctl check daemon
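For example, a hedged check of only the CSS daemon, following the syntax above, would be:
crsctl check cssd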
Note:
You do not have to be the root user to perform health checks.
Oracle uses a unified log directory structure to consolidate the Oracle Clusterware component log files. This consolidated structure simplifies diagnostic information collection and assists during data retrieval and problem analysis.
Oracle retains one current log file and five older log files that are 20 MB in size (120 MB of storage) for the cssd process, and one current log file and 10 older log files that are 10 MB in size (110 MB of storage) for the crsd process. In addition, Oracle overwrites the oldest retained log file for any log file group when the current log file gets stored. Alert files are stored in the directory structures described under the following headings:
Note:
This section uses UNIX-based directory structure examples. The directory structure in all cases is identical for Windows-based platforms.
Log files for the CRSD process (crsd) can be found in the following directories:
CRS home/log/hostname/crsd
The Oracle Cluster Registry (OCR) records log information in the following location:
CRS Home/log/hostname/client
You can find CSS information that the OCSSD generates in log files in the following locations:
CRS Home/log/hostname/cssd
Event Manager (EVM) information generated by evmd is recorded in log files in the following locations:
CRS Home/log/hostname/evmd
The Oracle RAC high availability trace files are located in the following two locations:
CRS home/log/hostname/racg
$ORACLE_HOME/log/hostname/racg
Core files are in the sub-directories of the log directories. Each RACG executable has a sub-directory assigned exclusively for that executable. The name of the RACG executable sub-directory is the same as the name of the executable.
This section explains how to troubleshoot the Oracle Cluster Registry (OCR) under the following topics:
This section explains how to use the OCRDUMP utility to view OCR content for troubleshooting. The OCRDUMP utility enables you to view the OCR contents by writing OCR content to a file or stdout in a readable format.
You can use a number of options for OCRDUMP. For example, you can limit the output to a key and its descendants. You can also write the contents to an XML-based file that you can view using a browser. OCRDUMP writes the OCR keys as ASCII strings and values in a datatype format. OCRDUMP retrieves header information on a best-effort basis. OCRDUMP also creates a log file in CRS_home/log/hostname/client. To change the amount of logging, edit the file CRS_home/srvm/admin/ocrlog.ini.
To change the logging component, edit the entry containing the comploglvl= entry. For example, to change the logging of the OCRAPI component to 3 and to change the logging of the OCRRAW component to 5, make the following entry in the ocrlog.ini file:
comploglvl="OCRAPI:3;OCRRAW:5"
Note:
Make sure that you have file creation privileges in the CRS_home/log/hostname/client directory before using the OCRDUMP utility.
This section describes the OCRDUMP utility command syntax and usage. Run the ocrdump command with the following syntax, where file_name is the name of a target file to which you want Oracle to write the OCR output and where keyname is the name of a key from which you want Oracle to write OCR subtree content:
ocrdump [file_name|-stdout] [-backupfile backup_file_name] [-keyname keyname] [-xml] [-noheader]
Table A-1 describes the OCRDUMP utility options and option descriptions.
Table A-1 OCRDUMP Options and Option Descriptions
Options | Description |
---|---|
file_name | Name of a file to which you want OCRDUMP to write output. |
-stdout | The predefined output location that you can redirect with, for example, a filename. |
-keyname | The name of an OCR key whose subtree is to be dumped. |
-xml | Writes the output in XML format. |
-noheader | Does not print the time at which you ran the command and when the OCR configuration occurred. |
-backupfile | Option to identify a backup file. |
backup_file_name | The name of the backup file the content of which you want to view. You can query the backups using the ocrconfig -showbackup command. |
The following ocrdump utility examples extract various types of OCR information and write it to various targets:
ocrdump
Writes the OCR content to a file called OCRDUMPFILE in the current directory.
ocrdump MYFILE
Writes the OCR content to a file called MYFILE in the current directory.
ocrdump -stdout -keyname SYSTEM
Writes the OCR content from the subtree of the key SYSTEM to stdout.
ocrdump -stdout -xml
Writes the OCR content to stdout in XML format.
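The -backupfile option listed in Table A-1 can be combined with these forms in the same way. As a hedged illustration using the placeholder from the syntax shown earlier:
ocrdump -stdout -backupfile backup_file_name
Writes the content of the specified OCR backup file to stdout, where backup_file_name is the name of an OCR backup file.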
The following OCRDUMP examples show the KEYNAME, VALUE TYPE, VALUE, permission set (user, group, world), and access rights for two sample runs of the ocrdump command. The following shows the output for the SYSTEM.language key that has a text value of AMERICAN_AMERICA.WE8ASCII37.
[SYSTEM.language]
ORATEXT : AMERICAN_AMERICA.WE8ASCII37
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : user, GROUP_NAME : group}
The following shows the output for the SYSTEM.version key that has integer value 3:
[SYSTEM.version]
UB4 (10) : 3
SECURITY : {USER_PERMISSION : PROCR_ALL_ACCESS, GROUP_PERMISSION : PROCR_READ, OTHER_PERMISSION : PROCR_READ, USER_NAME : user, GROUP_NAME : group}
The OCRCHECK utility displays the version of the OCR's block format, total space available and used space, OCRID, and the OCR locations that you have configured. OCRCHECK performs a block-by-block checksum operation for all of the blocks in all of the OCRs that you have configured. It also returns an individual status for each file as well as a result for the overall OCR integrity check. The following is a sample of the OCRCHECK output:
Status of Oracle Cluster Registry is as follows :
     Version                  :          2
     Total space (kbytes)     :     262144
     Used space (kbytes)      :      16256
     Available space (kbytes) :     245888
     ID                       : 1918913332
     Device/File Name         : /dev/raw/raw1
                                Device/File integrity check succeeded
     Device/File Name         : /oradata/mirror.ocr
                                Device/File integrity check succeeded
     Cluster registry integrity check succeeded
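This report is typically produced by running the utility with no arguments, for example:
ocrcheck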
OCRCHECK creates a log file in the directory CRS_home/log/hostname/client. To change the amount of logging, edit the file CRS_home/srvm/admin/ocrlog.ini.
Table A-2 describes common OCR problems with corresponding resolution suggestions.
Table A-2 Common OCR Problems and Solutions
Problem | Solution |
---|---|
Not currently using OCR mirroring and would like to. | Run the ocrconfig command with the -replace option. |
An OCR failed and you need to replace it. Error messages in Enterprise Manager or OCR log file. | Run the ocrconfig command with the -replace option. |
An OCR has a mis-configuration. | Run the ocrconfig command with the -repair option. |
You are experiencing a severe performance effect from OCR processing or you want to remove an OCR for other reasons. | Run the ocrconfig command with the -replace option. |
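As a hedged illustration of the -replace option (the destination reuses the mirror location from the OCRCHECK sample above, and ocrconfig must be run as the root user), adding an OCR mirror might look like the following:
ocrconfig -replace ocrmirror /oradata/mirror.ocr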
Oracle Support may ask you to enable tracing to capture additional information. Because the procedures described in this section may affect performance, only perform these activities with the assistance of Oracle Support. This section includes the following topics:
To generate additional trace information for a running resource, Oracle recommends that you use crsctl commands. For example, run the following command to turn on debugging for resources:
crsctl debug log res "resname:level"
Alternatively, you can set the USR_ORA_DEBUG parameter in your resource profile to the value 1 using crsctl.
The event manager daemons (evmd) running on separate nodes communicate through specific ports. To determine whether the evmd for a node can send and receive messages, perform the test described in this section while running session 1 in the background.
On node 1, session 1, enter:
$ evmwatch -A -t "@timestamp @@"
On node 2, session 2, enter:
$ evmpost -u "hello" [-h nodename]
Session 1 should show output similar to the following:
$ 21-Aug-2002 08:04:26 hello
Ensure that each node can both send and receive messages by executing this test in several permutations.
This section describes the diagnosability features for Oracle Real Application Clusters components. This section includes the following topics:
Using Instance-Specific Alert Files in Oracle Real Application Clusters
Enabling Tracing for Java-Based Tools and Utilities in Oracle Real Application Clusters
Oracle records information about important events that occur in your Oracle RAC environment in trace files. The trace files for Oracle RAC are the same as those in single-instance Oracle databases. As a best practice, monitor and back up trace files regularly for all instances to preserve their content for future troubleshooting.
Information about ORA-600 errors appears in the alert_SID.log file for each instance, where SID is the instance identifier. For troubleshooting, you may also need to provide files from the following bdump locations:
$ORACLE_HOME/admin/db_name/bdump on UNIX-based systems
%ORACLE_HOME%\admin\db_name\bdump on Windows-based systems
Some files may also be in the udump directory.
In addition, the directory cdmp_timestamp contains in-memory traces of Oracle RAC instance failure information. This directory is located in ORACLE_HOME/admin/db_name/bdump/cdmp_timestamp, where timestamp is the time at which the error occurred.
Trace dump files are stored under the cdmp directory. Oracle creates these files for each process that is connected to the instance. The naming convention for the trace dump files is the same as for trace files, but with .trw as the file extension.
Oracle RAC background threads use trace files to record database operations and database errors. These trace logs help troubleshoot and also enable Oracle support to more efficiently debug cluster database configuration problems.
Background thread trace files are created regardless of whether the BACKGROUND_DUMP_DEST parameter is set in the server parameter file (SPFILE). If you set BACKGROUND_DUMP_DEST, then the trace files are stored in the directory specified. If you do not set the parameter, then the trace files are stored as follows:
$ORACLE_BASE/admin/db_name/bdump on UNIX-based systems
%ORACLE_BASE%\admin\db_name\bdump on Windows-based systems
Oracle creates a different trace file for each background thread. For both UNIX- and Windows-based systems, trace files for the background processes are named SID_process name_process identifier.trc.
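As an illustration only (the instance name orcl1 and the process identifiers are hypothetical, not values from this guide), background trace files might be named:
orcl1_lmon_1234.trc
orcl1_smon_5678.trc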
Trace files are created for user processes if you set the USER_DUMP_DEST initialization parameter. User process trace file names have the format SID_ora_process_identifier/thread_identifier.trc, where process_identifier is a 5-digit number indicating the process identifier (PID) on UNIX-based systems and thread_identifier is the thread identifier on Windows-based systems.
Each instance in an Oracle RAC database has one alert file. The alert file for each instance, alert_SID.log, contains important information about error messages and exceptions that occur during database operations. Information is appended to the alert file each time you start the instance. All process threads can write to the alert file for the instance.
The alert_SID.log file is in the directory specified by the BACKGROUND_DUMP_DEST parameter in the initdb_name.ora initialization parameter file. If you do not set the BACKGROUND_DUMP_DEST parameter, then the alert_SID.log file is generated in the following locations:
$ORACLE_BASE/admin/db_name/bdump on UNIX-based systems
%ORACLE_BASE%\admin\db_name\bdump on Windows-based systems
All Java-based tools and utilities that are available in Oracle RAC are invoked by executing scripts of the same name as the tool or utility. This includes the Cluster Verification Utility (CVU), the Database Configuration Assistant (DBCA), the Net Configuration Assistant (NETCA), the Virtual Internet Protocol Configuration Assistant (VIPCA), Server Control (SRVCTL), and the Global Services Daemon (GSD). For example, to run DBCA, enter the command dbca.
By default, Oracle enables traces for DBCA and the Database Upgrade Assistant (DBUA). For the CVU, GSDCTL, SRVCTL, and VIPCA, you can set the SRVM_TRACE environment variable to TRUE to make Oracle generate traces. Oracle writes traces to log files. For example, Oracle writes traces to log files in Oracle home/cfgtoollogs/dbca and Oracle home/cfgtoollogs/dbua for DBCA and DBUA respectively.
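For example, a hedged Bourne-shell sequence that enables tracing before running a SRVCTL command is:
export SRVM_TRACE=TRUE
srvctl config database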
In some situations a SHUTDOWN IMMEDIATE may be pending and Oracle will not quickly respond to repeated shutdown requests. This is because Oracle Clusterware may be processing a current shutdown request. In such cases, issue a SHUTDOWN ABORT using SQL*Plus for subsequent shutdown requests.
This section describes the Cluster Verification Utility (CVU) under the following topics:
See Also:
Your platform-specific Oracle Clusterware and Oracle RAC installation guide for information about how to manually install CVU
The CVU can verify the primary cluster components during an operational phase or stage. A component can be basic, such as free disk space, or it can be complex, such as checking the Oracle Clusterware integrity. For example, CVU can verify multiple Oracle Clusterware subcomponents across the Oracle Clusterware layers. Additionally, CVU can check disk space, memory, processes, and other important cluster components. A stage could be, for example, database installation, for which CVU can perform a pre-check to verify whether your system meets the criteria for an Oracle RAC installation. Other stages include the initial hardware setup and the establishing of system requirements through the fully operational cluster setup.
When verifying stages, CVU uses entry and exit criteria. In other words, each stage has entry criteria that define a specific set of verification tasks to be performed before initiating that stage. This pre-check prevents you from beginning a stage, such as installing Oracle Clusterware, unless you meet the Oracle Clusterware stage's pre-requisites.
The exit criteria for a stage define another set of verification tasks that you need to perform after the completion of the stage. Post-checks ensure that the activities for that stage have been completed. Post-checks identify stage-specific problems before they propagate to subsequent stages.
The node list that you use with CVU commands should be a comma-delimited list of host names without a domain. The CVU ignores domains while processing node lists. If a CVU command entry has duplicate node entries after removing domain information, then CVU eliminates the duplicate node entries. Wherever supported, you can use the -n all option to verify all of your cluster nodes that are part of a specific Oracle RAC installation. You do not have to be the root user to use the CVU, and the CVU assumes that the current user is the oracle user.
Note:
The CVU only supports an English-based syntax and English online help.
For network connectivity verification, the CVU discovers all of the available network interfaces if you do not specify an interface on the CVU command line. For storage accessibility verification, the CVU discovers shared storage for all of the supported storage types if you do not specify a particular storage identification on the command line. The CVU also discovers the Oracle Clusterware home if one is available.
Run the CVU command-line tool using the cluvfy command. Using cluvfy does not adversely affect your cluster environment or your installed software. You can run cluvfy commands at any time, even before the Oracle Clusterware installation. In fact, the CVU is designed to assist you as soon as your hardware and operating system are operational. If you run a command that requires Oracle Clusterware on a node, then the CVU reports an error if Oracle Clusterware is not yet installed on that node.
You can enable tracing by setting the environment variable SRVM_TRACE to true. For example, in tcsh an entry such as setenv SRVM_TRACE true enables tracing. The CVU trace files are created in the CV_HOME/cv/log directory. Oracle automatically rotates the log files, and the most recently created log file has the name cvutrace.log.0. You should remove unwanted log files or archive them to reclaim disk space if needed. The CVU does not generate trace files unless you enable tracing.
At least 30MB free space for the CVU software on the node from which you run the CVU
A location for the current JDK, Java 1.4.1 or later
A work directory with at least 25MB free space on each node
Note:
When using the CVU, the CVU attempts to copy any needed information to the CVU work directory. Make sure that the CVU work directory exists on all of the nodes in your cluster database and that the directory on each node has write permissions established for the CVU user. Set this directory using the CV_DESTLOC environment variable. If you do not set this variable, then the CVU uses /tmp as the work directory on UNIX-based systems, and C:\temp on Windows-based systems.
This section describes the following Cluster Verification Utility topics:
The cluvfy commands have context-sensitive help that shows their usage based on the command-line arguments that you enter. For example, if you enter cluvfy, then the CVU displays high-level generic usage text describing the stage and component syntax. If you enter cluvfy comp -list, then the CVU shows the valid components with brief descriptions about each of them. If you enter cluvfy comp -help, then the CVU shows detailed syntax for each of the valid component checks. Similarly, cluvfy stage -list and cluvfy stage -help display valid stages and their syntax for their checks respectively. If you enter an invalid CVU command, then the CVU shows the correct usage for that command. For example, if you type cluvfy stage -pre dbinst, then CVU shows the correct syntax for the pre-check commands for the dbinst stage. Enter the cluvfy -help command to see detailed CVU command information.
Although by default the CVU reports in non-verbose mode by only reporting the summary of a test, you can obtain detailed output by using the -verbose argument. The -verbose argument produces detailed output of individual checks and, where applicable, shows results for each node in a tabular layout.
If a cluvfy command responds with UNKNOWN for a particular node, then this is because the CVU cannot determine whether a check passed or failed. The cause of this could be a loss of reachability or the failure of user equivalence to that node. The cause could also be any system problem that was occurring on that node at the time that CVU was performing a check.
If you run the CVU using the -verbose argument and the CVU responds with UNKNOWN for a particular node, then this is because the CVU cannot determine whether a check passed or failed. The following is a list of possible causes for an UNKNOWN response:
The node is down
Executables that the CVU requires are missing in CRS_home/bin or the Oracle home directory
The user account that ran the CVU does not have privileges to run common operating system executables on the node
The node is missing an operating system patch or a required package
The node has exceeded the maximum number of processes or maximum number of open files, or there is a problem with IPC segments, such as shared memory or semaphores
You can use the following nodelist shortcuts:
To provide the CVU a list of all of the nodes of a cluster, enter -n all. CVU attempts to obtain the node list in the following order:
If vendor clusterware is available, then the CVU selects all of the configured nodes from the vendor clusterware using the lsnodes utility.
If Oracle Clusterware is installed, then the CVU selects all of the configured nodes from Oracle Clusterware using the olsnodes utility.
If neither the vendor clusterware nor Oracle Clusterware is installed, then the CVU searches for a value for the CV_NODE_ALL key in the configuration file.
If vendor and Oracle Clusterware are not installed and no key named CV_NODE_ALL exists in the configuration file, then the CVU searches for a value for the CV_NODE_ALL environmental variable.
If you have not set this variable, then the CVU reports an error.
To provide a partial node list, you can set an environmental variable and use it in the CVU command. For example, on UNIX-based systems you can enter:
setenv MYNODES node1,node3,node5
cluvfy comp nodecon -n $MYNODES [-verbose]
You can use CVU's configuration file to define specific inputs for the execution of the CVU. The path for the configuration file is CV_HOME/cv/admin/cvu_config. You can modify this using a text editor. The inputs to the tool are defined in the form of key entries. You must follow these rules when modifying the CVU configuration file:
Key entries have the syntax name=value
Each key entry and the value assigned to the key only defines one property
Lines beginning with the number sign (#) are comment lines and are ignored
Lines that do not follow the syntax name=value are ignored
The following is the list of keys supported by CVU:
CV_NODE_ALL: If set, it specifies the list of nodes that should be picked up when Oracle Clusterware is not installed and a -n all option has been used in the command line. By default, this entry is commented out.
CV_RAW_CHECK_ENABLED: If set to TRUE, it enables the check for accessibility of shared disks on RedHat release 3.0. This shared disk accessibility check requires that you install a cvuqdisk rpm on all of the nodes. By default, this key is set to TRUE and shared disk check is enabled.
CV_XCHK_FOR_SSH_ENABLED: If set to TRUE, it enables the X-Windows check for verifying user equivalence with ssh. By default, this entry is commented out and X-Windows check is disabled.
ORACLE_SRVM_REMOTESHELL: If set, it specifies the location for the ssh/rsh command to override CVU's default value. By default, this entry is commented out and the tool uses /usr/sbin/ssh and /usr/sbin/rsh.
ORACLE_SRVM_REMOTECOPY: If set, it specifies the location for the scp or rcp command to override the CVU default value. By default, this entry is commented out and CVU uses /usr/bin/scp and /usr/sbin/rcp.
If CVU does not find a key entry defined in the configuration file, then the CVU searches for the environment variable that matches the name of the key. If the environment variable is set, then the CVU uses its value, otherwise the CVU uses a default value for that entity.
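As a hedged illustration of the key entry format described above (the values shown are examples only, not recommended settings), a cvu_config file might contain entries such as:
# illustrative entries only
CV_NODE_ALL=node1,node2,node3
CV_RAW_CHECK_ENABLED=TRUE
CV_XCHK_FOR_SSH_ENABLED=TRUE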
You can perform the following tests using CVU as described under the following topics:
Cluster Verification Utility System Requirements Verifications
Cluster Verification Utility User and Permissions Verifications
Cluster Verification Utility Node Comparisons and Verifications
Cluster Verification Utility Oracle Clusterware Component Verifications
Cluster Verification Utility Cluster Integrity Verifications
Cluster Verification Utility Argument and Option Definitions
See Also:
Table A-3 for details about the arguments and options used in the following CVU examples
To verify the minimal system requirements on the nodes prior to installing Oracle Clusterware or Oracle RAC, use the sys component verification command as follows:
cluvfy comp sys [ -n node_list ] -p { crs | database } [-r { 10gR1 | 10gR2 } ] [ -osdba osdba_group ] [ -orainv orainventory_group ] [-verbose]
To check the system requirements for installing Oracle RAC, use the -p database argument, and to check the system requirements for installing Oracle Clusterware, use the -p crs argument. To check the system requirements for installing Oracle Clusterware or Oracle RAC from Oracle Database 10g release 1 (10.1), use the -r 10gR1 argument. For example, verify the system requirements for installing Oracle Clusterware on the cluster nodes known as node1, node2, and node3 by running the following command:
cluvfy comp sys -n node1,node2,node3 -p crs -verbose
To verify whether storage is shared among the nodes in your cluster database or to identify all of the storage that is available on the system and can be shared across the cluster nodes, use the component verification command ssa as follows:
cluvfy comp ssa [ -n node_list ] [ -s storageID_list ] [-verbose]
See Also:
Refer to "Known Issues for the Cluster Verification Utility" for the types of storage that CVU supports.
For example, discover all of the shared storage systems available on your system by running the following command:
cluvfy comp ssa -n all -verbose
You can verify the accessibility of a specific storage location, such as /dev/sda, across the cluster nodes by running the following command:
cluvfy comp ssa -n all -s /dev/sda
To verify whether a certain amount of free space is available on a specific location in the nodes of your cluster database, use the component verification command space.
cluvfy comp space [ -n node_list ] -l storage_location -z disk_space {B|K|M|G} [-verbose]
For example, you can verify the availability of at least 2GB of free space at the location /home/dbadmin/products on all of the cluster nodes by running the following command:
cluvfy comp space -n all -l /home/dbadmin/products -z 2G -verbose
To verify the integrity of your Oracle Cluster File System (OCFS) on platforms on which OCFS is available, use the component verification command cfs as follows:
cluvfy comp cfs [ -n node_list ] -f file_system [-verbose]
For example, you can verify the integrity of the cluster file system /oradbshare on all of the nodes by running the following command:
cluvfy comp cfs -f /oradbshare -n all -verbose
Note:
The sharedness check for the file system is supported for Oracle Cluster File System version 1.0.14 or higher.
To verify the reachability of the cluster nodes from the local node or from any other cluster node, use the component verification command nodereach as follows:
cluvfy comp nodereach -n node_list [ -srcnode node ] [-verbose]
To verify the connectivity between the cluster nodes through all of the available network interfaces or through specific network interfaces, use the component verification command nodecon as follows:
cluvfy comp nodecon -n node_list [ -i interface_list ] [-verbose]
Use the nodecon command without the -i option as follows to use CVU to:
Discover all of the network interfaces that are available on the cluster nodes
Review the interfaces' corresponding IP addresses and subnets
Obtain the list of interfaces that are suitable for use as VIPs and the list of interfaces to private interconnects
Verify the connectivity between all of the nodes through those interfaces
cluvfy comp nodecon -n all [-verbose]
You can run this command in verbose mode to identify the mappings between the interfaces, IP addresses, and subnets. To verify the connectivity between all of the nodes through specific network interfaces, use the comp nodecon command with the -i option. For example, you can verify the connectivity between the nodes node1, node2, and node3, through interface eth0, by running the following command:
cluvfy comp nodecon -n node1,node2,node3 -i eth0 -verbose
To verify user accounts and administrative permissions-related issues, use the component verification command admprv as follows:
cluvfy comp admprv [ -n node_list ] [-verbose] | -o user_equiv [-sshonly] | -o crs_inst [-orainv orainventory_group ] | -o db_inst [-orainv orainventory_group ] [-osdba osdba_group ] | -o db_config -d oracle_home
To verify whether user equivalence exists on specific nodes, use the -o user_equiv argument. On UNIX-based platforms, this command verifies user equivalence first using ssh and then using rsh, if the ssh check fails. To verify the equivalence only through ssh, use the -sshonly option. By default, the equivalence check does not verify X-Windows configurations, such as whether you have disabled X-forwarding, whether you have the proper setting for the DISPLAY environment variable, and so on.
To verify X-Windows aspects during user equivalence checks, set the CV_XCHK_FOR_SSH_ENABLED key to TRUE in the configuration file that resides in the path CV_HOME/cv/admin/cvu_config before you run the admprv -o user_equiv command. Use the -o crs_inst argument to verify whether you have permissions to install Oracle Clusterware.
You can use the -o db_inst argument to verify the permissions that are required for installing Oracle RAC and the -o db_config argument to verify the permissions that are required for creating an Oracle RAC database or for modifying an Oracle RAC database's configuration. For example, you can verify user equivalence for all of the nodes by running the following command:
cluvfy comp admprv -n all -o user_equiv -verbose
On Linux and UNIX platforms, this command verifies user equivalence by first using ssh and then using rsh if the ssh check fails. To verify the equivalence only through ssh, use the -sshonly option. By default, the equivalence check does not verify X-Windows configurations, such as when you have disabled X-forwarding with the setting of the DISPLAY environment variable. To verify X-Windows aspects during user equivalence checks, set the CV_XCHK_FOR_SSH_ENABLED key to TRUE in the configuration file CV_HOME/cv/admin/cvu_config before you run the admprv -o user_equiv command.
To verify the existence of node applications, namely VIP, ONS, and GSD, on all of the nodes, use the component nodeapp command as follows:
cluvfy comp nodeapp [ -n node_list ] [-verbose]
Use the component verification peer command to compare the nodes as follows:
cluvfy comp peer [ -refnode node ] -n node_list [-r { 10gR1 | 10gR2 } ] [ -orainv orainventory_group ] [ -osdba osdba_group ] [-verbose]
The following command lists the values of several pre-selected properties on different nodes from Oracle Database 10g release 2 (10.2):
cluvfy comp peer -n node_list [-r 10gR2] [-verbose]
You can also use the comp peer command with the -refnode argument to compare the properties of other nodes against the reference node.
To verify whether your system meets all of the criteria for an Oracle Clusterware installation, use the pre-check command for the Oracle Clusterware installation stage as follows:
cluvfy stage -pre crsinst -n node_list [ -c ocr_location ] [-r { 10gR1 | 10gR2 } ][ -q voting_disk ] [ -osdba osdba_group ] [ -orainv orainventory_group ] [-verbose]
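For example, a hedged pre-installation check for Oracle Clusterware on the three nodes used in earlier examples might be run as:
cluvfy stage -pre crsinst -n node1,node2,node3 -verbose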
After you have completed phase one, verify that Oracle Clusterware is functioning properly before proceeding with phase two of your Oracle RAC installation by running the post-check command for the Oracle Clusterware installation stage, -post crsinst, as follows:
cluvfy stage -post crsinst -n node_list [-verbose]
To verify whether your system meets all of the criteria for an Oracle RAC installation, use the pre-check command for the Database Installation stage as follows:
cluvfy stage -pre dbinst -n node_list [-r { 10gR1 | 10gR2 } ] [ -osdba osdba_group ] [ -orainv orainventory_group ] [-verbose]
To verify whether your system meets all of the criteria for creating a database or for making a database configuration change, use the pre-check command for the Database Configuration stage as follows:
cluvfy stage -pre dbcfg -n node_list -d oracle_home [-verbose]
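As a hedged example (assuming the ORACLE_HOME environment variable is set to the Oracle home you are checking), a database configuration pre-check on the same three nodes might be:
cluvfy stage -pre dbcfg -n node1,node2,node3 -d $ORACLE_HOME -verbose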
To verify the integrity of all of the Oracle Clusterware components, use the component verification crs command as follows:
cluvfy comp crs [ -n node_list ] [-verbose]
To verify the integrity of each individual Cluster Manager sub-component, use the component verification command clumgr as follows:
cluvfy comp clumgr [ -n node_list ] [-verbose]
To verify the integrity of the Oracle Cluster Registry, use the component verification command ocr as follows:
cluvfy comp ocr [ -n node_list ] [-verbose]
To check the integrity of your entire cluster, which means to verify that all of the nodes in the cluster have the same view of the cluster configuration, use the component verification command clu as follows:
cluvfy comp clu
Table A-3 describes the CVU arguments and options used in the previous examples:
Table A-3 Cluster Verification Utility Arguments and Options
Argument or Option | Definition |
---|---|
-n node_list | The comma-delimited list of non-domain qualified node names on which the test should be conducted. If all is specified, then all of the nodes in the cluster are used for verification. |
-i interface_list | The comma-delimited list of interface names. |
-f file_system | The name of the file system. |
-s storageID_list | The comma-delimited list of storage identifiers. |
-l storage_location | The storage path. |
-z disk_space | The required disk space, in units of bytes (B), kilobytes (K), megabytes (M), or gigabytes (G). |
-osdba osdba_group | The name of the OSDBA group. The default is dba. |
-orainv orainventory_group | The name of the Oracle inventory group. The default is oinstall. |
-verbose | Makes CVU print detailed output. |
-o user_equiv | Checks user equivalence between the nodes. |
-sshonly | Check user equivalence for ssh setup only. |
-o crs_inst | Checks administrative privileges for installing Oracle Clusterware. |
-o db_inst | Checks administrative privileges for installing Oracle RAC. |
-o db_config | Checks administrative privileges for creating or configuring a database. |
-refnode | The node that will be used as a reference for checking compatibility with other nodes. |
-srcnode | The node from which the reachability to other nodes should be checked. |
-r | The release of Oracle Database 10g for which the requirements for installation of Oracle Clusterware or Oracle RAC are to be verified. If this option is not specified, then Oracle Database 10g release 2 (10.2) is assumed. |
This section describes the following known limitations for CVU:
The current CVU release supports only Oracle Database 10g RAC and Oracle Clusterware and it is not backward compatible. In other words, CVU cannot check or verify pre-Oracle Database 10g products.
The current release of cluvfy has the following limitations on Linux regarding the shared storage accessibility check:
Currently NAS storage (r/w, no attribute caching) and OCFS (version 1.0.14 or higher) are supported.
For sharedness checks on NAS, cluvfy commands require you to have write permission on the specified path. If the cluvfy user does not have write permission, cluvfy reports the path as not shared.
To perform discovery and shared storage accessibility checks for SCSI disks on Red Hat Linux 3.0 and SUSE Linux Enterprise Server, CVU requires the CVUQDISK package. If you attempt to use CVU and the CVUQDISK package is not installed on all of the nodes in your Oracle RAC environment, then CVU responds with an error.
Perform the following procedure to install the CVUQDISK package:
Log in as the root user.
Copy the rpm, cvuqdisk-1.0.1-1.rpm, to a local directory. You can find this rpm in the rpm sub-directory of the top-most directory in the Oracle Clusterware installation media. For example, you can find cvuqdisk-1.0.1-1.rpm in the directory /mountpoint/clusterware/rpm/, where mountpoint is the mounting point for the disk on which the directory is located.
Set the CVUQDISK_GRP environment variable to the group that should own the CVUQDISK package binaries. If CVUQDISK_GRP is not set, then by default the oinstall group is the owner's group.
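For example, in a Bourne-type shell (the group name oinstall is simply the default mentioned above):
export CVUQDISK_GRP=oinstall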
Determine whether previous versions of the CVUQDISK package are installed by running the command rpm -q cvuqdisk. If you find previous versions of the CVUQDISK package, then remove them by running the command rpm -e cvuqdisk previous_version, where previous_version is the identifier of the previous CVUQDISK version.
Install the latest CVUQDISK package by running the command rpm -iv cvuqdisk-1.0.1-1.rpm.