This chapter describes the storage configuration tasks that you must complete before you start Oracle Universal Installer. It includes information about the following tasks:
Reviewing Storage Options for Oracle Clusterware, Database, and Recovery Files
Configuring Storage for Oracle Clusterware Files on a Supported Shared File System
Configuring Storage for Oracle Clusterware Files on Raw Devices
This section describes supported options for storing Oracle Clusterware files, Oracle Database files, and data files. It includes the following sections:
Use the information in this overview to help you select your storage option:
There are two ways of storing Oracle Clusterware files:
A supported shared file system: Supported file systems include the following:
Cluster File System: A supported cluster file system. At release time, a certified cluster file system is not available. Refer to the Certify page available on the OracleMetaLink Web site (http://metalink.oracle.com) to obtain information about certified cluster file systems.
Network File System (NFS): A file-level protocol that enables access and sharing of files.
Raw partitions: Raw partitions are disk partitions that are not mounted and written to using the HP-UX file system, but instead are accessed directly by the application.
There are three ways of storing Oracle Database and recovery files:
Automatic Storage Management: Automatic Storage Management (ASM) is an integrated, high-performance database file system and disk manager for Oracle files.
A supported shared file system: Supported file systems include the following:
OSCP-Certified NAS Network File System (NFS): Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.
Raw partitions (database files only): A raw partition is required for each database file.
See Also:
For information about certified compatible storage options, refer to the Oracle Storage Compatibility Program (OSCP) Web site, which is at the following URL: http://www.oracle.com/technology/deploy/availability/htdocs/oscp.html
For all installations, you must choose the storage option that you want to use for Oracle Clusterware files and Oracle Database files. If you want to enable automated backups during the installation, then you must also choose the storage option that you want to use for recovery files (the flash recovery area). You do not have to use the same storage option for each file type.
For voting disk file placement, ensure that each voting disk is configured so that it does not share any hardware device or disk, or other single point of failure. An absolute majority of voting disks configured (more than half) must be available and responsive at all times for Oracle Clusterware to operate.
For single-instance Oracle Database installations using Oracle Clusterware for failover, you must use ASM or shared raw disks if you do not want the failover processing to include dismounting and remounting disks.
The following table shows the storage options supported for storing Oracle Clusterware files, Oracle Database files, and Oracle Database recovery files. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file. Oracle Clusterware files include the Oracle Cluster Registry (OCR), a mirrored OCR file (optional), the Oracle Clusterware voting disk, and additional voting disk files (optional).
Note:
For the most up-to-date information about supported storage options for RAC installations, refer to the Certify pages on the OracleMetaLink Web site: http://metalink.oracle.com
Storage Option | OCR and Voting Disk | Oracle Software | Database | Recovery
---|---|---|---|---
Automatic Storage Management | No | No | Yes | Yes
Local storage | No | Yes | No | No
NFS file system (Note: Requires a certified NAS device) | Yes | Yes | Yes | Yes
Shared raw partitions | Yes | No | Yes | No
Use the following guidelines when choosing the storage options that you want to use for each file type:
You can choose any combination of the supported storage options for each file type provided that you satisfy any requirements listed for the chosen storage options.
Oracle recommends that you choose Automatic Storage Management (ASM) as the storage option for database and recovery files.
For Standard Edition RAC installations, ASM is the only supported storage option for database or recovery files.
You cannot use ASM to store Oracle Clusterware files, because these files must be accessible before any Oracle instance starts.
If you intend to use ASM with RAC, and you are configuring a new ASM instance, then you must ensure that your system meets the following conditions:
All nodes on the cluster have the release 2 (10.2) version of Oracle Clusterware installed
Any existing ASM instance on any node in the cluster is shut down
If you intend to upgrade an existing RAC database, or a RAC database with ASM instances, then you must ensure that your system meets the following conditions:
The RAC database, or RAC database with an ASM instance, is running on the node from which Oracle Universal Installer (OUI) and Database Configuration Assistant (DBCA) are run
The RAC database or RAC database with an ASM instance is running on the same nodes that you intend to make members of the new cluster installation. For example, if you have an existing RAC database running on a three-node cluster, then you must install the upgrade on all three nodes. You cannot upgrade only two of the nodes and remove the third instance during the upgrade.
See Also:
Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database

If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
When you have determined your disk storage options, you must perform the following tasks in the following order:
1: Check for available shared storage with CVU
Refer to Checking for Available Shared Storage with CVU
2: Configure shared storage for Oracle Clusterware files
To use a file system (NFS) for Oracle Clusterware files, refer to Configuring Storage for Oracle Clusterware Files on a Supported Shared File System
To use raw devices (partitions) for Oracle Clusterware files, refer to "Configuring Storage for Oracle Clusterware Files on Raw Devices"
3: Configure storage for Oracle Database files and recovery files
To use a file system for database or recovery file storage, refer to Configuring Storage for Oracle Clusterware Files on a Supported Shared File System, and ensure that in addition to the volumes you create for Oracle Clusterware files, you also create additional volumes with sizes sufficient to store database files.
To use raw devices (partitions) for database file storage, refer to "Configuring Disks for Database Files on Raw Devices".
To check for all shared file systems available across all nodes on the cluster with an NFS file system, use the following command:
/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node_list
If you want to check the shared accessibility of a specific shared storage type to specific nodes in your cluster, then use the following command syntax:
/mountpoint/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node_list -s storageID_list
In the preceding syntax examples, the variable mountpoint is the mountpoint path of the installation media, the variable node_list is the comma-separated list of nodes you want to check, and the variable storageID_list is the comma-separated list of storage device IDs for the storage devices managed by the file system type that you want to check.

For example, if you want to check the shared accessibility from node1 and node2 of storage devices /dw/dsk/c1t2d3 and /dw/dsk/c2t4d5, and your mountpoint is /dev/dvdrom/, then enter the following command:
/dev/dvdrom/crs/Disk1/cluvfy/runcluvfy.sh comp ssa -n node1,node2 -s /dw/dsk/c1t2d3,/dw/dsk/c2t4d5
If you do not specify storage device IDs in the command, then the command searches for all available storage devices connected to the nodes on the list.
Oracle Universal Installer (OUI) does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:
To use a file system for Oracle Clusterware files, the file system must comply with the following requirements:
To use an NFS file system, it must be on a certified NAS device.
Note:
If you are using a shared file system on a NAS device to store a shared Oracle home directory for Oracle Clusterware or RAC, then you must use the same NAS device for Oracle Clusterware file storage.

If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then one of the following must be true:
The disks used for the file system are on a highly available storage device.
At least two file systems are mounted, and use the features of Oracle Database 10g Release 2 (10.2) to provide redundancy for the OCR.
If you intend to use a shared file system to store database files, then use at least two independent file systems, with the database files on one file system, and the recovery files on a different file system.
The oracle user must have write permissions to create the files in the path that you specify.
Note:
If you are upgrading from Oracle9i release 2, then you can continue to use the raw device or shared file that you used for the SRVM configuration repository instead of creating a new file for the OCR.

Use Table 3-1 to determine the partition size for shared file systems.
Table 3-1 Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size
---|---|---
Oracle Clusterware files (OCR and voting disks) with external redundancy | 1 | At least 256 MB for each volume
Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software | 1 | At least 256 MB for each volume
Redundant Oracle Clusterware files with redundancy provided by Oracle software (mirrored OCR and two additional voting disks) | 1 | At least 256 MB of free space for each OCR location (whether the OCR is configured on a file system or on raw or block devices), and at least 256 MB for each voting disk location, with a minimum of three disks
Oracle Database files | 1 | At least 1.2 GB for each volume
Recovery files (Note: Recovery files must be on a different volume than database files) | 1 | At least 2 GB for each volume
In Table 3-1, the total required volume size is cumulative. For example, to store all files on the shared file system, you should have at least 3.4 GB of storage available over a minimum of two volumes.
If you are using NFS, then you must set the values for the NFS buffer size parameters rsize and wsize to at least 16384. Oracle recommends that you use the value 32768.

For example, if you decide to use rsize and wsize buffer settings with the value 32768, and your NFS server is named nfs_server, then update the /etc/fstab file on each node with an entry similar to the following:
nfs_server:/vol/DATA/oradata /home/oracle/netapp nfs rw,hard,nointr,rsize=32768,wsize=32768,tcp,noac,vers=3 1 2
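The new buffer settings take effect the next time the file system is mounted. As a minimal check, assuming the example mount point /home/oracle/netapp shown above and that the file system is not currently in use, you could remount it and confirm the options (the output format varies):

# umount /home/oracle/netapp
# mount /home/oracle/netapp
# mount -v | grep netapp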
Use the following instructions to create directories for Oracle Clusterware files. If you intend to use a file system to store Oracle Clusterware files, then you can also configure file systems for the Oracle Database and recovery files.
Note:
For NFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.

To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:
If necessary, configure the shared file systems that you want to use and mount them on each node.
Note:
The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.

Use the bdf command to determine the free disk space on each mounted file system.
From the display, identify the file systems that you want to use:
File Type | File System Requirements
---|---
Oracle Clusterware files | Choose a file system with at least 512 MB of free disk space (one OCR and one voting disk, with external redundancy)
Database files | Choose either a single file system with at least 1.2 GB of free disk space, or two or more file systems with at least 1.2 GB of free disk space in total
Recovery files | Choose a file system with at least 2 GB of free disk space.
If you are using the same file system for more than one type of file, then add the disk space requirements for each type to determine the total disk space requirement.
Note the names of the mount point directories for the file systems that you identified.
If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Clusterware and Oracle Database, then OUI creates the Oracle Clusterware file directory, and DBCA creates the Oracle Database file directory and the recovery file directory.
If the user performing the installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:
Oracle Clusterware file directory:
# mkdir /mount_point/oracrs
# chown oracle:oinstall /mount_point/oracrs
# chmod 775 /mount_point/oracrs
Database file directory:
# mkdir /mount_point/oradata
# chown oracle:oinstall /mount_point/oradata
# chmod 775 /mount_point/oradata
Recovery file directory (flash recovery area):
# mkdir /mount_point/flash_recovery_area
# chown oracle:oinstall /mount_point/flash_recovery_area
# chmod 775 /mount_point/flash_recovery_area
Making the oracle user the owner of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.
When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed NFS configuration.
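As a quick sanity check on each node, the following sketch uses the example mount point and directory names above to confirm that the directories exist on the intended file systems and have the expected ownership:

# bdf /mount_point/oracrs /mount_point/oradata /mount_point/flash_recovery_area
# ls -ld /mount_point/oracrs /mount_point/oradata /mount_point/flash_recovery_area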
The following subsections describe how to configure Oracle Clusterware files on raw partitions.
Disabling Operating System Activation of Shared Volume Groups
Configuring Raw Disk Devices Without HP Serviceguard Extension
Configuring Shared Raw Logical Volumes With HP Serviceguard Extension
Creating the Oracle Database Configuration Assistant Raw Device Mapping File
Table 3-2 lists the number and size of the raw partitions that you must configure for Oracle Clusterware files.
Table 3-2 Raw Partitions Required for Oracle Clusterware Files
To prevent the operating system from activating shared volume groups when it starts, you must edit the /etc/lvmrc file on every node, as follows:

Create a backup copy of the /etc/lvmrc file:
# cp /etc/lvmrc /etc/lvmrc_orig
Open the /etc/lvmrc file in any text editor and search for the AUTO_VG_ACTIVATE flag.
If necessary, change the value of the AUTO_VG_ACTIVATE flag to 0, to disable automatic volume group activation, as follows:
AUTO_VG_ACTIVATE=0
Search for the custom_vg_activation function in the /etc/lvmrc file.

Add vgchange commands to the function, as shown in the following example, to automatically activate existing local volume groups:
custom_vg_activation()
{
        # e.g. /sbin/vgchange -a y -s
        #      parallel_vg_sync "/dev/vg00 /dev/vg01"
        #      parallel_vg_sync "/dev/vg02 /dev/vg03"
        /sbin/vgchange -a y vg00
        /sbin/vgchange -a y vg01
        /sbin/vgchange -a y vg02
        return 0
}
In this example, vg00, vg01, and vg02 are the volume groups that you want to activate automatically when the system restarts.
If you are installing Oracle Clusterware or Oracle Clusterware and Oracle Real Application Clusters on an HP-UX cluster without HP Serviceguard Extension for RAC, then you must use shared raw disk devices for the Oracle Clusterware files. You can also use shared raw disk devices for database file storage, however, Oracle recommends that you use Automatic Storage Management to store database files in this situation. This section describes how to configure the shared raw disk devices for Oracle Clusterware files (Oracle Cluster Registry and Oracle Clusterware voting disk) and database files.
Table 3-3 lists the number and size of the raw disk devices that you must configure for database files.
Note:
Because each file requires exclusive use of a complete disk device, Oracle recommends that, if possible, you use disk devices with sizes that closely match the size requirements of the files that they will store. You cannot use the disks that you choose for these files for any other purpose.

Table 3-3 Raw Disk Devices Required for Database Files on HP-UX
To configure shared raw disk devices for Oracle Clusterware files, database files, or both:
If you intend to use raw disk devices for database file storage, then choose a name for the database that you want to create.
The name that you choose must start with a letter and have no more than four characters, for example, orcl.
Identify or configure the required disk devices.
The disk devices must be shared on all of the cluster nodes.
To ensure that the disks are available, enter the following command on every node:
# /usr/sbin/ioscan -fun -C disk
The output from this command is similar to the following:
Class     I  H/W Path       Driver  S/W State  H/W Type  Description
==========================================================================
disk      0  0/0/1/0.6.0    sdisk   CLAIMED    DEVICE    HP      DVD-ROM 6x/32x
                            /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
disk      1  0/0/1/1.2.0    sdisk   CLAIMED    DEVICE    SEAGATE ST39103LC
                            /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
This command displays information about each disk attached to the system, including the block device name (/dev/dsk/cxtydz) and the character raw device name (/dev/rdsk/cxtydz).

If the ioscan command does not display device name information for a device that you want to use, then enter the following command to install the special device files for any new devices:
# /usr/sbin/insf -e
For each disk that you want to use, enter the following command on any node to verify that it is not already part of an LVM volume group:
# /sbin/pvdisplay /dev/dsk/cxtydz
If this command displays volume group information, then the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group.
Note:
If you are using different volume management software, for example VERITAS Volume Manager, then refer to the appropriate documentation for information about verifying that a disk is not in use.

If the ioscan command shows different device names for the same device on any node, then:

Change directory to the /dev/rdsk directory.
Enter the following command to list the raw disk device names and their associated major and minor numbers:
# ls -la
The output from this command is similar to the following for each disk device:
crw-r--r-- 1 bin sys 188 0x032000 Nov 4 2003 c3t2d0
In this example, 188 is the device major number and 0x032000 is the device minor number.
Enter the following command to create a new device file for the disk that you want to use, specifying the same major and minor number as the existing device file:
Note:
Oracle recommends that you use the alternative device file names shown in the previous table.

# mknod ora_ocr_raw_100m c 188 0x032000
Repeat these steps on each node, specifying the correct major and minor numbers for the new device files on each node.
Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk device that you want to use:
Note:
If you are using a multi-pathing disk driver with Automatic Storage Management, then ensure that you set the permissions only on the correct logical device name for the disk.

If you created an alternative device file for the device, then set the permissions on that device file.
OCR:
# chown root:oinstall /dev/rdsk/cxtydz
# chmod 640 /dev/rdsk/cxtydz
Oracle Clusterware voting disk or database files:
# chown oracle:dba /dev/rdsk/cxtydz
# chmod 660 /dev/rdsk/cxtydz
If you are using raw disk devices for database files, then follow these steps to create the Oracle Database Configuration Assistant raw device mapping file:
Note:
You must complete this procedure only if you are using raw devices for database files. The Oracle Database Configuration Assistant raw device mapping file enables Oracle Database Configuration Assistant to identify the appropriate raw disk device for each database file. You do not specify the raw devices for the Oracle Clusterware files in the Oracle Database Configuration Assistant raw device mapping file.

Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:
Bourne, Bash, or Korn shell:
$ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
C shell:
% setenv ORACLE_BASE /u01/app/oracle
Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/dbname
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata
In this example, dbname is the name of the database that you chose previously.
Change directory to the $ORACLE_BASE/oradata/dbname directory.
Using any text editor, create a text file similar to the following that identifies the disk device file name associated with each database file.
Oracle recommends that you use a file name similar to dbname_raw.conf for this file.
Note:
The following example shows a sample mapping file for a two-instance RAC cluster. Some of the devices use alternative disk device file names. Ensure that the device file name that you specify identifies the same disk device on all nodes.

system=/dev/rdsk/c2t1d1
sysaux=/dev/rdsk/c2t1d2
example=/dev/rdsk/c2t1d3
users=/dev/rdsk/c2t1d4
temp=/dev/rdsk/c2t1d5
undotbs1=/dev/rdsk/c2t1d6
undotbs2=/dev/rdsk/c2t1d7
redo1_1=/dev/rdsk/c2t1d8
redo1_2=/dev/rdsk/c2t1d9
redo2_1=/dev/rdsk/c2t1d10
redo2_2=/dev/rdsk/c2t1d11
control1=/dev/rdsk/c2t1d12
control2=/dev/rdsk/c2t1d13
spfile=/dev/rdsk/dbname_spfile_raw_5m
pwdfile=/dev/rdsk/dbname_pwdfile_raw_5m
In this example, dbname is the name of the database.
Use the following guidelines when creating or editing this file:
Each line in the file must have the following format:
database_object_identifier=device_file_name
The alternative device file names suggested in the previous table include the database object identifier that you must use in this mapping file. For example, in the following alternative disk device file name, redo1_1 is the database object identifier:
rac_redo1_1_raw_120m
For a RAC database, the file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.
Specify at least two control files (control1, control2).
To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.
Save the file and note the file name that you specified.
When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
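For example, a minimal sketch for the Bourne, Bash, or Korn shell, assuming the mapping file name and location used in this procedure:

$ DBCA_RAW_CONFIG=$ORACLE_BASE/oradata/dbname/dbname_raw.conf ; export DBCA_RAW_CONFIG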
When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:
/dev/rdsk/cxtydz
Note:
The following subsections describe how to create logical volumes on systems with HP Serviceguard Extension, using the command line. You can use SAM to complete the same tasks. Refer to the HP-UX documentation for more information about using SAM.

This section describes how to configure shared raw logical volumes for Oracle Clusterware and database file storage for an Oracle Real Application Clusters (RAC) database. The procedures in this section describe how to create a new shared volume group that contains the logical volumes required for both types of files.
To use shared raw logical volumes, HP Serviceguard Extension for RAC must be installed on all cluster nodes. If HP Serviceguard Extension for RAC is not installed, then you can use shared raw disk devices to store the Oracle Clusterware or database files. However, Oracle recommends that you use this method only for the Oracle Clusterware files and use an alternative method such as Automatic Storage Management for database file storage.
Before you continue, review the following guidelines which contain important information about using shared logical volumes with this release of RAC:
You must use shared volume groups for Oracle Clusterware and database files.
The Oracle Clusterware files require less than 200 MB of disk space. To make efficient use of the disk space in a volume group, Oracle recommends that you use the same shared volume group for the logical volumes for both the Oracle Clusterware files and the database files.
If you are upgrading an existing Oracle9i release 2 RAC installation that uses raw logical volumes, then you can use the existing SRVM configuration repository logical volume for the OCR and create a new logical volume in the same volume group for the Oracle Clusterware voting disk. However, before you install Oracle Clusterware, you must remove this volume group from any Serviceguard package that currently activates it.
See Also:
The HP Serviceguard or HP Serviceguard Extension for RAC documentation for information about removing a volume group from a Serviceguard package.

Note:
If you are upgrading a database, then you must also create a new logical volume for the SYSAUX tablespace. Refer to the "Create Raw Logical Volumes in the New Volume Group" section for more information about the requirements for the Oracle Clusterware voting disk and SYSAUX logical volumes.

You must use either your own startup script or a Serviceguard package to activate new or existing volume groups that contain only database files and Oracle Clusterware files.
See Also:
The HP Serviceguard documentation for information about creating a Serviceguard package to activate a shared volume group for RAC.

All shared volume groups that you intend to use for Oracle Clusterware or database files must be activated in shared mode before you start the installation.
All shared volume groups that you are using for RAC, including the volume group that contains the Oracle Clusterware files, must be specified in the cluster configuration file using the parameter OPS_VOLUME_GROUP.
Note:
If you create a new shared volume group for RAC on an existing HP Serviceguard cluster, then you must reconfigure and restart the cluster before installing Oracle Clusterware. Refer to the HP Serviceguard documentation for information about configuring the cluster and specifying shared volume groups.

The procedures in this section describe how to create basic volume groups and volumes. If you want to configure more complex volumes, using mirroring for example, then use this section in conjunction with the HP Serviceguard documentation.
To create a volume group:
If necessary, install the shared disks that you intend to use for the database.
To ensure that the disks are available, enter the following command on every node:
# /sbin/ioscan -fun -C disk
The output from this command is similar to the following:
Class     I  H/W Path       Driver  S/W State  H/W Type  Description
==========================================================================
disk      0  0/0/1/0.6.0    sdisk   CLAIMED    DEVICE    HP      DVD-ROM 6x/32x
                            /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
disk      1  0/0/1/1.2.0    sdisk   CLAIMED    DEVICE    SEAGATE ST39103LC
                            /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
disk      2  0/0/2/0.2.0    sdisk   CLAIMED    DEVICE    SEAGATE ST118202LC
                            /dev/dsk/c2t2d0   /dev/rdsk/c2t2d0
This command displays information about each disk attached to the system, including the block device name (/dev/dsk/cxtydz) and the character raw device name (/dev/rdsk/cxtydz).

If the ioscan command does not display device name information for a device that you want to use, then enter the following command to install the special device files for any new devices:
# /usr/sbin/insf -e
For each disk that you want to add to the volume group, enter the following command on any node to verify that it is not already part of an LVM volume group:
# /sbin/pvdisplay /dev/dsk/cxtydz
If this command displays volume group information, then the disk is already part of a volume group.
For each disk that you want to add to the volume group, enter a command similar to the following on any node:
# /sbin/pvcreate /dev/rdsk/cxtydz
To create a directory for the volume group in the /dev directory, enter a command similar to the following, where vg_name is the name that you want to use for the volume group:
# mkdir /dev/vg_name
To identify used device minor numbers, enter the following command on each node of the cluster:
# ls -la /dev/*/group
This command displays information about the device numbers used by all configured volume groups, similar to the following:
crw-r----- 1 root sys 64 0x000000 Mar 4 2002 /dev/vg00/group
crw-r--r-- 1 root sys 64 0x010000 Mar 4 2002 /dev/vg01/group
In this example, 64 is the major number used by all volume group devices, and 0x000000 and 0x010000 are the minor numbers used by volume groups vg00 and vg01 respectively. Minor numbers have the format 0xnn0000, where nn is a number in the range 00 to the value of the maxvgs kernel parameter minus 1. The default value for the maxvgs parameter is 10, so the default range is 00 to 09.
Identify an appropriate minor number that is unused on all nodes in the cluster.
To create the volume group and activate it, enter commands similar to the following:
# /sbin/mknod /dev/vg_name/group c 64 0xnn0000
# /sbin/vgcreate /dev/vg_name /dev/dsk/cxtydz ...
# /sbin/vgchange -a y vg_name
In this example:
vg_name is the name that you want to give to the volume group

0xnn0000 is a minor number that is unused on all nodes in the cluster

/dev/dsk/cxtydz... is a list of one or more block device names for the disks that you want to add to the volume group
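For example, the following sketch assumes a volume group named vg_rac (a hypothetical name), the minor number 0x020000 unused on all nodes, and two hypothetical shared disks that have already been initialized with pvcreate:

# mkdir /dev/vg_rac
# /sbin/mknod /dev/vg_rac/group c 64 0x020000
# /sbin/vgcreate /dev/vg_rac /dev/dsk/c4t1d0 /dev/dsk/c4t2d0
# /sbin/vgchange -a y vg_rac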
To create the required raw logical volumes in the new volume group:
Choose a name for the database that you want to create.
The name that you choose must start with a letter and have no more than four characters, for example, orcl.
Identify the logical volumes that you must create.
Table 3-4 lists the number and size of the logical volumes that you must create for Oracle Clusterware files.
Table 3-4 Raw Logical Volumes Required for Database Files on HP-UX
To create each required logical volume, enter a command similar to the following:
# /sbin/lvcreate -n LVname -L size /dev/vg_name
In this example:
LVname is the name of the logical volume that you want to create. Oracle recommends that you use the sample names shown in the previous table for the logical volumes, substituting the dbname variable in the sample logical volume name with the name that you chose for the database in step 1.

vg_name is the name of the volume group where you want to create the logical volume

size is the size of the logical volume in megabytes
The following example shows a sample command used to create an 800 MB logical volume in the oracle_vg volume group for the SYSAUX tablespace of a database named test:
# /sbin/lvcreate -n test_sysaux_raw_800m -L 800 /dev/oracle_vg
Change the owner, group, and permissions on the character device files associated with the logical volumes that you created, as follows:
# chown oracle:dba /dev/vg_name/r*
# chmod 755 /dev/vg_name
# chmod 660 /dev/vg_name/r*
Change the owner and group on the character device file associated with the logical volume for the Oracle Cluster Registry, as follows:
# chown root:oinstall /dev/vg_name/rora_ocr_raw_100m
To export the volume group and import it on the other cluster nodes:

Enter a command similar to the following to deactivate the volume group:

# /sbin/vgchange -a n vg_name
To export the description of the volume group and its associated logical volumes to a map file, enter a command similar to the following:
# /sbin/vgexport -v -s -p -m /tmp/vg_name.map /dev/vg_name
Enter commands similar to the following to copy the map file to the other cluster nodes:
# rcp /tmp/vg_name.map nodename:/tmp/vg_name.map
Enter commands similar to the following on the other cluster nodes to import the volume group that you created on the first node:
# mkdir /dev/vg_name
# /sbin/mknod /dev/vg_name/group c 64 0xnn0000
# /sbin/vgimport -v -s -m /tmp/vg_name.map /dev/vg_name
Enter commands similar to the following on the other cluster nodes to change the owner, group, and permissions on the character device files associated with the logical volumes that you created:
# chown oracle:dba /dev/vg_name/r*
# chmod 755 /dev/vg_name
# chmod 660 /dev/vg_name/r*
Change the owner and group on the character device file associated with the logical volume for the Oracle Cluster Registry, as follows:
# chown root:oinstall /dev/vg_name/rora_ocr_raw_100m
Note:
You must complete this procedure only if you are using raw logical volumes for database files. You do not specify the raw logical volumes for the Oracle Clusterware files in the Oracle Database Configuration Assistant raw device mapping file.

To enable Oracle Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:
Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:
Bourne, Bash, or Korn shell:
$ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
C shell:
% setenv ORACLE_BASE /u01/app/oracle
Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/dbname
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata
In this example, dbname is the name of the database that you chose previously.
Change directory to the $ORACLE_BASE/oradata/dbname directory.
Enter a command similar to the following to create a text file that you can use to create the raw device mapping file:
# find /dev/vg_name -user oracle -name 'r*' -print > dbname_raw.conf
Edit the dbname_raw.conf file in any text editor to create a file similar to the following:
Note:
The following example shows a sample mapping file for a two-instance RAC cluster.

system=/dev/vg_name/rdbname_system_raw_500m
sysaux=/dev/vg_name/rdbname_sysaux_raw_800m
example=/dev/vg_name/rdbname_example_raw_160m
users=/dev/vg_name/rdbname_users_raw_120m
temp=/dev/vg_name/rdbname_temp_raw_250m
undotbs1=/dev/vg_name/rdbname_undotbs1_raw_500m
undotbs2=/dev/vg_name/rdbname_undotbs2_raw_500m
redo1_1=/dev/vg_name/rdbname_redo1_1_raw_120m
redo1_2=/dev/vg_name/rdbname_redo1_2_raw_120m
redo2_1=/dev/vg_name/rdbname_redo2_1_raw_120m
redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
control1=/dev/vg_name/rdbname_control1_raw_110m
control2=/dev/vg_name/rdbname_control2_raw_110m
spfile=/dev/vg_name/rdbname_spfile_raw_5m
pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m
In this example:
vg_name is the name of the volume group

dbname is the name of the database
Use the following guidelines when creating or editing this file:
Each line in the file must have the following format:
database_object_identifier=logical_volume
The logical volume names suggested in this manual include the database object identifier that you must use in this mapping file. For example, in the following logical volume name, redo1_1 is the database object identifier:
/dev/oracle_vg/rrac_redo1_1_raw_120m
The file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.
Specify at least two control files (control1, control2).
To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.
Save the file and note the file name that you specified.
When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
Database files consist of the files that make up the database, and the recovery area files. There are three options for storing database files:
Network File System (NFS)
Automatic Storage Management (ASM)
Raw partitions (Database files only--not for the recovery area)
During configuration of Oracle Clusterware, if you selected NFS, and the volumes that you created are large enough to hold the database files and recovery files, then you have completed required pre-installation steps. You can proceed to Chapter 4, "Installing Oracle Clusterware".
If you want to place your database files on ASM, then proceed to Configuring Disks for Automatic Storage Management.
If you want to place your database files on raw devices, and manually provide storage management for your database and recovery files, then proceed to "Configuring Disks for Database Files on Raw Devices".
Note:
Databases can consist of a mixture of ASM files and non-ASM files. Refer to Oracle Database Administrator's Guide for additional information about ASM.

This section describes how to configure disks for use with Automatic Storage Management. Before you configure the disks, you must determine the number of disks and the amount of free disk space that you require. The following sections describe how to identify the requirements and configure the disks on each platform:
Note:
Although this section refers to disks, you can also use zero-padded files on a certified NAS storage device in an Automatic Storage Management disk group. Refer to Oracle Database Installation Guide for HP-UX for information about creating and configuring NAS-based files for use in an Automatic Storage Management disk group.

Note:
For the most up-to-date information about supported configurations, refer to the Certify pages on the OracleMetaLink Web site at the following URL: http://metalink.oracle.com
To identify the storage requirements for using Automatic Storage Management, you must determine the number of devices and the amount of free disk space that you require. To complete this task, follow these steps:
Determine whether you want to use Automatic Storage Management for Oracle Database files, recovery files, or both.
Note:
You do not have to use the same storage mechanism for database files and recovery files. You can use the file system for one file type and Automatic Storage Management for the other.

If you choose to enable automated backups and you do not have a shared file system available, then you must choose Automatic Storage Management for recovery file storage.
If you enable automated backups during the installation, you can choose Automatic Storage Management as the storage mechanism for recovery files by specifying an Automatic Storage Management disk group for the flash recovery area. Depending on how you choose to create a database during the installation, you have the following options:
If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option) then you can decide whether you want to use the same Automatic Storage Management disk group for database files and recovery files, or you can choose to use different disk groups for each file type.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must use the same Automatic Storage Management disk group for database files and recovery files.
Choose the Automatic Storage Management redundancy level that you want to use for the Automatic Storage Management disk group.
The redundancy level that you choose for the Automatic Storage Management disk group determines how Automatic Storage Management mirrors files in the disk group and determines the number of disks and amount of disk space that you require, as follows:
External redundancy
An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.
Because Automatic Storage Management does not mirror data in an external redundancy disk group, Oracle recommends that you use only RAID or similar devices that provide their own data protection mechanisms as disk devices in this type of disk group.
Normal redundancy
In a normal redundancy disk group, Automatic Storage Management uses two-way mirroring by default, to increase performance and reliability. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.
For most installations, Oracle recommends that you select normal redundancy disk groups.
High redundancy
In a high redundancy disk group, Automatic Storage Management uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.
While high redundancy disk groups do provide a high level of data protection, you must consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.
Determine the total amount of disk space that you require for the database files and recovery files.
Use the following table to determine the minimum number of disks and the minimum disk space requirements for the installation:
Redundancy Level | Minimum Number of Disks | Database Files | Recovery Files | Both File Types |
---|---|---|---|---|
External | 1 | 1.15 GB | 2.3 GB | 3.45 GB |
Normal | 2 | 2.3 GB | 4.6 GB | 6.9 GB |
High | 3 | 3.45 GB | 6.9 GB | 10.35 GB |
For RAC installations, you must also add additional disk space for the Automatic Storage Management metadata. You can use the following formula to calculate the additional disk space requirements (in MB):
15 + (2 * number_of_disks) + (126 * number_of_Automatic_Storage_Management_instances)
For example, for a four-node RAC installation, using three disks in a high redundancy disk group, you require an additional 525 MB of disk space:
15 + (2 * 3) + (126 * 4) = 525
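To evaluate the formula for a different configuration, you can use shell arithmetic. A minimal sketch, assuming a two-node cluster (two Automatic Storage Management instances) and four disks:

$ echo $(( 15 + (2 * 4) + (126 * 2) ))
275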
If an Automatic Storage Management instance is already running on the system, then you can use an existing disk group to meet these storage requirements. If necessary, you can add disks to an existing disk group during the installation.
The following section describes how to identify existing disk groups and determine the free disk space that they contain.
Optionally, identify failure groups for the Automatic Storage Management disk group devices.
Note:
You need to complete this step only if you intend to use an installation method that runs Database Configuration Assistant in interactive mode, for example, if you intend to choose the Custom installation type or the Advanced database configuration option. Other installation types do not enable you to specify failure groups.

If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.
Note:
If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.

If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Automatic Storage Management disk group should be the same size and have the same performance characteristics.
Do not specify more than one partition on a single physical disk as a disk group device. Automatic Storage Management expects each disk group device to be on a separate physical disk.
Although you can specify a logical volume as a device in an Automatic Storage Management disk group, Oracle does not recommend its use. Logical volume managers can hide the physical disk architecture, preventing Automatic Storage Management from optimizing I/O across the physical devices.
See Also:
The "Configuring Disks for Automatic Storage Management" section for information about completing this task

If you want to store either database or recovery files in an existing Automatic Storage Management disk group, then you have the following choices, depending on the installation method that you select:
If you select an installation method that runs Database Configuration Assistant in interactive mode (for example, by choosing the Advanced database configuration option), then you can decide whether you want to create a disk group, or use an existing one.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.
Note:
The Automatic Storage Management instance that manages the existing disk group can be running in a different Oracle home directory.

To determine whether an existing Automatic Storage Management disk group exists, or to determine whether there is sufficient disk space in a disk group, you can use Oracle Enterprise Manager Grid Control or Database Control. Alternatively, you can use the following procedure:
View the contents of the oratab file to determine whether an Automatic Storage Management instance is configured on the system:
# more /etc/oratab
If an Automatic Storage Management instance is configured on the system, then the oratab file should contain a line similar to the following:
+ASM2:oracle_home_path
In this example, +ASM2 is the system identifier (SID) of the Automatic Storage Management instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Automatic Storage Management instance begins with a plus sign.
Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Automatic Storage Management instance that you want to use.
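For example, a minimal sketch for the Bourne, Bash, or Korn shell, assuming the +ASM2 SID from the oratab example above and a hypothetical Oracle home path:

$ ORACLE_SID=+ASM2 ; export ORACLE_SID
$ ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1 ; export ORACLE_HOME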
Connect to the Automatic Storage Management instance as the SYS user with SYSDBA privilege and start the instance if necessary:
# $ORACLE_HOME/bin/sqlplus "SYS/SYS_password as SYSDBA"
SQL> STARTUP
Enter the following command to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:
SQL> SELECT NAME,TYPE,TOTAL_MB,FREE_MB FROM V$ASM_DISKGROUP;
From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.
If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.
Note:
If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.

To configure disks for use with ASM on HP-UX, follow these steps:
If necessary, install the shared disks that you intend to use for the ASM disk group.
To make sure that the disks are available, enter the following command on every node:
# /usr/sbin/ioscan -fun -C disk
The output from this command is similar to the following:
Class     I  H/W Path       Driver  S/W State  H/W Type  Description
==========================================================================
disk      0  0/0/1/0.6.0    sdisk   CLAIMED    DEVICE    HP      DVD-ROM 6x/32x
                            /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
disk      1  0/0/1/1.2.0    sdisk   CLAIMED    DEVICE    SEAGATE ST39103LC
                            /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
This command displays information about each disk attached to the system, including the block device name (/dev/dsk/cxtydz) and the character raw device name (/dev/rdsk/cxtydz).

If the ioscan command does not display device name information for a device that you want to use, enter the following command to install the special device files for any new devices:
# /usr/sbin/insf -e
For each disk that you want to add to a disk group, enter the following command on any node to verify that it is not already part of an LVM volume group:
# /sbin/pvdisplay /dev/dsk/cxtydz
If this command displays volume group information, the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group.
Note:
If you are using different volume management software, for example VERITAS Volume Manager, refer to the appropriate documentation for information about verifying that a disk is not in use.

Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk that you want to add to a disk group:
# chown oracle:dba /dev/rdsk/cxtydz
# chmod 660 /dev/rdsk/cxtydz
Note:
If you are using a multi-pathing disk driver with ASM, make sure that you set the permissions only on the correct logical device name for the disk.

If the nodes are configured differently, the device name for a particular device might be different on some nodes. Make sure that you specify the correct device names on each node.
If you also want to use raw devices for storage, then refer to the "Configuring Storage for Oracle Clusterware Files on Raw Devices" section.
The following subsections describe how to configure raw partitions for database files:
Identifying Partitions and Configuring Raw Devices for Database Files
Creating the Oracle Database Configuration Assistant Raw Device Mapping File
Table 3-5 lists the number and size of the raw disk devices that you must configure for database files.
Note:
Because each file requires exclusive use of a complete disk device, Oracle recommends that, if possible, you use disk devices with sizes that closely match the size requirements of the files that they will store. You cannot use the disks that you choose for these files for any other purpose.

Table 3-5 Raw Disk Devices Required for Database Files on HP-UX
If you intend to use raw disk devices for database file storage, then choose a name for the database that you want to create.
The name that you choose must start with a letter and have no more than four characters, for example, orcl.
Identify or configure the required disk devices.
The disk devices must be shared on all of the cluster nodes.
To ensure that the disks are available, enter the following command on every node:
# /usr/sbin/ioscan -fun -C disk
The output from this command is similar to the following:
Class     I  H/W Path       Driver  S/W State  H/W Type  Description
==========================================================================
disk      0  0/0/1/0.6.0    sdisk   CLAIMED    DEVICE    HP      DVD-ROM 6x/32x
                            /dev/dsk/c0t6d0   /dev/rdsk/c0t6d0
disk      1  0/0/1/1.2.0    sdisk   CLAIMED    DEVICE    SEAGATE ST39103LC
                            /dev/dsk/c1t2d0   /dev/rdsk/c1t2d0
This command displays information about each disk attached to the system, including the block device name (/dev/dsk/cxtydz) and the character raw device name (/dev/rdsk/cxtydz).

If the ioscan command does not display device name information for a device that you want to use, then enter the following command to install the special device files for any new devices:
# /usr/sbin/insf -e
For each disk that you want to use, enter the following command on any node to verify that it is not already part of an LVM volume group:
# /sbin/pvdisplay /dev/dsk/cxtydz
If this command displays volume group information, then the disk is already part of a volume group. The disks that you choose must not be part of an LVM volume group.
Note:
If you are using different volume management software, for example VERITAS Volume Manager, then refer to the appropriate documentation for information about verifying that a disk is not in use.

If the ioscan command shows different device names for the same device on any node, then:

Change directory to the /dev/rdsk directory.
Enter the following command to list the raw disk device names and their associated major and minor numbers:
# ls -la
The output from this command is similar to the following for each disk device:
crw-r--r-- 1 bin sys 188 0x032000 Nov 4 2003 c3t2d0
In this example, 188 is the device major number and 0x032000 is the device minor number.
Enter the following command to create a new device file for the disk that you want to use, specifying the same major and minor number as the existing device file:
Note:
Oracle recommends that you use the alternative device file names shown in the previous table.

# mknod ora_ocr_raw_100m c 188 0x032000
Repeat these steps on each node, specifying the correct major and minor numbers for the new device files on each node.
Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk device that you want to use:
Note:
If you are using a multi-pathing disk driver with Automatic Storage Management, then ensure that you set the permissions only on the correct logical device name for the disk.

If you created an alternative device file for the device, then set the permissions on that device file.
OCR:
# chown root:oinstall /dev/rdsk/cxtydz
# chmod 640 /dev/rdsk/cxtydz
Oracle Clusterware voting disk or database files:
# chown oracle:dba /dev/rdsk/cxtydz
# chmod 660 /dev/rdsk/cxtydz
If you are using raw disk devices for database files, then follow these steps to create the Oracle Database Configuration Assistant raw device mapping file:
Note:
You must complete this procedure only if you are using raw devices for database files. The Oracle Database Configuration Assistant raw device mapping file enables Oracle Database Configuration Assistant to identify the appropriate raw disk device for each database file. You do not specify the raw devices for the Oracle Clusterware files in the Oracle Database Configuration Assistant raw device mapping file.

Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:
Bourne, Bash, or Korn shell:
$ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
C shell:
% setenv ORACLE_BASE /u01/app/oracle
Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/dbname
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata
In this example, dbname is the name of the database that you chose previously.
Change directory to the $ORACLE_BASE/oradata/dbname directory.
Using any text editor, create a text file similar to the following that identifies the disk device file name associated with each database file.
Oracle recommends that you use a file name similar to dbname_raw.conf for this file.
Note:
The following example shows a sample mapping file for a two-instance RAC cluster. Some of the devices use alternative disk device file names. Ensure that the device file name that you specify identifies the same disk device on all nodes.

system=/dev/rdsk/c2t1d1
sysaux=/dev/rdsk/c2t1d2
example=/dev/rdsk/c2t1d3
users=/dev/rdsk/c2t1d4
temp=/dev/rdsk/c2t1d5
undotbs1=/dev/rdsk/c2t1d6
undotbs2=/dev/rdsk/c2t1d7
redo1_1=/dev/rdsk/c2t1d8
redo1_2=/dev/rdsk/c2t1d9
redo2_1=/dev/rdsk/c2t1d10
redo2_2=/dev/rdsk/c2t1d11
control1=/dev/rdsk/c2t1d12
control2=/dev/rdsk/c2t1d13
spfile=/dev/rdsk/dbname_spfile_raw_5m
pwdfile=/dev/rdsk/dbname_pwdfile_raw_5m
In this example, dbname is the name of the database.
Use the following guidelines when creating or editing this file:
Each line in the file must have the following format:
database_object_identifier=device_file_name
The alternative device file names suggested in the previous table include the database object identifier that you must use in this mapping file. For example, in the following alternative disk device file name, redo1_1 is the database object identifier:
rac_redo1_1_raw_120m
For a RAC database, the file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.
Specify at least two control files (control1, control2).
To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.
Save the file and note the file name that you specified.
When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
When you are installing Oracle Clusterware, you must enter the paths to the appropriate device files when prompted for the path of the OCR and Oracle Clusterware voting disk, for example:
/dev/rdsk/cxtydz
Note:
You must complete this procedure only if you are using raw logical volumes for database files.

To enable Oracle Database Configuration Assistant to identify the appropriate raw device for each database file, you must create a raw device mapping file, as follows:
Set the ORACLE_BASE environment variable to specify the Oracle base directory that you identified or created previously:
Bourne, Bash, or Korn shell:
$ ORACLE_BASE=/u01/app/oracle ; export ORACLE_BASE
C shell:
% setenv ORACLE_BASE /u01/app/oracle
Create a database file subdirectory under the Oracle base directory and set the appropriate owner, group, and permissions on it:
# mkdir -p $ORACLE_BASE/oradata/dbname
# chown -R oracle:oinstall $ORACLE_BASE/oradata
# chmod -R 775 $ORACLE_BASE/oradata
In this example, dbname is the name of the database that you chose previously.
Change directory to the $ORACLE_BASE/oradata/dbname directory.
Enter a command similar to the following to create a text file that you can use to create the raw device mapping file:
# find /dev/vg_name -user oracle -name 'r*' -print > dbname_raw.conf
Edit the dbname_raw.conf file in any text editor to create a file similar to the following:
Note:
The following example shows a sample mapping file for a two-instance RAC cluster.

system=/dev/vg_name/rdbname_system_raw_500m
sysaux=/dev/vg_name/rdbname_sysaux_raw_800m
example=/dev/vg_name/rdbname_example_raw_160m
users=/dev/vg_name/rdbname_users_raw_120m
temp=/dev/vg_name/rdbname_temp_raw_250m
undotbs1=/dev/vg_name/rdbname_undotbs1_raw_500m
undotbs2=/dev/vg_name/rdbname_undotbs2_raw_500m
redo1_1=/dev/vg_name/rdbname_redo1_1_raw_120m
redo1_2=/dev/vg_name/rdbname_redo1_2_raw_120m
redo2_1=/dev/vg_name/rdbname_redo2_1_raw_120m
redo2_2=/dev/vg_name/rdbname_redo2_2_raw_120m
control1=/dev/vg_name/rdbname_control1_raw_110m
control2=/dev/vg_name/rdbname_control2_raw_110m
spfile=/dev/vg_name/rdbname_spfile_raw_5m
pwdfile=/dev/vg_name/rdbname_pwdfile_raw_5m
In this example:
vg_name is the name of the volume group

dbname is the name of the database
Use the following guidelines when creating or editing this file:
Each line in the file must have the following format:
database_object_identifier=logical_volume
The logical volume names suggested in this manual include the database object identifier that you must use in this mapping file. For example, in the following logical volume name, redo1_1 is the database object identifier:
/dev/oracle_vg/rrac_redo1_1_raw_120m
The file must specify one automatic undo tablespace datafile (undotbsn) and two redo log files (redon_1, redon_2) for each instance.
Specify at least two control files (control1, control2).
To use manual instead of automatic undo management, specify a single RBS tablespace datafile (rbs) instead of the automatic undo management tablespace data files.
Save the file and note the file name that you specified.
When you are configuring the oracle user's environment later in this chapter, set the DBCA_RAW_CONFIG environment variable to specify the full path to this file.
To upgrade a 10.1 database on raw devices to a 10.2.0.2 database on block devices, use the following procedure:
Perform Oracle Clusterware and Oracle Real Application Clusters (RAC) upgrade steps (including database upgrade), as described in Oracle Database Upgrade Guide, 10g Release 2 (10.2), part number B14238-01.
Using the following procedure, stop all processes:
Shut down all processes in the Oracle home that can access a database, such as Oracle Enterprise Manager Database Control or iSQL*Plus.
Shut down all RAC instances on all nodes. To shut down all RAC instances for a database, enter the following command, where db_name is the name of the database:
$ oracle_home/bin/srvctl stop database -d db_name
Shut down all ASM instances on all nodes. To shut down an ASM instance, enter the following command, where node is the name of the node where the ASM instance is running:
$ oracle_home/bin/srvctl stop asm -n node
Stop all node applications on all nodes. To stop node applications running on a node, enter the following command, where node is the name of the node where the applications are running:
$ oracle_home/bin/srvctl stop nodeapps -n node