Oracle® Database User's Guide 10g Release 2 (10.2) for IBM z/OS (OS/390) Part Number B25396-01
This chapter discusses z/OS-specific issues and considerations with various Oracle tool and utility programs that application developers are likely to use. (Tools used chiefly by database administrators or system administrators, such as RMAN, are covered in the Oracle Database System Administration Guide for IBM z/OS (OS/390).) Before using this chapter, you should be familiar with the material in Chapter 2, "Oracle Software Interaction with z/OS and Language Environment", which describes features and considerations common to most or all Oracle tools and utilities. You should also have access to the base or generic documentation for each tool or utility you are using, since the material in this chapter is supplemental to that.
Oracle supplies JCL procedures for some of the tools and utilities that support batch job execution. Consult with your system administrator to determine if these procedures are installed in your system and the procedure library and member names used.
This chapter contains the following sections:
SQL*Plus
SQL*Loader
Export and Import
Datapump Export and Import
TKPROF
SQL*Plus is a general purpose tool for issuing both prepared and ad hoc SQL statements and PL/SQL procedure calls. It also is used for database administration and operations functions, including creating database objects and structures, managing Oracle userids and object security, and starting up and shutting down database instances. When you run SQL*Plus on z/OS you can interact with Oracle servers on z/OS or on any other Oracle platform. Refer to the SQL*Plus User's Guide and Reference for complete generic information on SQL*Plus.
SQL*Plus on z/OS supports batch job, TSO, and z/OS UNIX shell execution environments. A JCL procedure for batch execution (default name ORASQL) is supplied by Oracle and may be installed on your system with the same or a different name. In TSO, both CALL and command processor (CP) invocation are supported. For the non-POSIX environments, the load module or program object name is SQLPLUS. In a z/OS UNIX shell (including OMVS under TSO), use the sqlplus (lower case) command to invoke this tool.
SQL*Plus reads C standard input (described in Chapter 2) as the primary source of input commands and SQL statements. If you do not supply an Oracle userid/password (or the special value /nolog) as an invocation parameter, SQL*Plus prompts for a userid and password and reads those from standard input before reading for normal commands. If you do this in a batch job, the prompt responses must appear in the first records or lines of standard input, ahead of any input commands or SQL statements. In interactive usage (TSO or a shell), these prompts and responses normally occur at the terminal. In this case, if the password is entered separately from the userid, it is not displayed at the terminal.
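As a minimal sketch of this arrangement in batch (the userid, password, and library data set names are hypothetical and will differ at your installation): because no userid/password parameter is supplied on the PARM, SQL*Plus prompts for both and reads the responses from the first two lines of SYSIN before processing the statements that follow.
//PLUS EXEC PGM=SQLPLUS,REGION=0M
//STEPLIB  DD DISP=SHR,DSN=ORACLE.V10G.CMDLOAD
//ORA$LIB  DD DISP=SHR,DSN=ORACLE.V10G.MESG
//ORA@ORA1 DD DUMMY
//SYSIN    DD *
scott
tiger
select sysdate from dual;
exit
/*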
For certain database control operations (such as Oracle startup and shutdown), SQL*Plus requires a special userid/password parameter that contains spaces, such as "/ AS SYSDBA." Refer to the section "Parameters Containing Spaces" for information on how to specify such values.
SQL*Plus output (including the aforementioned prompts) is written to C standard output. There are various options to control exactly what gets written as well as a SPOOL command to write output to a separate data set or file.
Interruption refers to using the terminal ATTN or PA1 key (in TSO) or a POSIX signal mechanism such as Ctl-C to interrupt in progress activity. Besides the normal interrupt handler that is established when connecting to Oracle from TSO or a shell, SQL*Plus establishes its own handler to allow termination of display output. If you generate an interrupt while using SQL*Plus, the behavior depends on whether SQL*Plus or the target Oracle server was in control when the interrupt was processed. If the Oracle server was in control (it was processing a SQL statement or PL/SQL procedure call from SQL*Plus) the interrupt halts the in progress statement with an ORA-01013 error, as described in Chapter 2. If the interrupt occurs while SQL*Plus is in control, driving the SQL*Plus-specific handler, it causes current fetches for any SELECT statement to stop. The typical occurrence of the latter situation is where you have issued a SQL SELECT statement that produces more output than expected. This provides an alternative to canceling or killing the entire SQL*Plus session to halt unwanted output.
SQL*Plus reads optional profile files during initialization (after logging on to Oracle). The files can contain SQL*Plus commands (such as SET commands) as well as SQL statements and PL/SQL blocks.
When SQL*Plus runs in a shell on z/OS, there are two profile files with the same names and locations as on other UNIX platforms. The site profile applies to all users and is read from $ORACLE_HOME/sqlplus/admin/glogin.sql. The user profile is read from login.sql in the first HFS directory (of multiple possible directories) in which a file of that name is found. The directories that are searched for the user profile are the current working directory followed by any directories listed in the SQLPATH environment variable. (Refer to the SQL*Plus User's Guide and Reference for details on using the SQLPATH environment variable.)
In z/OS batch and TSO (POSIX OFF), there is no site profile file. The user profile filespec is //DD:SQLLOGIN which means the SQLLOGIN DD is read for profile statements. The DD can specify any of the z/OS data set or file types permitted for SQL files, discussed in the section "SQL Files".
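As a simple sketch (the profile data set name is hypothetical), the SQLLOGIN DD can be supplied with a TSO ALLOCATE command before invoking SQL*Plus:
ALLOC FI(SQLLOGIN) DA('JSMITH.PROFILE.SQL') SHR REUSE
or with a DD statement in a batch job step:
//SQLLOGIN DD DISP=SHR,DSN=JSMITH.PROFILE.SQL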
Profile files are optional. If a file can't be opened, profile processing is skipped and no error message is issued.
Various SQL*Plus commands operate on files containing SQL, PL/SQL, and/or SQL*Plus commands. On z/OS, these files can be a sequential data set, a PDS member, an instream (DD *) data set, or an HFS file. A sequential or partitioned data set can have a record format of F, FB, V, or VB. Records in a data set (including instream) must not contain sequence numbers or other data not intended for SQL*Plus to read.
If the file you want to use is counter to the POSIX expectation (for example, an HFS file when POSIX is OFF), you must use an unambiguous filespec when specifying the file. When a file is specified as a data set name or HFS file name, the name is subject to extension processing (as discussed in Chapter 2) with the type suffix sql. The SQL*Plus commands @, @@, EDIT, GET, SAVE, and START all apply this extension to supplied filespecs.
In POSIX OFF environments, you can use FNA to manipulate data set filespecs produced by name extension or to associate disk space and DCB attributes with files that are written. Refer to Chapter 2 for a complete description of FNA. Use of FNA is required when you run Oracle catalog creation or upgrade scripts using SQL scripts that are members of a partitioned data set. In this case, FNA is used to transform simple 1-segment script names into PDS member names inside parentheses following a DD or data set name.
The SQL*Plus @@ operator requires additional explanation on z/OS. This operator is shorthand for the START command, similar to @ but with a slight difference: when an @@ command is coded in a file and the filespec supplied on the command is not absolute, the file is read from the same directory that contains the file in which the @@ command was coded. With data sets, "directory" refers to the high-level (leftmost) data set name qualifier, so the behavior of @@ with data sets might not be what you expect. In particular, you cannot code @@foo within a PDS member and assume that foo will be treated as another member name within that PDS. Without FNA, @@foo will end up trying to access a data set named hlq.FOO.SQL where hlq is the high-level dsname qualifier of the data set in which the @@foo command is supplied. To get different behavior, such as treating foo as a member name, you must use FNA.
The SQL*Plus EDIT command is designed to invoke an external editor of your choosing. In z/OS TSO (POSIX OFF), the only external editor supported is IBM's ISPF editor. In this environment, the SQL*Plus variable _EDITOR is automatically set to ISPF.
To be able to invoke the ISPF editor in TSO, SQL*Plus requires that the ISPF environment already be established. To accomplish this, invoke SQL*Plus from within an ISPF session (from the ISPF "Command Shell" panel). Keep in mind that a data set name filespec supplied on an EDIT command is subject to extension (and FNA) processing. HFS files cannot be edited with the ISPF editor using SQL*Plus EDIT.
In a z/OS UNIX shell, _EDITOR defaults to vi. To use a different editor, define the _EDITOR variable to the exact name of the editor executable or shell script. Whether the editor you use in a shell is capable of processing data sets (in addition to HFS files) depends on the editor. Editing data sets with the default z/OS UNIX editor (vi) is not supported.
Neither vi nor ISPF works in the TSO OMVS shell. To use EDIT when running SQL*Plus in OMVS, you must define _EDITOR to oedit before issuing the EDIT command, as follows:
SQL> def _editor=oedit
SQL> edit /tmp/load_it.sql
The editor, oedit, is a form of the ISPF editor that works in POSIX environments. It is the only editor supported when SQL*Plus runs in the OMVS shell. Only HFS files can be accessed with oedit.
The EDIT command is not available when SQL*Plus runs as a z/OS batch job.
The SPOOL command causes SQL*Plus to begin writing output (including command responses as well as SQL statement results) to a specified file. On z/OS, the filespec you supply on the SPOOL command can specify a sequential data set, PDS member, SYSOUT, or an HFS file. If the file you want to use is counter to the POSIX expectation, you must use an unambiguous filespec. If the filespec you supply is a data set or HFS file name, it is subject to extension with the suffix lst. When running SQL*Plus in batch or TSO, you can use FNA to manipulate the resulting filespec or to associate space or DCB attributes with the data set.
If you spool to a SYSOUT filespec, as in the following:
SQL> spool //s:*,,MCMDEV3
SQL> …
the output is freed (becomes eligible for routing or printing) as soon as the file is closed. The spool file is closed when you issue another SPOOL command (including the special command SPOOL OFF) or when you exit SQL*Plus.
The special command SPOOL OUT is not supported on z/OS. Attempting to use it will produce an error message.
The HOST command allows you to invoke an arbitrary external program or script from within a SQL*Plus session. This feature is implemented on z/OS using the C function system, described in the IBM manual C/C++ Run-Time Library Reference. Refer to this manual for details on the capabilities and limitations of the system function.
The behavior of system() (and thus of the HOST command) depends on the POSIX indicator and, in the POSIX OFF case, on command syntax and whether the environment is batch or TSO. In TSO, HOST can be used to invoke TSO commands, CLISTs, and REXX EXECs. To do this, code the command text after HOST exactly as it would be entered at the TSO READY prompt, as in the following example:
SQL> host listd misc.sql
JSMITH.MISC.SQL
--RECFM-LRECL-BLKSIZE-DSORG
  VB    255   4096    PO
--VOLUMES--
  PNW042
SQL>
In addition, in both TSO and batch, HOST can be used to invoke jobstep z/OS programs such as IBM utilities. To do this, special syntax (similar to the JCL EXEC statement) is required to indicate the program name and, optionally, PARM data in the command text as illustrated here:
SQL> host pgm=iebcopy,parm=COPY
IEBCOPY MESSAGES AND CONTROL STATEMENTS                        PAGE 1
IEB1035I JSMITH   $TS1     $TS1     13:59:38 THU 02 OCT 2003 PARM='COPY'
IEB1099I *** IEBCOPY IS NOT APF AUTHORIZED ***  $TS1
         COPY INDD=SYSUT1,OUTDD=SYSUT2           GENERATED STATEMENT
…
With this technique, letter case is preserved in any PARM data that you supply, which is why COPY is in upper case in the example. Refer to the IBM manual C/C++ Run-Time Library Reference for additional details on this command syntax.
In non-POSIX TSO and batch environments, you cannot use SQL*Plus HOST to invoke an HFS executable or shell script. Also, you cannot issue HOST alone (with no command text) in order to be placed at a general command prompt.
The system() function and HOST behave differently when you run SQL*Plus in a z/OS UNIX shell. The command text that you supply with HOST must resolve to a valid executable or shell script file in the HFS, or be a built-in shell command. Commands are processed with the shell indicated by the SHELL environment variable; if SHELL is not set, the default shell /bin/sh is used. Be aware that commands issued through HOST execute in a subshell relative to the shell in which SQL*Plus was invoked, so issuing commands that change the state of the current shell has no effect on the SQL*Plus session. For example, using HOST to issue a cd (change directory) command or to set an environment variable does not change the current working directory or the value of that environment variable as seen by SQL*Plus.
When SQL*Plus runs in a z/OS UNIX shell, HOST can be issued alone, without command text, causing the user to be placed at a general shell prompt. In this situation multiple commands can be issued in succession. Control returns to SQL*Plus when you end execution of the nested shell. For example:
SQL> host
$ cd /oradev/js1/src
$ ls -l *.c
-rw-rw-r--   1 JSMITH   PDRP88      7438 Nov  5 15:55 ocidemo3.c
$ rm ocidemo3.c
$ exit
SQL>
In this case, the cd command is effective in terms of setting the directory used by the subsequent commands within the subshell.
There is no direct way to invoke non-HFS z/OS programs or TSO commands using SQL*Plus HOST in a POSIX ON environment.
The SQL*Plus timing feature, activated with the command SET TIMING ON, reports elapsed time for server SQL and PL/SQL operations. On z/OS, the timing feature also reports the processor (CPU) time consumed by SQL*Plus during these operations. As described in Chapter 3, when you use XM protocol to connect to a local Oracle server, this time figure includes the Oracle server processing, which is executed by the SQL*Plus task in cross-memory mode. This allows the timing feature to be used to collect approximate server processing times for SQL and PL/SQL operations.
Be aware that server caching of data, SQL, and PL/SQL, and other workload-related factors, mean that processor time for a particular operation can vary significantly over multiple trials. Also, the processor time reported by the timing feature on z/OS does not include time consumed by server "slave" tasks used in parallel query and other parallel operations. Finally, when using TCP protocol rather than XM, the processor time reported by the timing feature does not include any Oracle server processor time.
Except for initialization failures, SQL*Plus normally sets a SUCCESS (zero) return code regardless of the success or failure of any SQL, PL/SQL, or SQL*Plus commands processed. To change this behavior, for example to set a nonzero return code if any SQL operation fails, you must use the EXIT command, possibly in conjunction with the WHENEVER command. EXIT allows you to set any valid numerical return code you want using either a specific number or the current value of a SQL*Plus variable. Remember, however, that in non-POSIX environments return codes are limited to the range 0-4095. If you issue EXIT with a number higher than 4095, the resulting return code is unpredictable.
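As a hedged sketch (the return code values 12 and 16 are arbitrary choices mirroring the batch example later in this section, and the table names are hypothetical), a script intended for batch execution might begin with WHENEVER commands so that a failing SQL statement or operating system error ends the job step with a nonzero return code:
whenever sqlerror exit 12
whenever oserror exit 16
insert into big_acct select * from acct_stage;
commit;
exit success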
You can also issue EXIT with one of three keywords whose numerical value depends on the platform. On z/OS, the values associated with these keywords are shown in the following table:
Keyword | Value |
---|---|
SUCCESS | 0 (the default when EXIT has no parameter) |
WARNING | 4 |
FAILURE | 8 |
On an initialization failure, SQL*Plus ends as though EXIT FAILURE occurred, producing a return code 8 on z/OS.
Refer to the Oracle Database SQL*Plus User's Guide and Reference for complete details on the EXIT and WHENEVER commands.
The following SQL*Plus features or commands are not available on z/OS:
Site profile file, when running in POSIX OFF environments.
SPOOL OUT command. This command results in an error message on z/OS.
SET NEWPAGE 0 command. This command is accepted but does not clear the terminal screen (as it does on some ports) when issued in TSO or a shell.
RUNFORM command. This command results in an error message on z/OS.
The following example is SQL*Plus executed as a batch job step with both the user profile file and standard input provided as instream data sets. An Oracle database instance with SID ORA1 is accessed using cross-memory (XM) protocol, with the connection specified using an ORA@ DD statement.
//PLUS EXEC PGM=SQLPLUS,PARM='/nolog',REGION=0M
//STEPLIB  DD DISP=SHR,DSN=ORACLE.V10G.CMDLOAD
//ORA$LIB  DD DISP=SHR,DSN=ORACLE.V10G.MESG
//ORA@ORA1 DD DUMMY
//SQLLOGIN DD *
whenever oserror exit 16
whenever sqlerror exit 12
set pagesize 0
/*
//SYSIN DD *
connect conklin/grommet6
create table big_acct (acct, balance, descr)
  as select account_primary, balance_fwd, account_pdesc
  from pmds011.master
  where (balance_fwd > 100000.00)
  and (account_pdesc is not null);
commit;
select * from sub_acct1 order by balance desc;
exit
/*
The following example shows SQL*Plus used in line-mode TSO and assumes the following:
The CMDLOAD data set is provided as STEPLIB, TSOLIB, or a linklist library.
The MESG data set is already allocated to ORA$LIB.
A TNSNAMES DD is allocated and contains a valid entry with the identifier ORALNX1. This is a TCP/IP connection to an Oracle 10g instance running on a z/Linux system.
READY
sqlplus pdbr/asdfghj@oralnx1

SQL*Plus: Release 10.1.0.2.0 - Production on Tue Mar 16 12:48:12 2004
Copyright (c) 1982, 2004, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0 - Production
With the Partitioning, Oracle Label Security and Data Mining options

SQL> host listd test.sql mem
JCONKLI.TEST.SQL
--RECFM-LRECL-BLKSIZE-DSORG
  VB    255   27998   PO
--VOLUMES--
  DSM119
--MEMBERS--
  RVNT030
  RVNT040
SQL> get test(rvnt030)
  1  SELECT DNAME, DACCT, DDESC FROM ANWR.MASTER WHERE
  2  (DLCODE IN (12, 43, 663, 900)) AND
  3* (DDESC NOT NULL);
SQL> l 1
  1* SELECT DNAME, DACCT, DDESC FROM ANWR.MASTER WHERE
SQL> c /ANWR./TEMP./
  1* SELECT DNAME, DACCT, DDESC FROM TEMP.MASTER WHERE
SQL> /
[SELECT output omitted]
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0 - Production
With the Partitioning, Oracle Label Security and Data Mining options
READY
SQL*Loader is an Oracle utility that loads external data into Oracle database tables. When you run SQL*Loader on z/OS, the external data can come from z/OS data sets (including VSAM) and HFS files, and can include character, zoned decimal, packed decimal, binary integer, and hexadecimal floating point (HFP or System/370 floating point) data. SQL*Loader on z/OS can load data into an Oracle database on z/OS or on any other Oracle platform. Refer to Oracle Database Utilities for complete generic information on SQL*Loader.
SQL*Loader on z/OS supports batch job, TSO, and shell execution environments. A JCL procedure for batch execution (default name ORALDR) is supplied by Oracle and may be installed on your system with the same or a different name. In TSO, both CALL and command processor (CP) invocation are supported. For the non-POSIX environments, the load module or program object name is SQLLDR. In a shell (including OMVS under TSO), use the sqlldr (lower case) command to invoke this utility.
Unlike most tools and utilities, SQL*Loader does not read C standard input as a primary source of input. Loader processing details are specified in a text control file whose filespec usually is supplied on the command line or PARM field. If you don't supply this parameter but you do supply a userid/password parameter, Loader prompts for the control filespec and reads it from C standard input. This is intended mainly for interactive (TSO or shell) execution; if you do this in a batch job, the prompt response must be the first record or line of standard input. If you supply neither a control file nor a userid/password as invocation parameters, SQL*Loader displays a help message and ends normally.
The parser used by SQL*Loader for command line parameters has its own syntax conventions (described in the generic documentation) and is sensitive to apostrophes (single quotes). If you use apostrophes in a command line filespec parameter, you must "escape" them with a preceding backslash in order to keep the parser from trying to interpret them as syntax elements. For example, to specify the CONTROL= parameter with the data set name filespec //'DMWONG.ORA.CTL(LOAD2)' in a batch job, code:
//LOAD EXEC PGM=SQLLDR,PARM='control=//\''DMWONG.ORA.CTL(LOAD2)\'''
Notice that the filespec apostrophes in the example are doubled, which is required to get an apostrophe through the JCL PARM mechanism, as well as being escaped with a preceding backslash for Loader's parser.
Command line parameters for SQL*Loader can easily exceed the 100-character maximum of the JCL PARM field or TSO CALL command. You can use the "++" parameter file mechanism described in Chapter 2 to supply Loader parameters in a data set. SQL*Loader also has a generic parameter file feature that is similar to the "++" mechanism but is available on all platforms (and also works in a POSIX shell, which is not true of "++"). If you use the generic parameter file, the filespec supplied for the PARFILE parameter is not subject to name extension before opening. This file can be a sequential or instream (DD *) data set, a PDS member, or an HFS file. The file must not contain record sequence numbers or other data that is not a SQL*Loader command line parameter.
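The following sketch shows the generic parameter file supplied through an instream DD (the userid, DD names, and parameter values are hypothetical); the CTL, LOG, and data DDs would be coded as in the full batch example later in this section:
//LOAD EXEC PGM=SQLLDR,REGION=0M,PARM='parfile=DD:PARMS'
//PARMS DD *
userid=bsmyth/pinot
control=DD:CTL
log=DD:LOG
errors=100
/*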
SQL*Loader message output (including the aforementioned prompt and help message) is written to C standard output. Detailed information about load processing is discussed in the sections that follow.
SQL*Loader uses several types of files. In a single execution of Loader, multiple files of some types may be used. In some cases, if you don't supply a filespec for a file that is required, Loader derives a filespec for the file based on some other filespec that you did supply. What gets derived depends on the supplied filespec and on whether Loader is running with POSIX OFF or ON.
The filespec derivation schemes mean care must be taken when running SQL*Loader with filespecs that are counter to the POSIX expectation: HFS files with POSIX OFF or data sets with POSIX ON. Deriving a data set filespec from an HFS filespec, or an HFS filespec from a data set filespec, is not supported. In these situations, filespecs must be supplied explicitly rather than being derived.
Exactly one SQL*Loader input control file, mentioned earlier, is required. The control file can be a sequential data set, PDS member, instream (DD *) data set, or HFS file. A data set must have a record format of F, FB, V, or VB. Records in a data set must not have sequence numbers or other data that is not part of Loader's control file syntax. There is no default filespec for the control file; if it is not supplied on the command line or PARM, a prompt is issued and the filespec is read from C standard input. The filespec for the control file is subject to extension processing with the suffix ctl. As discussed in Chapter 2, this may affect filespecs that specify a data set name or HFS file name; it does not affect a //DD: filespec.
An execution of Loader also requires one output log file, to which load statistics, informational, and error messages are written (separate from what is written to C standard output). The log filespec can be supplied on the command line or PARM and can specify a sequential data set, PDS member, SYSOUT, HFS file, or a //DD: filespec that resolves to any of these. If none is supplied, the log filespec is derived from the control filespec as follows:
If the control filespec is a //DD: type, the log filespec is //DD:LOG.
If the control filespec is a data set name type, the log file is written as a data set named root.LOG, where root is the control file data set name with any "directory" and extension suffix (as defined for data set names) removed. If the control filespec included a PDS member name in parentheses, the same member name specification is included at the end of the log filespec. This data set name acquires a high-level qualifier (TSO PROFILE PREFIX or, elsewhere, the z/OS userid) when opened, so the ultimate data set name used is hlq.root.LOG.
If the control filespec is an HFS file name, the log file is written as an HFS file named root.log, where root is the control file name with any directory path and extension suffix (as defined for HFS file names) removed. This file is written in the current working directory.
In the latter two cases, it is the file name extension mechanism that supplies the .LOG or .log suffix. This means FNA processing can be applied when the log file is a data set.
If your control filespec is counter to the POSIX expectation, the log filespec must not be derived; it must be supplied explicitly.
An execution of SQL*Loader uses one or more input data files containing data to be loaded into Oracle database tables. One data filespec can be supplied on the command line or PARM, or in the first INFILE or INDDN clause in the Loader control file input. Additional data filespecs can be supplied using additional INFILE or INDDN clauses. If you do not supply any data filespec, a default is derived from the control filespec. The derivation is similar to that for log files, just described, but uses the suffix .DAT or .dat. When the control filespec is a //DD: type, the derived data filespec is //DD:DATA.
If your control filespec is counter to the POSIX expectation, the data filespec must not be derived; you must supply it explicitly on the command line or in the Loader control file.
On z/OS, a SQL*Loader data file can be a sequential data set, PDS member, instream (DD *) data set, VSAM entry-sequenced (ESDS), key-sequenced (KSDS), or relative record (RRDS) cluster, or an HFS file. Sequential or partitioned data sets must have a record format of F, FB, V, or VB. The special control file notation "INFILE *" is used to indicate that input data is in the Loader control file itself, following the control statements. In this situation, "*" is not interpreted as a filespec for the TSO terminal (as it normally would be in a TSO POSIX OFF environment). To specify a data file as the TSO terminal you must use //* or a DD statement allocated to the terminal.
When you supply a data filespec on the command line or PARM or using an INFILE clause, any of the available valid filespec types can be used, including //DD: or DD:. If you supply a data file with the INDDN clause, the value you supply must be a valid 1-character to 8-character DD name without a //DD: or DD: prefix. Only DD names can be supplied using INDDN.
The control file allows you to supply something called a "file-processing options string" in the INFILE or INDDN clause. On z/OS, this string is used only with HFS files and takes the same options used on UNIX platforms, described in the Oracle Database Utilities manual. (It specifies any of several ways to parcel an HFS file into records, which is not a native HFS concept.) With z/OS data sets the processing options string is ignored. Records are a native concept with data sets, and what the generic SQL*Loader documentation calls a physical record maps to what z/OS calls a logical record in a data set.
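For example, an INFILE clause for an HFS file might carry a file-processing options string such as "fix 80" to indicate fixed 80-byte records (the file name here is hypothetical; the available options are described in Oracle Database Utilities):
INFILE '/u/rjones/trans.dat' "fix 80"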
For each data file that is processed, SQL*Loader may write a bad file and a discard file. The former contains copies of input records that contained data errors or were rejected by the Oracle server due to data-related errors (such as a unique key constraint violation). The latter file contains records that did not meet selection criteria specified in the WHEN clause in the control file.
Both files are used conditionally and are not opened unless circumstances require it. The bad file is opened if one or more input records from a data file contain data errors. The discard file is opened if one or more input records fail to meet WHEN clause criteria and you supplied a discard filespec or specified the DISCARDMAX option.
You can specify the bad file using BADFILE or BADDN and you can specify the discard file using DISCARDFILE or DISCARDDN. When you use the "-FILE" keyword form, any of the supported filespec types can be supplied, including ones beginning with //DD: or DD:. If you use the "-DN" keyword form, the value you supply must be a valid 1-character to 8-character DD name with no //DD: or DD: prefix. Only DD names can be specified using BADDN and DISCARDDN.
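A hedged sketch of the DD-name forms follows; the table, field, DD, and data set names are hypothetical, and clause placement follows the INFILE/BADFILE pattern described in Oracle Database Utilities. The control file names the DDs, and the job step supplies them:
LOAD DATA
INDDN INDATA
BADDN BADRECS
DISCARDDN DISCREC
INTO TABLE ACCT_STAGE
FIELDS TERMINATED BY ','
(ACCT, BALANCE)

//INDATA  DD DISP=SHR,DSN=GKSC.TRANS.DATA
//BADRECS DD DISP=(,CATLG),DSN=BSMYTH.TRANS.BAD,UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE)
//DISCREC DD DISP=(,CATLG),DSN=BSMYTH.TRANS.DSC,UNIT=SYSDA,SPACE=(TRK,(5,5),RLSE)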
When you supply filespecs for these files and Loader encounters input records that cause them to be opened, the filespecs are subject to extension processing. The bad file uses the extension bad and the discard file uses dsc.
When you do not supply filespecs for these files and the load needs to use them, Loader derives the filespecs based on the filespec of the associated data file. When the data filespec is derived from that of the control file, as described previously, this derivation is based on the derived data filespec. Bad and discard filespec derivation works as follows:
For the first (or only) data file in a load, if the data filespec is a //DD: type, the derived bad filespec is //DD:BAD and the derived discard filespec is //DD:DISCARD. When a load involves multiple data files (with multiple INFILE or INDDN clauses), if the second or subsequent data filespec is a //DD: type, the derived bad filespec is //DD:BADn and the derived discard filespec is //DD:DISCARn where n is a 1-digit or 2-digit decimal relative data file number within the control file ("2" for the second data file specified, "3" for the third, "11" for the eleventh, and so on). At most 99 bad and discard files are supported by this scheme. All data files in a load (including non-DD types) are counted for purposes of determining the relative data file numbers.
If a data filespec is a data set name type, the bad filespec is derived as a data set named ctlhlq.root.BAD and the discard filespec is derived as ctlhlq.root.DSC, where ctlhlq is the "directory" (high-level qualifier) from the control filespec and root is the data file data set name with any "directory" and extension suffix removed. If the data filespec included a PDS member name in parentheses, the same member name specification is included at the end of the bad and/or discard filespec.
Use of the control filespec "directory" in these derivations mirrors what is in the control filespec: if the filespec contains a quoted data set name with an explicit high level qualifier, the same qualifier (and quotes) are used in the derived filespecs. If the control filespec is not quoted and subject to PROFILE PREFIX or userid prefixing, the same is true of the derived filespecs.
If a data filespec is an HFS file name, the bad file is derived as an HFS file named root.bad and the discard filespec is derived as root.dsc, where root is the data file name with any directory path and extension suffix (as defined for HFS file names) removed. This file is written in the directory associated with the control file, not the directory of the data file.
If either your control or data filespec is counter to the POSIX expectation, the bad and discard filespecs must not be derived; they must be supplied explicitly on the command line or in the Loader control file.
The following considerations pertain to loads where a data file is one of the supported VSAM cluster types (KSDS, ESDS or RRDS).
Input records are read in normal forward sequence for the cluster type: key sequence for KSDS, RBA sequence for ESDS, and record number sequence for RRDS. Empty slots in an RRDS cluster are not read and do not contribute to Loader SKIP or LOAD counts.
While you can specify an AIX PATH for a load, causing records to be read in alternate key sequence, this impacts load performance and is not recommended.
Loading from a VSAM object that is also open for update in another z/OS job or address space (including CICS TS and IMS TM address spaces) is not supported. Record Level Sharing (RLS) is not supported.
Loading from VSAM objects containing invalid Control Intervals (lacking valid CIDF/RDF fields) is not supported.
SQL*Loader provides no mechanism for specifying VSAM cluster passwords.
The bad and discard files, when written, contain subsets of the data from the associated data file. On z/OS, these files must be the same major type (data set or HFS file) as the associated data file. When using HFS files, the bad and discard files are written with the same line terminator and related attributes that you specified (or defaulted) in the file-processing options string for the data file.
When you load data from a data set, you can allow the DCB attributes of the bad and discard files to default or you can override them by coding them explicitly on a DD statement or by using an existing data set with established DCB attributes (which Loader will preserve). When allowed to default, the RECFM and LRECL will match those of the associated data file. If you override these attributes you must ensure that the output record length is sufficient. For example, if the input data file has fixed record format (F or FB) and LRECL=100 and you want to use RECFM=VB for the bad file, you must use an LRECL of at least 104 for the bad file (100 bytes for data and 4 bytes for the record descriptor word).
When an input data file is a VSAM data set, the bad and discard files will be nonVSAM sequential data sets. (Loader cannot create a VSAM bad or discard file.) As with nonVSAM data set input, the bad and discard file DCB attributes will default to values that are appropriate for the VSAM cluster and its maximum RECORDSIZE attribute. If you override these attributes you must ensure that the LRECL is adequate.
If you override bad or discard DCB attributes with a fixed (F or FB) format and a data file record to be written is shorter than the fixed record length, the record is padded with binary zeroes to the required length.
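Continuing the RECFM/LRECL example above, a hedged sketch of overriding the bad file's DCB attributes on a DD statement (the data set name is hypothetical). In the batch example later in this section the data file is the DATA DD, so the derived bad filespec is //DD:BAD and a BAD DD like this one would be picked up:
//BAD DD DISP=(,CATLG),DSN=BSMYTH.LOAD.BAD,UNIT=SYSDA,
//     SPACE=(TRK,(10,10),RLSE),DCB=(RECFM=VB,LRECL=104)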
Refer to the Oracle Database Utilities manual for information on conditions in which return codes are set. The keyword return codes listed there have the following values on z/OS:
Keyword Return Code | Description | Value |
---|---|---|
EX_SUCC | success | 0 |
EX_WARN | warning | 4 |
EX_FAIL | fatal error | 8 |
In the following examples, Example 1 is a z/OS UNIX shell execution of SQL*Loader, loading data from an HFS file into an Oracle instance on the same z/OS system. It assumes the environment variables ORACLE_HOME, PATH, and LIBPATH are already set appropriately. ORACLE_SID is set to enable cross-memory (XM) connection to the instance whose SID is TDB3.
$ export ORACLE_SID=TDB3
$ ls -l mydata.*
-rw-r--r--   1 RJONES   PGACCT   181203 Nov  5 15:52 mydata.dat
$ cat myload.ctl
LOAD DATA
INFILE 'mydata'
DISCARDMAX 10
INTO TABLE SITE_REVIEW
WHEN DATA3 != '500'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(DATA1, DATA2 CHAR "NVL(:C,'***')", DATA3)
$ sqlldr control=myload userid=bsmyth/pinot

SQL*Loader: Release 10.1.0 - Production on Wed Mar 17 10:26:06 2004
Copyright (c) 1982, 2004, Oracle. All rights reserved.

Commit point reached - logical record count 1
Commit point reached - logical record count 2
Commit point reached - logical record count 3
Commit point reached - logical record count 6
Commit point reached - logical record count 7
$ ls -l mydata.* *.log
-rw-r--r--   1 RJONES   PGACCT   181203 Nov  5 15:52 mydata.dat
-rw-r--r--   1 RJONES   PGACCT      594 Mar 17 10:26 mydata.dsc
-rw-r--r--   1 RJONES   PGACCT     6288 Mar 17 10:26 myload.log
$
Example 2 is a batch jobstep execution of SQL*Loader. It is similar to the previous example except for the environment. All filespecs except the control file have been allowed to default.
//LOAD EXEC PGM=SQLLDR,REGION=0M,
//     PARM='userid=bsmyth/pinot control=DD:ctl'
//STEPLIB  DD DISP=SHR,DSN=ORACLE.V10G.CMDLOAD
//ORA$LIB  DD DISP=SHR,DSN=ORACLE.V10G.MESG
//ORA@TDB3 DD DUMMY
//CTL DD *
LOAD DATA
DISCARDMAX 10
INTO TABLE SITE_REVIEW
WHEN DATA3 != '500'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS
(DATA1, DATA2 CHAR "NVL(:C,'***')", DATA3)
/*
//DATA DD DISP=SHR,DSN=GKSC.MASTER.SPR
//LOG DD SYSOUT=*
//DISCARD DD DISP=(,CATLG),DSN=BSMYTH.LOAD.DSC,UNIT=SYSDA,
//     SPACE=(TRK,(10,10),RLSE)
Export and Import are complementary Oracle utilities used chiefly to transport Oracle database data between Oracle databases and between systems. Export writes a sequential file that is a "transportable copy" of database tables, indexes, and other objects along with their metadata descriptions, such as table and column names and data types. Import reads the sequential file produced by export, defining the database objects in the target database and then loading the data (rows, index entries, or other contents).
The Datapump Export and Import utilities, described in the section "Datapump Export and Import", provide functions similar to Export and Import with additional features and capabilities.
For more information on Export and Import and Datapump Export and Import, refer to Oracle Database Utilities.
On z/OS, Export and Import support batch job, TSO, and shell execution environments. JCL procedures for batch execution (default names ORAEXP and ORAIMP) are supplied by Oracle and may be installed on your system with the same or different names. In TSO, both CALL and command processor (CP) invocation are supported. For the non-POSIX environments, the load module or program object names are EXP and IMP. In a z/OS UNIX shell, use the exp and imp (lower case) commands to invoke these utilities.
Parameters for Export and Import can be supplied on the command line or PARM field or in a file. If parameters are supplied in neither place, these utilities prompt for required inputs; prompts are written to C standard output and the responses are read from standard input.
The parser used by Export and Import for parameters has its own syntax conventions (described in the generic documentation) and is sensitive to apostrophes (single quotes). If you use apostrophes in a filespec parameter, you must "escape" them with a preceding backslash in order to keep the parser from trying to interpret them as syntax elements. For example, to specify the FILE= parameter with the data set name filespec //'JSMITH.ORADB1.DMP' in a batch Export job, use the following code:
//EXPORT EXEC PGM=EXP,
//     PARM='scott/tiger file=//\''JSMITH.ORADB1.DMP\'''
Notice that the filespec apostrophes in the example are doubled, which is required to get an apostrophe through the JCL PARM mechanism, as well as being escaped with a preceding backslash for Export's parser.
If command line parameters for Export and Import exceed the 100-character maximum of the JCL PARM field or TSO CALL command, you can use the "++" parameter file mechanism described in the section "Parameters in Batch (JCL)" to supply parameters in a data set. Export and Import also have a generic parameter file feature that is similar to the "++" mechanism but is available on all platforms (and also works in z/OS UNIX shell, which is not true of "++"). If you use the generic parameter file, the filespec supplied for the PARFILE parameter is not subject to name extension before opening. This file can be a sequential or instream (DD *) data set, a PDS member, or an HFS file. The file must not contain record sequence numbers or other data not part of Export or Import's command line parameters.
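A minimal batch sketch using the generic parameter file through an instream DD (the userid and DD names are hypothetical); the remaining DDs would be coded as in the Export example later in this section:
//EXPORT EXEC PGM=EXP,PARM='parfile=DD:PARMS'
//PARMS DD *
userid=hrmdba/satchel
file=DD:EXPOUT
owner=hrmdba
/*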
For certain operations, Export and Import require a special userid/password parameter that contains spaces, such as "/ AS SYSDBA". Refer to the section "Parameters Containing Spaces" for information on how to specify such values.
The file written by Export and read by Import is called an export file or dump file. Filespecs that you supply for this purpose are subject to extension processing with the suffix dmp. If you don't supply a filespec (you omit the FILE parameter and/or supply an empty line at the file prompt) the default expdat.dmp is used in both POSIX OFF and POSIX ON environments. In a POSIX OFF environment this is processed as a data set named EXPDAT.DMP (with prefixing implied). In a shell, it is treated as an HFS file named expdat.dmp in the current working directory.
When the export file is a data set, DCB attributes must be established when the Export utility opens the file. If you don't specify DCB attributes on a DD statement or TSO ALLOC command, and you do not use an existing data set with established attributes (which Export will preserve), LE default attributes are used. Unlike most Oracle tool and utility files, the export file is opened in "binary" mode and is subject to different default attributes than those described in the section "Data Set DCB Attributes" . For both disk and tape devices, the default attributes for output binary files are RECFM=FB and LRECL=80.
If you override the DCB attribute defaults by supplying them on a DD or ALLOC command or by using an existing data set, you can use either fixed- or variable-length record formats and any LRECL allowed by LE. However, if you are creating an export file that will be processed by Import running on a non-z/OS system, you may want to avoid variable record formats (V or VB) depending on how the data will be moved to the target system. Either the transporting software (for example, FTP) or the Import utility on the target system may have difficulty with the imbedded record and block descriptors used by V and VB formats. Refer to the section "Cross-System Export/Import" for additional details.
The RECORDLENGTH parameter of Export and Import does not affect or relate to the DCB attributes of the export file.
One of the strengths of Export and Import is that they can be used to move Oracle data between dissimilar platforms without an Oracle Net connection between the two. This may be faster, and in some cases more secure, than running one utility or the other over an Oracle Net connection.
When Import reads data that was created by Export running on a different platform, the data must be unmodified from what was written. (Translation of data between formats and character sets is handled automatically by Import.) If you use something like File Transfer Protocol (FTP) software to move the data to the target system, specify a "binary mode" or similar processing option to prevent attempts to translate character data. This requirement exists when moving non-z/OS Export data to z/OS and when moving z/OS Export data to a non-z/OS system. If you fail to do this and the data is translated, Import typically issues the following message:
IMP-00010: not a valid export file, header failed verification
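As a hedged illustration only (the host name and file names are hypothetical), a z/OS FTP client transfer of an export data set to a UNIX target would switch to binary mode before sending the file:
ftp target.example.com
binary
put 'BMANRY.EXPTEST2.DMP' expdat.dmp
quit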
Refer to Oracle Database Utilities for information on conditions in which return codes are set. The keyword return codes listed there have the following values on z/OS:
Keyword Return Code | Description | Value |
---|---|---|
EX_SUCC | success | 0 |
EX_WARN | warning | 4 |
EX_FAIL | fatal error | 8 |
Cross-platform Export/Import of a partitioned table between EBCDIC (z/OS) and ASCII systems will fail if the partitioning is based on character ranges that are incompatible between EBCDIC and ASCII. If your partition ranges do not retain the same sequence between ASCII and EBCDIC, you must manually create the table with different partition criteria (or with no partitioning) in the target database and then run Import with the IGNORE=Y option so the error on Import's table create is ignored.
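For example (the userid, file, and table names are hypothetical), after manually creating the table in the target database you might run Import as follows, so that the failure of Import's own CREATE TABLE is ignored and the rows are loaded into the pre-created table:
imp hrmdba/satchel file=expdat.dmp tables=sales_hist ignore=y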
The following example shows a batch jobstep execution of Export. This is a simple export of all objects owned by a userid:
//EXP EXEC PGM=EXP,
//     PARM='hrmdba/satchel file=dd:expout'
//STEPLIB DD DISP=SHR,DSN=ORACLE.V10G.CMDLOAD
//ORA$LIB DD DISP=SHR,DSN=ORACLE.V10G.MESG
//ORA@RM5 DD DUMMY
//EXPOUT DD DISP=(,CATLG),DSN=BMANRY.EXPTEST2.DMP,
//     UNIT=SYSDA,SPACE=(TRK,(10,10),RLSE),
//     DCB=(RECFM=FB,LRECL=120)
Datapump Export and Import are functionally similar to Export and Import discussed previously, but all of the I/O processing for dump files is done in the Oracle database server rather than in the client utility session.
Datapump Export and Import provide capabilities that Export and Import do not, in particular an easy way to spread processing over multiple tasks to improve parallelism. For more information on Datapump Export and Import, refer to Oracle Database Utilities.
Besides using the Datapump client utilities, you can invoke Datapump Export and Import programmatically using the DBMS_DATAPUMP PL/SQL package. For more information on the DBMS_DATAPUMP package, refer to the PL/SQL Packages and Types Reference.
On z/OS, whether invoked using a utility or the DBMS_DATAPUMP package, Datapump Export and Import can only process files in the HFS. This includes the dump files written by Datapump Export or read by Datapump Import as well as the text log file written by both components.
Security for Datapump file operations is provided using database directory objects, which must be created and granted to users by a database administrator before Datapump Export and Import can be used. If you are unable or choose not to use the HFS file system for exporting or importing or if your database administrator has not created directory objects, you can use the Export and Import utilities described in the previous section.
On z/OS, Datapump Export and Import support batch job, TSO, and shell execution environments. JCL procedures for batch execution (default names ORAEXD and ORAIMD) are supplied by Oracle and may be installed on your system with the same or different names. In TSO, both CALL and command processor (CP) invocation are supported. For the non-POSIX environments, the load module or program object names are EXPDP and IMPDP. In a z/OS UNIX shell, use the expdp and impdp (lower case) commands to invoke these utilities.
Parameters for Datapump Export and Import can be supplied on the command line or PARM field or in a file. If parameters are supplied in neither place, these utilities may prompt for certain inputs. Prompts are written to C standard output and the responses are read from standard input.
If command line parameters for Datapump Export and Import exceed the 100-character maximum of the JCL PARM field or TSO CALL command, you can use the "++" parameter file mechanism described in the section "Parameters in Batch (JCL)" to supply parameters in a data set. Datapump Export and Import also have a generic parameter file feature that is similar to the "++" mechanism but is available on all platforms (and also works in a shell, which is not true of "++"). If you use the generic parameter file, the filespec supplied for the PARFILE parameter is not subject to name extension before opening. This file can be a sequential or instream (DD *) data set, a PDS member, or an HFS file. The file must not contain record sequence numbers or other data not part of Datapump Export or Import's command line parameters.
For certain operations, Datapump Export and Import require a special userid/password parameter that contains spaces, such as "/ AS SYSDBA". Refer to the section "Parameters Containing Spaces" for information on how to specify such values.
The file that contains the database data written by Datapump Export and read by Datapump Import is called an export file or dump file. While conceptually similar to the export files of the Export and Import utilities described previously, the two types are not interchangeable. For example, you cannot read a plain Export file with Datapump Import and you cannot read a Datapump Export file with plain Import.
Besides writing or reading an export file, the execution of Datapump Export or Import normally writes a log file. This is just a text log reporting processing details.
On z/OS, the export and log files used by Datapump Export and Import must be HFS files; they cannot be z/OS data sets.
Because Datapump export and log file processing is done in the Oracle database server rather than the client job or session, access to both types of file is controlled with database directory objects, which are created by a database administrator. Although there are forms of directory object for both HFS files and data sets, as discussed in Chapter 5, "Oracle Server Considerations on z/OS", only the HFS form of directory object can be used in a Datapump operation.
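A minimal sketch of what the database administrator might issue, using generic Oracle SQL (the directory path, object name, and grantee are hypothetical; Chapter 5 describes the z/OS-specific forms of directory objects):
CREATE DIRECTORY acctdir AS '/u/oracle/dpump';
GRANT READ, WRITE ON DIRECTORY acctdir TO rachelq;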
The user-supplied names for the export and log files are required to begin with "./" (dot-slash) so that they are seen as unambiguous HFS filespecs. This requirement is unique to z/OS Oracle database servers and is not accepted by Oracle database servers on other platforms. (Oracle normally prohibits any kind of path prefix on user-supplied filenames that are processed in the database server.)
When supplied by the user, the export and log file names are subject to extension processing with the suffixes dmp and log, respectively. Because these are always treated as HFS files, extension processing is under HFS rules, as described in Chapter 2, "Oracle Software Interaction with z/OS and Language Environment", and FNA processing is not available. If omitted, the default name for the export file is ./expdat.dmp and for the log file it is ./export.log (Datapump Export) or ./import.log (Datapump Import). These files are always accessed in an HFS directory identified by a database directory object. The directory object name is supplied with the file name (separated by a colon) or separately, in the DIRECTORY command line parameter.
The following example shows Datapump Export being invoked in TSO to dump tables ACCT1_MST and ACCT1_DET for user RACHELQ:
READY
expdp rachelq/qow00a tables=(acct1_mst,acct1_det) directory=acctdir -
dumpfile=./acctexp
Export: Release 10.1.0.2.21 - Production on Monday,…
Both the export file and the log are written in the HFS directory associated with the ACCTDIR directory object, to which RACHELQ was previously granted access by a database administrator. The export file has root name acctexp. After extension processing the name is acctexp.dmp. The log file defaults to ./export.log.
When you use Datapump, much of the server processing is performed by auxiliary processes (z/OS subtasks) in the server address space. The number of processes involved depends on the degree of parallelism requested or permitted by the database administrator. Processing by auxiliary subtasks runs in the server's WLM goal and is accounted to the server address space, not to the client.
Datapump Export and Import utilities have an interactive mode that is initiated by interrupting the utility after server processing begins. In TSO, this is accomplished with an ATTN or PA1 signal. In a shell, use Ctl-C or a similar mechanism to interrupt the utility. Interactive mode is not supported in z/OS batch environments.
Datapump Export and Import can end with normal success, success with one or more warning conditions, or be terminated by an error. The return codes associated with these conditions are as follows:
Return Code | Value |
---|---|
Success | 0 |
Warning | 4 |
Fatal Error | 8 |
In the event of a warning or error return code, check the Datapump log file for messages describing the warning or error condition.
TKPROF is a utility used in tuning Oracle SQL statements. It reads trace files produced by the Oracle database server and formats the trace data into a report on the execution path and processing statistics associated with SQL statements from an Oracle database session. Optionally, it can connect to a target server and issue EXPLAIN PLAN requests for the subject SQL statements, and it can write a separate file containing SQL statements used by an application. For more information on TKPROF, refer to the Oracle Database Performance Tuning Guide.
TKPROF on z/OS supports batch job, TSO, and shell execution environments. No batch JCL procedure is provided, but one can be created easily, if necessary. In TSO, both CALL and command processor (CP) invocation are supported. For non-POSIX environments, the program object name is TKPROF. In a z/OS UNIX shell, use the tkprof command (lower case) to invoke this tool.
TKPROF requires two command line parameters, the first identifying the input Oracle trace file to be read and the second supplying a filespec for the output report. If no parameters are supplied, TKPROF displays a brief help message and ends normally. If the input trace file is specified but not the output file, TKPROF prompts (to standard output) for the output filespec and reads it from standard input. Additional command line parameters are permitted for various options.
Input to TKPROF is a flat file written by the Oracle database server when SQL tracing is active in a client session. For information on methods for activating and deactivating the trace with an ALTER SESSION statement or the DBMS_SESSION.SET_SQL_TRACE procedure, refer to the Oracle Database Performance Tuning Guide.
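For convenience, a minimal sketch of turning tracing on and off at the session level with ALTER SESSION (the same effect can be obtained with DBMS_SESSION.SET_SQL_TRACE); see the note later in this section regarding TIMED_STATISTICS:
ALTER SESSION SET TIMED_STATISTICS = TRUE;
ALTER SESSION SET SQL_TRACE = TRUE;
-- run the application SQL to be traced
ALTER SESSION SET SQL_TRACE = FALSE;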
On z/OS, Oracle database server trace files are written in one of two forms: as SYSOUT (JES) spool files or as sequential (DSORG=PS) disk data sets. TKPROF cannot read JES spool files directly, so if your installation is using SYSOUT for trace files, you will need to identify the trace file containing the SQL trace and copy it to a sequential disk data set in order to use TKPROF. Copying can be done with a tool such as the PRINT command of the IBM Spool Display and Search Facility (SDSF) or a comparable mechanism. Whether the trace files for your database instance are written to SYSOUT or directly to a data set, you may need to work with your database administrator to obtain access to the trace data.
It is a limitation of IBM access methods that you cannot run TKPROF (or any other program) to directly access an Oracle database server trace that is being written as a disk data set while the database server still has the data set open for output. To ensure that a trace file is closed, the database server session associated with the trace must end (disconnect from the database server). This is one of the advantages in using spool files for database server tracing; tools such as IBM SDSF can read and copy a spool file while it is still open to the database server.
Note:
The Oracle Database Performance Tuning Guide discusses several Oracle database server initialization parameters that are relevant to SQL tracing and TKPROF. Two of these, MAX_DUMP_FILE_SIZE and USER_DUMP_DEST, are not meaningful on z/OS and can be ignored. The third, TIMED_STATISTICS, is meaningful and may need to be set to get useful performance data. TIMED_STATISTICS is a dynamic parameter and can be set at the session level.
Although the input trace file is normally going to be a sequential data set on z/OS, TKPROF will also read a PDS member or an HFS file as its input. File name extension processing with the suffix trc is done on the trace filespec before it is opened. If your trace data set name does not end in .TRC, include two slashes in front of the name or use a DD: filespec to avoid extension processing.
The report file written by TKPROF can be a sequential disk data set, a PDS member, a SYSOUT (spool) data set, or a file in the HFS. The filespec you supply for the output file is subject to extension processing with the suffix prf. By default, records in this report are at most 80 data bytes long; if you use the WIDTH command line option to request report lines that will exceed the LE default LRECL (1028, or 1024 data bytes), you must override the file's LRECL using DD statement parameters or FNA.
Optionally, using the RECORD= parameter, TKPROF will write a file containing all user SQL statements from the session. Like the report, this file can be a sequential disk data set or PDS member, a SYSOUT data set, or a file in the HFS. Filespecs supplied using the RECORD parameter are subject to extension processing with the suffix sql.
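A hedged sketch of a z/OS UNIX shell invocation that uses these options (the trace file, output names, and userid are hypothetical); after extension processing the report is written as sess3.prf and the recorded SQL as sess3.sql:
tkprof ora_tdb3_sess3.trc sess3 record=sess3 width=132 explain=bsmyth/pinot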
TKPROF either processes successfully or encounters errors and fails; there are no warning conditions. Return codes from TKPROF are as follows:
Return Code | Value |
---|---|
Success | 0 |
Failure | 8 |
The following example shows a batch job step to run TKPROF to analyze a trace file in a z/OS data set named JSMITH.ORAPDB1.SESS3.TRACE:
//TKPROF EXEC PGM=TKPROF,REGION=8192K,
//     PARM='DD:TRACE S:C'
//STEPLIB DD DISP=SHR,DSN=ORACLE.V10.CMDLOAD
//ORA$LIB DD DISP=SHR,DSN=ORACLE.V10.MESG
//TRACE   DD DISP=SHR,DSN=JSMITH.ORAPDB1.SESS3.TRACE

The report is written directly to SYSOUT class C using a SYSOUT filespec.