The Data Pump API

The Data Pump API, DBMS_DATAPUMP, provides a high-speed mechanism to move all or part of the data and metadata for a site from one database to another. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.
You should read this chapter if you want more details about how the Data Pump API works. The following topics are covered:
How Does the Client Interface to the Data Pump API Work?
What Are the Basic Steps in Using the Data Pump API?
Examples of Using the Data Pump API
See Also:
Oracle Database PL/SQL Packages and Types Reference for a detailed description of the procedures available in the DBMS_DATAPUMP package
Chapter 1, "Overview of Oracle Data Pump" for additional explanation of Data Pump concepts
How Does the Client Interface to the Data Pump API Work?

The main structure used in the client interface is a job handle, which appears to the caller as an integer. Handles are created using the DBMS_DATAPUMP.OPEN or DBMS_DATAPUMP.ATTACH function. Other sessions can attach to a job to monitor and control its progress; this allows a DBA to start a job before leaving work and then watch its progress from home. Handles are session specific: the same job can have different handles in different sessions.
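For example, in the following minimal sketch (the job name EXAMPLE_MONITOR is an illustrative assumption), one session creates a job with OPEN while a second session attaches to the same job by name; each call returns a handle that is valid only in its own session:

-- Session 1: create a schema-mode export job. OPEN returns this
-- session's handle for the job.
DECLARE
  h1 NUMBER;
BEGIN
  h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'EXAMPLE_MONITOR');
  -- Define parameters and start the job here, using h1.
END;
/

-- Session 2: attach to the same job by name and owner. ATTACH returns
-- a separate handle for this session.
DECLARE
  h2 NUMBER;
BEGIN
  h2 := DBMS_DATAPUMP.ATTACH('EXAMPLE_MONITOR',USER);
  -- Monitor or control the job here, using h2.
END;
/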
Job States

There is a state associated with each phase of a job, as follows:

Undefined - before a handle is created
Defining - when the handle is first created
Executing - when the DBMS_DATAPUMP.START_JOB procedure is executed
Completing - when the job has finished its work and the Data Pump processes are ending
Completed - when the job is completed
Stop Pending - when an orderly job shutdown has been requested
Stopping - when the job is stopping
Stopped - when DBMS_DATAPUMP.STOP_JOB is performed against an executing job
Idling - the period between the time that a DBMS_DATAPUMP.ATTACH is executed to attach to a stopped job and the time that a DBMS_DATAPUMP.START_JOB is executed to restart that job
Not Running - when a master table exists for a job that is not running (has no Data Pump processes associated with it)
Performing DBMS_DATAPUMP.START_JOB on a job in an Idling state will return it to an Executing state.
If all users execute DBMS_DATAPUMP.DETACH to detach from a job in the Defining state, the job will be totally removed from the database.
When a job abnormally terminates or when an instance running the job is shut down, the job is placed in the Not Running state if it was previously executing or idling. It can then be restarted by the user.
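For example, a minimal sketch of such a restart (assuming a stopped job named EXAMPLE_MONITOR owned by the current user) attaches to the job, which places it in the Idling state, and then restarts it:

DECLARE
  h1 NUMBER;
BEGIN
  -- Attaching to the stopped job places it in the Idling state.
  h1 := DBMS_DATAPUMP.ATTACH('EXAMPLE_MONITOR',USER);
  -- START_JOB returns the job to the Executing state.
  DBMS_DATAPUMP.START_JOB(h1);
  -- Detaching while the job is executing does not halt it.
  DBMS_DATAPUMP.DETACH(h1);
END;
/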
The master control process is active in the Defining, Idling, Executing, Stopping, Stop Pending, and Completing states. It is also active briefly in the Stopped and Completed states. The master table for the job exists in all states except the Undefined state. Worker processes are only active in the Executing and Stop Pending states, and briefly in the Defining state for import jobs.
Detaching while a job is in the Executing state will not halt the job. You can be explicitly or implicitly detached from a job. An explicit detachment occurs when you execute the DBMS_DATAPUMP.DETACH procedure. An implicit detachment occurs when a session is run down, an instance is started, or the STOP_JOB procedure is called.
The Not Running state indicates that a master table exists outside the context of an executing job. This will occur if a master table has been explicitly retained upon job completion, if a job has been stopped (probably to be restarted later), or if a job has abnormally terminated. This state can also be seen momentarily during job state transitions at the beginning of a job, and at the end of a job before the master table is dropped. Note that the Not Running state is shown only in the DBA_DATAPUMP_JOBS view and the USER_DATAPUMP_JOBS view. It is never returned by the GET_STATUS procedure.
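For example, a DBA can list such jobs with a simple query (the columns shown are a subset of the view):

SQL> SELECT owner_name, job_name, operation, state FROM DBA_DATAPUMP_JOBS;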
Table 5-1 shows the valid job states in which DBMS_DATAPUMP procedures can be executed. The states listed are valid for both export and import jobs, unless otherwise noted.
Table 5-1 Valid Job States in Which DBMS_DATAPUMP Procedures Can Be Executed
Procedure Name | Valid States | Description |
---|---|---|
ADD_FILE | Defining (valid for both export and import jobs); Executing and Idling (valid only for specifying dump files for export jobs) | Specifies a file for the dump file set, the log file, or the SQL_FILE output. |
ATTACH | Defining, Executing, Idling, Stopped, Completed, Completing, Not Running | Allows a user session to monitor a job or to restart a stopped job. The attach will fail if the dump file set or master table for the job have been deleted or altered in any way. |
DATA_FILTER | Defining | Restricts data processed by a job. |
DETACH | All | Disconnects a user session from a job. |
GET_DUMPFILE_INFO | All | Retrieves dump file header information. |
GET_STATUS | All, except Completed, Not Running, Stopped, and Undefined | Obtains the status of a job. |
LOG_ENTRY | Defining, Executing, Idling, Stop Pending, Completing | Adds an entry to the log file. |
METADATA_FILTER | Defining | Restricts metadata processed by a job. |
METADATA_REMAP | Defining | Remaps metadata processed by a job. |
METADATA_TRANSFORM | Defining | Alters metadata processed by a job. |
OPEN | Undefined | Creates a new job. |
SET_PARALLEL | Defining, Executing, Idling | Specifies parallelism for a job. |
SET_PARAMETER | Defining | Alters default processing by a job. |
START_JOB | Defining, Idling | Begins or resumes execution of a job. |
STOP_JOB | Defining, Executing, Idling, Stop Pending | Initiates shutdown of a job. |
WAIT_FOR_JOB | All, except Completed, Not Running, Stopped, and Undefined | Waits for a job to end. |
What Are the Basic Steps in Using the Data Pump API?

To use the Data Pump API, you call the procedures provided in the DBMS_DATAPUMP package. The following steps list the basic activities involved in using the Data Pump API, in the order in which they would generally be performed:
1. Execute the DBMS_DATAPUMP.OPEN procedure to create a Data Pump job and its infrastructure.
2. Define any parameters for the job.
3. Start the job.
4. Optionally, monitor the job until it completes.
5. Optionally, detach from the job and reattach at a later time.
6. Optionally, stop the job.
7. Optionally, restart the job.
These concepts are illustrated in the examples provided in the next section.
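The following minimal sketch maps these steps to specific calls; the job name EXAMPLE_SKETCH, the dump file name sketch.dmp, and the directory object DMPDIR are illustrative assumptions, and WAIT_FOR_JOB is used to block until the job ends rather than polling GET_STATUS as the examples below do:

DECLARE
  h1        NUMBER;         -- Data Pump job handle
  job_state VARCHAR2(30);   -- Final state returned by WAIT_FOR_JOB
BEGIN
  -- Step 1: create the job and its infrastructure.
  h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'EXAMPLE_SKETCH');
  -- Step 2: define parameters (a dump file and a schema filter).
  DBMS_DATAPUMP.ADD_FILE(h1,'sketch.dmp','DMPDIR');
  DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''HR'')');
  -- Step 3: start the job.
  DBMS_DATAPUMP.START_JOB(h1);
  -- Step 4: wait until the job completes or stops.
  DBMS_DATAPUMP.WAIT_FOR_JOB(h1,job_state);
  DBMS_OUTPUT.PUT_LINE('Job ended with state: ' || job_state);
END;
/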
See Also:
Oracle Database PL/SQL Packages and Types Reference for a complete description of the DBMS_DATAPUMP package

Examples of Using the Data Pump API

This section provides the following examples to help you get started using the Data Pump API:
Example 5-1, "Performing a Simple Schema Export"
Example 5-2, "Importing a Dump File and Remapping All Schema Objects"
Example 5-3, "Using Exception Handling During a Simple Schema Export"
The examples are in the form of PL/SQL scripts. If you choose to copy these scripts and run them, you must first do the following, using SQL*Plus:
1. Create a directory object and grant READ and WRITE access to it. For example, to create a directory object named dmpdir to which you have access, do the following. Replace user with your username.

SQL> CREATE DIRECTORY dmpdir AS '/rdbms/work';
SQL> GRANT READ, WRITE ON DIRECTORY dmpdir TO user;
2. Ensure that you have the EXP_FULL_DATABASE and IMP_FULL_DATABASE roles. To see a list of all roles assigned to you within your security domain, do the following:
SQL> SELECT * FROM SESSION_ROLES;
If you do not have the necessary roles assigned to you, contact your system administrator for help.
3. Turn on server output if it is not already on. This is done as follows:
SQL> SET SERVEROUTPUT ON
If you do not do this, you will not see any output to your screen. You must do this in the same session in which you invoke the example. Also, if you exit SQL*Plus, this setting is lost and must be reset when you begin a new session.
Example 5-1 Performing a Simple Schema Export
The PL/SQL script provided in this example shows how to use the Data Pump API to perform a simple schema export of the HR schema. It shows how to create a job, start it, and monitor it. Additional information about the example is contained in the comments within the script. To keep the example simple, exceptions from any of the API calls will not be trapped. However, in a production environment, Oracle recommends that you define exception handlers and call GET_STATUS to retrieve more detailed error information when a failure occurs.
CONNECT SYSTEM/password

DECLARE
  ind NUMBER;              -- Loop index
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN

  -- Create a (user-named) Data Pump job to do a schema export.
  h1 := DBMS_DATAPUMP.OPEN('EXPORT','SCHEMA',NULL,'EXAMPLE1','LATEST');

  -- Specify a single dump file for the job (using the handle just returned)
  -- and a directory object, which must already be defined and accessible
  -- to the user running this procedure.
  DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DMPDIR');

  -- A metadata filter is used to specify the schema that will be exported.
  DBMS_DATAPUMP.METADATA_FILTER(h1,'SCHEMA_EXPR','IN (''HR'')');

  -- Start the job. An exception will be generated if something is not set up
  -- properly.
  DBMS_DATAPUMP.START_JOB(h1);

  -- The export job should now be running. In the following loop, the job
  -- is monitored until it completes. In the meantime, progress information is
  -- displayed.
  percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
                             dbms_datapump.ku$_status_job_error +
                             dbms_datapump.ku$_status_job_status +
                             dbms_datapump.ku$_status_wip,-1,job_state,sts);
    js := sts.job_status;

    -- If the percentage done changed, display the new value.
    if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

    -- If any work-in-progress (WIP) or error messages were received for the
    -- job, display them.
    if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

  -- Indicate that the job finished and detach from it.
  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);
END;
/
Example 5-2 Importing a Dump File and Remapping All Schema Objects
This example imports the dump file created in Example 5-1 (an export of the hr schema). All schema objects are remapped from the hr schema to the blake schema. To keep the example simple, exceptions from any of the API calls will not be trapped. However, in a production environment, Oracle recommends that you define exception handlers and call GET_STATUS to retrieve more detailed error information when a failure occurs.
CONNECT SYSTEM/password

DECLARE
  ind NUMBER;              -- Loop index
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN

  -- Create a (user-named) Data Pump job to do a "full" import (everything
  -- in the dump file without filtering).
  h1 := DBMS_DATAPUMP.OPEN('IMPORT','FULL',NULL,'EXAMPLE2');

  -- Specify the single dump file for the job (using the handle just returned)
  -- and directory object, which must already be defined and accessible
  -- to the user running this procedure. This is the dump file created by
  -- the export operation in the first example.
  DBMS_DATAPUMP.ADD_FILE(h1,'example1.dmp','DMPDIR');

  -- A metadata remap will map all schema objects from HR to BLAKE.
  DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA','HR','BLAKE');

  -- If a table already exists in the destination schema, skip it (leave
  -- the preexisting table alone). This is the default, but it does not hurt
  -- to specify it explicitly.
  DBMS_DATAPUMP.SET_PARAMETER(h1,'TABLE_EXISTS_ACTION','SKIP');

  -- Start the job. An exception is returned if something is not set up
  -- properly.
  DBMS_DATAPUMP.START_JOB(h1);

  -- The import job should now be running. In the following loop, the job is
  -- monitored until it completes. In the meantime, progress information is
  -- displayed. Note: this is identical to the export example.
  percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
                             dbms_datapump.ku$_status_job_error +
                             dbms_datapump.ku$_status_job_status +
                             dbms_datapump.ku$_status_wip,-1,job_state,sts);
    js := sts.job_status;

    -- If the percentage done changed, display the new value.
    if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

    -- If any work-in-progress (WIP) or error messages were received for the
    -- job, display them.
    if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

  -- Indicate that the job finished and gracefully detach from it.
  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);
END;
/
Example 5-3 Using Exception Handling During a Simple Schema Export
This example shows a simple schema export using the Data Pump API. It extends Example 5-1 to show how to use exception handling to catch the SUCCESS_WITH_INFO case, and how to use the GET_STATUS procedure to retrieve additional information about errors. If you want to get status up to the current point, but a handle has not yet been obtained, you can use NULL for DBMS_DATAPUMP.GET_STATUS.
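As a minimal sketch of that variant (the variable names are illustrative), the call looks like this:

DECLARE
  job_state VARCHAR2(30);
  sts       ku$_Status;
BEGIN
  -- Passing NULL as the handle retrieves status for the current session
  -- without being attached to a job.
  DBMS_DATAPUMP.GET_STATUS(NULL,DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR,0,
                           job_state,sts);
END;
/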
CONNECT SYSTEM/password

DECLARE
  ind NUMBER;              -- Loop index
  spos NUMBER;             -- String starting position
  slen NUMBER;             -- String length for output
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
BEGIN

  -- Create a (user-named) Data Pump job to do a schema export.
  h1 := dbms_datapump.open('EXPORT','SCHEMA',NULL,'EXAMPLE3','LATEST');

  -- Specify a single dump file for the job (using the handle just returned)
  -- and a directory object, which must already be defined and accessible
  -- to the user running this procedure.
  dbms_datapump.add_file(h1,'example3.dmp','DMPDIR');

  -- A metadata filter is used to specify the schema that will be exported.
  dbms_datapump.metadata_filter(h1,'SCHEMA_EXPR','IN (''HR'')');

  -- Start the job. An exception will be returned if something is not set up
  -- properly. One possible exception that will be handled differently is the
  -- success_with_info exception. success_with_info means the job started
  -- successfully, but more information is available through get_status about
  -- conditions around the start_job that the user might want to be aware of.
  begin
    dbms_datapump.start_job(h1);
    dbms_output.put_line('Data Pump job started successfully');
  exception
    when others then
      if sqlcode = dbms_datapump.success_with_info_num
      then
        dbms_output.put_line('Data Pump job started with info available:');
        dbms_datapump.get_status(h1,
                                 dbms_datapump.ku$_status_job_error,0,
                                 job_state,sts);
        if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
        then
          le := sts.error;
          if le is not null
          then
            ind := le.FIRST;
            while ind is not null loop
              dbms_output.put_line(le(ind).LogText);
              ind := le.NEXT(ind);
            end loop;
          end if;
        end if;
      else
        raise;
      end if;
  end;

  -- The export job should now be running. In the following loop, we will
  -- monitor the job until it completes. In the meantime, progress information
  -- is displayed.
  percent_done := 0;
  job_state := 'UNDEFINED';
  while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
    dbms_datapump.get_status(h1,
                             dbms_datapump.ku$_status_job_error +
                             dbms_datapump.ku$_status_job_status +
                             dbms_datapump.ku$_status_wip,-1,job_state,sts);
    js := sts.job_status;

    -- If the percentage done changed, display the new value.
    if js.percent_done != percent_done
    then
      dbms_output.put_line('*** Job percent done = ' ||
                           to_char(js.percent_done));
      percent_done := js.percent_done;
    end if;

    -- Display any work-in-progress (WIP) or error messages that were received
    -- for the job.
    if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
    then
      le := sts.wip;
    else
      if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
      then
        le := sts.error;
      else
        le := null;
      end if;
    end if;
    if le is not null
    then
      ind := le.FIRST;
      while ind is not null loop
        dbms_output.put_line(le(ind).LogText);
        ind := le.NEXT(ind);
      end loop;
    end if;
  end loop;

  -- Indicate that the job finished and detach from it.
  dbms_output.put_line('Job has completed');
  dbms_output.put_line('Final job state = ' || job_state);
  dbms_datapump.detach(h1);

-- Any exceptions that propagated to this point will be captured. The
-- details will be retrieved from get_status and displayed.
exception
  when others then
    dbms_output.put_line('Exception in Data Pump job');
    dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
                             job_state,sts);
    if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
    then
      le := sts.error;
      if le is not null
      then
        ind := le.FIRST;
        while ind is not null loop

          -- Display each line of the error messages, breaking long lines
          -- into chunks of at most 255 characters.
          spos := 1;
          slen := length(le(ind).LogText);
          if slen > 255
          then
            slen := 255;
          end if;
          while slen > 0 loop
            dbms_output.put_line(substr(le(ind).LogText,spos,slen));
            spos := spos + 255;
            slen := length(le(ind).LogText) + 1 - spos;
          end loop;
          ind := le.NEXT(ind);
        end loop;
      end if;
    end if;
END;
/