1] How to copy a Schema without any Dump ( Network_Link )
A] The goal was to move Data quickly between databases
My team was asked several times per week to refresh non-production environments at the schema level.
I find KSH+SQL scripts easy to write for this kind of task: initialization, execution, then mailing the log.
But since these KSH+SQL scripts might someday be rewritten in pure PL/SQL, we decided that programming against the Data Pump API was the way to make this process faster.
It is a bit more complicated than the 'impdp' command line, but we all like a challenge...
Below is the SQL part.
This script was run manually more than a hundred times last year (2010) across our 150 applications. [Update May 29, 2015: run 1,981 times since early 2010!]
- between Oracle 10.2.0.3+ databases
- in Data Pump Network_Link mode ( no dump file is written )
- it can copy a schema as a whole, or without the data ( equivalent to the old "rows=n" )
- and it can change the schema name, using the Dbms_DataPump.METADATA_REMAP 'REMAP_SCHEMA' argument
- see the restrictions that apply to Network_Link at the end ( section E )
B] The arguments of the SQL script are :
1. Data Pump log directory - an Oracle Directory object is created during execution
Log name : impdp_<Source_DB>_<Source_Schema>_<Target_DB>_<Target_Schema>_<Date>.log
2. Source Database - a Database link to Source DB is created during execution
3. Source Schema
4. Target Database
5. Target Schema
6. and 7. Database link source user and password - usually SYSTEM will do
8. New user password:
if the target user already exists, this should be set to EXIST
if the target user doesn't exist, the user will be created by Data Pump (see note c.), and
argument 8 is the new password, or 'idem' if it should be set the same as the source user's password
9. Specifies the ROWS option : 0 = ROWS=Y , 1 = ROWS=N
10. SCN given to the FLASHBACK_SCN argument of Data Pump; it can be taken from the source DB:
col current_scn for 9999999999999999
select current_scn from v$database ;
( FLASHBACK_SCN gives a consistent view of the data selection; I was not able to plug FLASHBACK_TIME = "to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')" into the DP API. )
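For example, a call might look like this, connected to the target instance (all database names, schemas, paths, the password, and the SCN are hypothetical):
$ time sqlplus -s system @copy_to_empty.sql /tmp/dp_logs PRD1 HR INT1 HR_COPY system manager idem 0 6030373057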
C] Notes
a. I modified a template SQL script found on the Internet that correctly handles error termination of the Data Pump execution.
b. At the end of the script, schema compilation and statistics gathering occur. Statistics are excluded from the Data Pump job (the exclusion calls are shown just below) because of:
Bug 5071931 DATAPUMP IMPORT WITH REMAP TABLESPACE, AND SCHEMA IS VERY SLOW
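These are the two exclusion calls, exactly as used in the script in section D:
DBMS_DATAPUMP.METADATA_FILTER(handle=> h1, name => 'EXCLUDE_PATH_EXPR', value => '=''TABLE_STATISTICS''');
DBMS_DATAPUMP.METADATA_FILTER(handle=> h1, name => 'EXCLUDE_PATH_EXPR', value => '=''INDEX_STATISTICS''');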
c. DP has the ability to create the target user, keeping roles, system privileges & tablespace quota.
Grants to owned objects are kept, but grants on OTHER schemas' objects are LOST.
In other words, when a schema is copied to a target, the grants it held on other schemas' objects in the source database are gone; only the grants attached to its own objects are kept in the Data Pump copy of the schema.
Case of REMAP_SCHEMA: we also exclude grants when the schema name changes during the copy, as we might otherwise grant rights to users/roles we don't want to. Ex: this is the API equivalent of EXCLUDE=GRANT:
DBMS_DATAPUMP.METADATA_FILTER(handle=> h1, name => 'EXCLUDE_PATH_EXPR', value => '=''GRANT''');
Alternative: copy with EXCLUDE=GRANT, then generate the SQL file (impdp with SQLFILE=), modify it, and run it in the target instance (a sketch follows).
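A minimal sketch of that alternative over a dump file (directory, dump file, and schema names are hypothetical):
# 1. import everything except the grants
impdp system SCHEMAS=HR REMAP_SCHEMA=HR:HR_COPY EXCLUDE=GRANT DIRECTORY=DP_DIR DUMPFILE=hr.dmp LOGFILE=imp_hr.log
# 2. write the grant DDL to a SQL file instead of executing it
impdp system INCLUDE=GRANT SQLFILE=hr_grants.sql DIRECTORY=DP_DIR DUMPFILE=hr.dmp
# 3. edit hr_grants.sql ( adjust grantees/schema names as needed ), then run it in the target instance:
#    sqlplus system @hr_grants.sql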
d. Data Pump Import arguments
'TABLE_EXISTS_ACTION' => 'SKIP' ( we expect objects to be removed prior to the impdp )
Parallel degree is set to 2 ( the magic number ! )
To get the old exp/imp COMPRESS=N behaviour ( respects the INITIAL table segment clause ):
dbms_datapump.metadata_transform ( h1, 'STORAGE' , 0 , null ) ;
e. In case of a failure during execution, Data Pump jobs may remain in the target instance and should be removed as follows, executed under the SYSTEM user:
$ cat kill_dp_job.sql
-- For jobs that were stopped in the past and won't be restarted, delete the master table.
SELECT 'drop table '||o.owner||'."'||object_name||'" ;'
FROM dba_objects o, dba_datapump_jobs j
WHERE o.owner=j.owner_name AND o.object_name=j.job_name and state = 'NOT RUNNING'
AND j.job_name NOT LIKE 'BIN$%' order by 1 ;
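The query generates the DROP statements for the orphaned master tables; a line of output might look like this (job name hypothetical, matching the naming convention above), to be reviewed and then executed:
drop table SYSTEM."impdp_PRD1_HR_INT1_HR_COPY" ;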
D] The script ( the most important part being the Dbms_DataPump calls )
Calling example: time sqlplus -s system @<script>.sql <args>
$ cat copy_to_empty.sql
-- copy schema to existing empty user or non existing user ( new tablespace & password provided )
define logd=&1
define base_s=&2
define user_s=&3
define base_c=&4
define user_c=&5
define usr=&6
define psw=&7
define psw_n=&8
define no_r=&9
define scn=&10
set lines 150 pages 5000
set head off
set feedback off
set serveroutput on
set verify off
-- spool
COLUMN timecol new_value logfile NOPRINT
SELECT 'impdp_&base_s'||'_&user_s'||'_&base_c'||'_&user_c'||'_'||TO_char(sysdate,'DD-MM-YYYY_HH24MISS')||'.log' AS timecol FROM sys.dual;
exec dbms_output.put_line('** Starting copy ... ' || TO_char(sysdate,'DD/MM/YYYY HH24:MI:SS') );
-- execution
create database link &base_s.TEC connect to &usr identified by &psw using '&base_s' ;
create directory DP_DIR_&user_c as '&logd' ;
DECLARE
ind NUMBER; -- Loop index
spos NUMBER; -- String starting position
slen NUMBER; -- String length for output
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
--v_psw dba_users.password%type;
v_psw varchar2(30) ; -- password new user
v_row varchar2(8) ; -- rows=N mode if '1'
cur_scn number ; -- source
BEGIN
v_psw:='&psw_n' ;
v_row:='&no_r' ;
cur_scn:='&scn' ;
--cur_scn:=6030373057 ;
--select password into v_psw from dba_users where username = '&user_c';
h1 := Dbms_DataPump.Open(operation => 'IMPORT', job_mode => 'SCHEMA', job_name => 'impdp_&base_s'||'_&user_s'||'_&base_c'||'_&user_c', version => 'COMPATIBLE', remote_link => '&base_s.TEC' );
-- bug 10.2.0.4 and 5 Bug 8282214
-- dbms_datapump.set_parallel(handle => h1, degree => 1);
dbms_datapump.set_parallel(handle => h1, degree => 2);
Dbms_DataPump.Add_File(handle => h1, filename => '&logfile', directory => 'DP_DIR'||'_&user_c', filetype => 3);
dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''&user_s'')');
-- exclude statistics gathering because of Bug 5071931 DATAPUMP IMPORT WITH REMAP TABLESPACE, AND SCHEMA IS VERY SLOW
DBMS_DATAPUMP.METADATA_FILTER(handle=> h1, name => 'EXCLUDE_PATH_EXPR', value => '=''TABLE_STATISTICS''');
DBMS_DATAPUMP.METADATA_FILTER(handle=> h1, name => 'EXCLUDE_PATH_EXPR', value => '=''INDEX_STATISTICS''');
Dbms_DataPump.METADATA_REMAP( h1 , 'REMAP_SCHEMA', '&user_s' , '&user_c' );
Dbms_DataPump.Set_Parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP');
Dbms_DataPump.Set_Parameter(handle => h1, name => 'FLASHBACK_SCN', value => cur_scn);
-- Initial extent clause omitted
dbms_datapump.metadata_transform ( h1, 'STORAGE' , 0 , null ) ;
-- dbms_datapump.metadata_transform ( handle => h1, name => 'SEGMENT_ATTRIBUTES' , value => 'n' ) ;
-- test equivalent to ROWS=N
IF v_row = '1' THEN
dbms_datapump.data_filter(handle=> h1, name=> 'INCLUDE_ROWS' , value=>0);
END IF;
-- Start the job. An exception will be returned if something is not set up
-- properly. One possible exception that will be handled differently is the
-- success_with_info exception. success_with_info means the job started
-- successfully, but more information is available through get_status about
-- conditions around the start_job that the user might want to be aware of.
begin
dbms_datapump.start_job(h1);
dbms_output.put_line('Data Pump job started successfully');
exception
when others then
if sqlcode = dbms_datapump.success_with_info_num
then
dbms_output.put_line('Data Pump job started with info available:');
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end if;
else
raise;
end if;
end;
-- The import job should now be running. In the following loop, we will monitor
-- the job until it completes. In the meantime, progress information is
-- displayed.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- Display any work-in-progress (WIP) or error messages that were received for
-- the job.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
-- if v_psw is neither 'EXIST' nor 'idem', set it as the new password; otherwise the password stays the original one from the target user
IF v_psw != 'EXIST' and v_psw != 'idem' THEN
execute immediate 'alter user &user_c identified by '||v_psw||' ' ;
END IF;
-- Any exceptions that propagated to this point will be captured. The
-- details will be retrieved from get_status and displayed.
exception
when others then
dbms_output.put_line('Exception in Data Pump job');
dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);
if slen > 255
then
slen := 255;
end if;
while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText,spos,slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;
ind := le.NEXT(ind);
end loop;
end if;
end if;
END;
/
-- end
-- compile & stats
exec dbms_output.put_line('** Compilation and Statistics gathering ...') ;
EXEC DBMS_UTILITY.compile_schema(schema => '&user_c');
exec DBMS_STATS.GATHER_SCHEMA_STATS ( '&user_c',DBMS_STATS.AUTO_SAMPLE_SIZE,null,'FOR ALL INDEXED COLUMNS SIZE AUTO',2,'ALL',TRUE) ;
drop directory DP_DIR_&user_c ;
drop database link &base_s.TEC ;
exit
-- script ends here
E] Restrictions: Metalink 553337.1
1. Tables with a LONG column are not supported in an Export Data Pump job with the NETWORK_LINK parameter. An ORA-31679 error will be generated and the export will move on to the next table.
2. Tables with object_type columns are not supported in an Export Data Pump job with NETWORK_LINK parameter. An ORA-22804 error will be generated and the export will move on to the next table.
3. When both the NETWORK_LINK and the TABLES parameters are used, then only whole tables can be exported (not partitions of tables). Trying to export a partition of a table over a database link will fail with ORA-39203 (Partition selection is not supported over a network link) or in older releases with ORA-14100 (partition extended table name cannot refer to a remote object).
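Because of restriction 1, it can be worth checking the source schema for LONG columns before launching the copy; a minimal check (schema name hypothetical):
select table_name, column_name from dba_tab_columns where owner = 'HR' and data_type = 'LONG' ;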
F] Other syntax examples
-- table mode
h1 := Dbms_DataPump.Open(operation => 'IMPORT', job_mode => 'TABLE', job_name => 'impdp_&base_s'||'_&user_s'||'_&base_c'||'_&user_c', version => 'COMPATIBLE', remote_link => '&base_s.&user_s.TEC' );
-- table name expression
dbms_datapump.metadata_filter(handle=> h1, name => 'NAME_EXPR', value =>'LIKE ''AD%''', object_type => 'TABLE');
-- table name list
dbms_datapump.metadata_filter(handle=> h1, name => 'NAME_EXPR', value =>'IN (''ZX00'',''ZXM9'',''ZX3Y'',''ZX37'',''ZX40'')', object_type => 'TABLE');
-- remap tablespace
dbms_datapump.metadata_remap(h1,'REMAP_TABLESPACE','HRZX','HR_DAT');
-- table name exclusions
dbms_datapump.metadata_filter(handle=> h1, name => 'NAME_EXPR', value =>'NOT LIKE ''%LG10''', object_type => 'TABLE');
dbms_datapump.metadata_filter(handle=> h1, name => 'NAME_EXPR', value =>'NOT LIKE ''%AB10%''', object_type => 'TABLE');
G] 11G Update
Since we are using the 11g COMPRESS option, I added the following lines to disable the emission of the segment storage clauses to the target DB when it is an 11g one ( that way, we get the target's storage-compressed definitions ):
-- 4/ METADATA_TRANSFORM
-- Source initial extent clause is not used
dbms_datapump.metadata_transform ( h1, 'STORAGE' , 0 , null ) ;
select substr(version,1,2) into v_vers from v$instance ;
-- dbms_output.put_line ('Oracle Version: '|| v_vers);
if v_vers = '11' then -- 11g version
-- equivalent to transform=SEGMENT_ATTRIBUTES:n:table
dbms_datapump.metadata_transform ( handle => h1, name => 'SEGMENT_ATTRIBUTES' , value => 0 , object_type => 'TABLE' ) ;
end if;
-- If zero, inhibits the assignment of the exported OID during type or table creation. Instead, a new OID will be assigned.
-- dbms_datapump.metadata_transform ( h1, 'OID' , 0 , null ) ;
Table 27-15 Transforms Provided by the METADATA_TRANSFORM Procedure

Name               | Datatype | Object Type              | Meaning
PCTSPACE           | NUMBER   | TABLE, INDEX, TABLESPACE | Specifies a percentage multiplier used to alter extent allocations and datafile sizes. Used to shrink large tablespaces for testing purposes. Defaults to 100.
SEGMENT_ATTRIBUTES | NUMBER   | TABLE, INDEX             | If nonzero (TRUE), emit storage segment parameters. Defaults to 1.
STORAGE            | NUMBER   | TABLE                    | If nonzero (TRUE), emit storage clause. (Ignored if SEGMENT_ATTRIBUTES is zero.) Defaults to nonzero (TRUE).
OID                | NUMBER   | TYPE, TABLE              | If zero, inhibits the assignment of the exported OID during type or table creation. Instead, a new OID will be assigned. Use of this transform on object tables will cause breakage in REF columns that point to the table. Defaults to 1.
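As an illustration of PCTSPACE, this sketch would shrink all extent allocations on import to 30% of the source sizes (the multiplier is a hypothetical value):
dbms_datapump.metadata_transform ( h1, 'PCTSPACE' , 30 , null ) ;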
2] FULL DDL Export
It's important to get a FULL NO_ROWS export, to save object definitions, grants and so on.
A] Prerequisites, as SYS
grant select on SYS.v_$database to MY_OWNER ;
grant select on SYS.v_$database to SYSTEM ;
B] Create the procedure, as SYSTEM
drop procedure EXPDP_DB_FULL_METADATA_ONLY ;
CREATE OR REPLACE procedure MY_OWNER.EXPDP_DB_FULL_METADATA_ONLY (RETOUR OUT NUMBER) authid current_user
-- invoker's rights ( SYSTEM )
as
ind NUMBER; -- Loop index
spos NUMBER; -- String starting position
slen NUMBER; -- String length for output
h1 NUMBER; -- Data Pump job handle
percent_done NUMBER; -- Percentage of job complete
job_state VARCHAR2(30); -- To keep track of job state
le ku$_LogEntry; -- For WIP and error messages
js ku$_JobStatus; -- The job status from get_status
jd ku$_JobDesc; -- The job description from get_status
sts ku$_Status; -- The status object returned by get_status
v_DBC varchar2(32) ;
v_e_date varchar2(16) ;
v_ora_ver number ; -- 10 or 11+
-- equivalent command line:
-- expdp system FULL=y DIRECTORY=PRDUN_DP_DIR DUMPFILE=PRDUN_DDL.dmp PARALLEL=2 LOGFILE=e_PRDUN_DDL.log CONTENT=METADATA_ONLY
-- ( impdp system directory=PRDUN_DP_DIR dumpfile=PRDUN_DDL.dmp logfile=import_prdun_ddl.txt sqlfile=PRDUN_DDL.sql )
BEGIN
RETOUR:=0;
select name into v_DBC from v$database ;
select replace(trunc(sysdate),'/','_') into v_e_date from dual ;
select substr(REGEXP_REPLACE(banner, '([a-zA-Z ])', ''),1,2) into v_ora_ver from v$version where banner like '%Database%' ;
h1 := Dbms_DataPump.Open(operation => 'EXPORT', job_mode => 'FULL', job_name => ''||v_DBC||'_MY_OWNER.FULL_DDL.'||v_e_date||'', version => 'COMPATIBLE' );
dbms_datapump.set_parallel(handle => h1, degree => 2);
IF v_ora_ver = '10' THEN
dbms_output.put_line('DMP_FILE = '''||v_DBC||'_MY_OWNER.FULL_DDL.JOUR.'||v_e_date||'.dp.dmp') ;
Dbms_DataPump.Add_File(handle => h1, filename => ''||v_DBC||'_MY_OWNER.FULL_DDL.JOUR.'||v_e_date||'.dp.dmp', directory => ''||v_DBC||'_DP_DIR', filetype => 1);
ELSE
Dbms_DataPump.Add_File(handle => h1, filename => ''||v_DBC||'_MY_OWNER.FULL_DDL.JOUR.'||v_e_date||'.dp.oz.dmp', directory => ''||v_DBC||'_DP_DIR', filetype => 1);
Dbms_DataPump.Set_Parameter(handle => h1, name => 'COMPRESSION', value => 'ALL'); -- 11g
END IF;
Dbms_DataPump.Add_File(handle => h1, filename => ''||v_DBC||'_MY_OWNER.FULL_DDL.JOUR.'||v_e_date||'.dp.log', directory => ''||v_DBC||'_DP_DIR', filetype => 3);
dbms_datapump.data_filter(handle=> h1, name => 'INCLUDE_ROWS' , value => 0);
-- Start the job. An exception will be returned if something is not set up
-- properly. One possible exception that will be handled differently is the
-- success_with_info exception. success_with_info means the job started
-- successfully, but more information is available through get_status about
-- conditions around the start_job that the user might want to be aware of.
begin
dbms_datapump.start_job(h1);
dbms_output.put_line('Data Pump job started successfully');
exception
when others then
if sqlcode = dbms_datapump.success_with_info_num
then
dbms_output.put_line('Data Pump job started with info available:');
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end if;
else
raise;
end if;
end;
-- The export job should now be running. In the following loop, we will monitor
-- the job until it completes. In the meantime, progress information is
-- displayed.
percent_done := 0;
job_state := 'UNDEFINED';
while (job_state != 'COMPLETED') and (job_state != 'STOPPED') loop
dbms_datapump.get_status(h1,
dbms_datapump.ku$_status_job_error +
dbms_datapump.ku$_status_job_status +
dbms_datapump.ku$_status_wip,-1,job_state,sts);
js := sts.job_status;
-- If the percentage done changed, display the new value.
if js.percent_done != percent_done
then
dbms_output.put_line('*** Job percent done = ' ||
to_char(js.percent_done));
percent_done := js.percent_done;
end if;
-- Display any work-in-progress (WIP) or error messages that were received for
-- the job.
if (bitand(sts.mask,dbms_datapump.ku$_status_wip) != 0)
then
le := sts.wip;
else
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
else
le := null;
end if;
end if;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
dbms_output.put_line(le(ind).LogText);
ind := le.NEXT(ind);
end loop;
end if;
end loop;
-- Indicate that the job finished and detach from it.
dbms_output.put_line('Job has completed');
dbms_output.put_line('Final job state = ' || job_state);
dbms_datapump.detach(h1);
-- Any exceptions that propagated to this point will be captured. The
-- details will be retrieved from get_status and displayed.
exception
when others then
RETOUR:=1;
dbms_output.put_line('Exception in Data Pump job');
dbms_datapump.get_status(h1,dbms_datapump.ku$_status_job_error,0,
job_state,sts);
if (bitand(sts.mask,dbms_datapump.ku$_status_job_error) != 0)
then
le := sts.error;
if le is not null
then
ind := le.FIRST;
while ind is not null loop
spos := 1;
slen := length(le(ind).LogText);
if slen > 255
then
slen := 255;
end if;
while slen > 0 loop
dbms_output.put_line(substr(le(ind).LogText,spos,slen));
spos := spos + 255;
slen := length(le(ind).LogText) + 1 - spos;
end loop;
ind := le.NEXT(ind);
end loop;
end if;
end if;
END;
/
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
job_name => 'EXPDP_DB_F_METADATA_ONLY',
job_type => 'PLSQL_BLOCK',
job_action => 'DECLARE num number ; BEGIN MY_OWNER.EXPDP_DB_FULL_METADATA_ONLY(num); END;',
start_date => trunc(sysdate)+05/24, /* 05:00 */
repeat_interval => 'FREQ=DAILY; BYDAY=TUE,WED,THU,FRI,SAT',
enabled => TRUE,
comments => 'AUC 14/03/2014: Export DB DDL');
END;
/
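To check the scheduled runs afterwards, a query along these lines can be used (the view is standard; the job name is the one created above):
select job_name, status, actual_start_date from dba_scheduler_job_run_details where job_name = 'EXPDP_DB_F_METADATA_ONLY' order by actual_start_date desc ;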