Goal       : DP SCHEMAS / TABLES mode
Version    : 10gR2, 11gR2, 12cR1
Executable : expdp, impdp
Log as     : SYSTEM
Look at the dumpfile header:
SET serveroutput on SIZE 1000000
exec show_dumpfile_info('DP_DIRECTORY','APP.dp.dmp') ;
1] Whole Schema
Here is the syntax, assuming an EE edition with the Advanced Compression Option in 11g and 12c ; if not, remove the compression arguments.
DATAPUMP SCHEMA EXPORT Syntax

10gR2 - PARALLEL mode ; CONSISTENT :
expdp system SCHEMAS=SCH1,SCH2,SCH3 DIRECTORY=<BASE>_DP_DIR DUMPFILE=SCHEM.dp.dmp LOGFILE=SCHEM.dp.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\""

11gR2 - same, plus DUMPFILE COMPRESSION :
expdp system schemas=${i} DIRECTORY=${BDc}_DP_DIR DUMPFILE=${file_gen}.dmp LOGFILE=${file_gen}.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" compression=all

12cR1 - same, plus COMPRESSION level High :
expdp system schemas=${i} DIRECTORY=${BDc}_DP_DIR DUMPFILE=${file_gen}.dmp LOGFILE=${file_gen}.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" compression=all compression_algorithm=high
DATAPUMP SCHEMA IMPORT Syntax

10gR2 - PARALLEL mode :
impdp system DUMPFILE=<BASE>_DP_DIR:expdp_owner.dmp LOGFILE=<BASE>_DP_DIR:impdp_owner.log PARALLEL=2

11gR2 - same, plus IGNORE STORAGE CLAUSE to reorganize segments :
impdp system DUMPFILE=<BASE>_DP_DIR:expdp_owner.dmp LOGFILE=<BASE>_DP_DIR:impdp_owner.log PARALLEL=2 transform=SEGMENT_ATTRIBUTES:n:table

12cR1 - same, plus DISABLE REDO GENERATION ( NOLOGGING ) :
impdp system DUMPFILE=<BASE>_DP_DIR:expdp_owner.dmp LOGFILE=<BASE>_DP_DIR:impdp_owner.log PARALLEL=2 transform=SEGMENT_ATTRIBUTES:n:table transform=disable_archive_logging:y
See also : Segment Compression / 4] Datapump Copy & Compress from 10G to 11G.
1.0 Native export translation to expdp
exp system owner=SCH1,SCH2,SCH3 file=SCHEMAS.dmp log=SCHEM.JOUR.log direct=y consistent=y
=>
expdp system SCHEMAS=SCH1,SCH2,SCH3 DIRECTORY=<BASE>_DP_DIR DUMPFILE=SCHEM.dp.dmp LOGFILE=SCHEM.JOUR.dp.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\""
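The trickiest part of the translation is the shell quoting of FLASHBACK_TIME. A standalone sketch of the escaping (the build_flashback_arg helper name is ours, not a Data Pump feature):

```shell
# Print the FLASHBACK_TIME argument exactly as it is passed to expdp above.
# The escaped \" pair survives the outer shell quotes, so the Data Pump
# client receives the to_timestamp() expression as one quoted value.
build_flashback_arg() {
  fmt='YYYY-MM-DD HH24:MI:SS'
  printf 'flashback_time="\\"to_timestamp(to_char(sysdate, '\''%s'\''), '\''%s'\'')\\""\n' "$fmt" "$fmt"
}
build_flashback_arg
```

Building the argument in one place avoids re-typing the double-escaped quotes in every script.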
1.1 How to find the applicative schemas in a DB
Example in 11G.
-- schema list : users owning objects other than synonyms and views
select distinct instance_name, substr(username,1,3) from (
  select instance_name, username, nvl2(nb_obj,'Y','N') sch from (
    with A as ( select distinct owner , count(1) over ( partition by owner ) nb_obj
                from dba_objects
                where object_type not in ('SYNONYM', 'VIEW') and object_name not like 'TMP%' )
    select instance_name , username , nb_obj
    from dba_users b LEFT OUTER JOIN A on b.username = A.owner , v$instance
  )
) where sch = 'Y'
-- exclude 11G technical schemas
and username not in ('APEX_030200','APPQOSSYS','CTXSYS','DBSNMP','EXFSYS','FLOWS_FILES','MDSYS','OLAPSYS','ORACLE_OCM','ORDDATA','ORDPLUGINS','ORDSYS','OUTLN','OWBSYS','SCOTT','SYS','SYSMAN','SYSTEM','WMSYS','XDB')
order by 1,2
1.2 KSH : Use this SQL in a loop
#!/bin/ksh
# Datapump Billetique 11G
# exit code : 0 ok, 1 ko
# check arguments
if [ $# -lt 1 ] ; then
  echo "Missing argument"
  echo "Usage: $0 TargetDB"
  exit 1
fi
function get_sch {
  con_str=$1
  sql_fil=$2
  SCHE=$( sqlplus -s ${con_str} @${sql_fil} )
  print ${SCHE}
}
BDc=$1
sys_psw="psw$( echo ${BDc} | tr '[:upper:]' '[:lower:]' | cut -c3 )$( echo ${BDc} | tr '[:upper:]' '[:lower:]' | cut -c5 )"
Subj="[${BDc}] Billetique - Export Datapump"
# 1/ INIT
. /etc/profile.d/oracle.sh
export ORACLE_SID=${BDc}
usr_scr_dir=/ORA_UTILS/oracle/dba/MOV
usr_sql_dir=/ORA_UTILS/oracle/dba/MOV/sql
sql_path=${usr_sql_dir}/liste_schemas_11g.sql
dpu_path="/WORK_EXP/oracle/${BDc}"
date_dp=$( date '+%y%m%d_%H%M%S' )
# 1'/ remove dumps older than 7 days
find ${dpu_path} -name "*.dmp" -mtime +7 -exec rm {} \;
# 2/ loop on the schema list
li_sch=$( get_sch system/${sys_psw}@${BDc} ${sql_path} )
##print ${li_sch} ; exit 0
for i in ${li_sch} ; do
  file_gen=expdp_${BDc}_${i}_${date_dp} ; print "** ${file_gen} **"
  expdp system/${sys_psw}@${BDc} schemas=${i} DIRECTORY=${BDc}_DP_DIR DUMPFILE=${file_gen}.dmp LOGFILE=${file_gen}.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" compression=all
  ret_char=$?
  if (( ${ret_char} != 0 )); then
    # mail the Data Pump log on any non-zero status
    cat ${dpu_path}/${file_gen}.log | mailx -s "${Subj} ${i} status: ${ret_char}" 'dba@comp_name.fr'
  fi
done
exit 0
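The sys_psw line above derives the password from fixed character positions of the lower-cased DB name; a standalone check of that derivation (the DB name PRODX is purely illustrative):

```shell
# Rebuild sys_psw the same way the script does:
# "psw" + 3rd character + 5th character of the lower-cased DB name.
BDc=PRODX      # illustrative target DB name
sys_psw="psw$( echo ${BDc} | tr '[:upper:]' '[:lower:]' | cut -c3 )$( echo ${BDc} | tr '[:upper:]' '[:lower:]' | cut -c5 )"
echo "${sys_psw}"    # prints "pswox" for PRODX
```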
1.3 DP Export
- Create the directory, then run a consistent export ; example = SPE5 schema
SQL> CREATE OR REPLACE DIRECTORY <MYAPP>_DP_DIR AS '<dir_path>';
Directory created.
$ expdp system schemas=SPE5 DUMPFILE=<BASE>_DP_DIR:expdp_owner.dmp LOGFILE=<BASE>_DP_DIR:expdp_owner.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\""
Export: Release 10.2.0.5.0 - 64bit Production on Tuesday, 07 February, 2012 14:31:11
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** schemas=SPE5 DUMPFILE=<BASE>_DP_DIR:expdp_owner.dmp LOGFILE=<BASE>_DP_DIR:expdp_owner.log parallel=2 flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 802.3 MB
Processing object type SCHEMA_EXPORT/USER
. . exported "SPE5"."PRE" 61.01 MB 297904 rows
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
…
. . exported "SPE5"."VERSCOU_VERSCOU" 0 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/WORK_EXP/oracle/BASE/expdp_owner.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:34:03
- Import the schema back
$ time impdp system DUMPFILE=<BASE>_DP_DIR:expdp_owner.dmp LOGFILE=<BASE>_DP_DIR:impdp_owner.log PARALLEL=2
2] Only Selected Tables
2.1 TABLES=
Export
Note : UDE-00012: table mode exports only allow objects from one schema
$ expdp system DUMPFILE=<SCH>_DP_DIR:expdp_sel_HR_<SCH>.dmp LOGFILE=<SCH>_DP_DIR:expdp_sel_HR_<SCH>.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" tables=APP.zd4f,APP.zd00,APP.ze00,APP.<SCH>_zeorcg,APP.ze01,ze0a,zd01,zd0f
Export: Release 10.2.0.5.0 - 64bit Production on Wednesday, 23 May, 2012 16:25:29
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01": system/******** DUMPFILE=<SCH>_DP_DIR:expdp_sel_HR_<SCH>.dmp LOGFILE=APPLI01_DP_DIR:expdp_sel_HR_<SCH>.log parallel=2 flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')" tables=APP.zd4f,APP.zd00,APP.ze00,APP.<SCH>_zeorcg,APP.ze01,ze0a,zd01,zd0f
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 66.18 MB
. . exported "APP"."ZD01" 46.59 MB 56905 rows
. . exported "APP"."ZD00" 3.513 MB 36467 rows
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "APP"."<SCH>_ZEORCG" 2.304 MB 5384 rows
. . exported "APP"."ZE01" 254.9 KB 1728 rows
. . exported "APP"."ZD0F" 234.9 KB 808 rows
. . exported "APP"."ZE00" 166.7 KB 1728 rows
. . exported "APP"."ZE0A" 130.5 KB 2402 rows
. . exported "APP"."ZD4F" 6.398 KB 26 rows
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
/work/oracle/<BASE>/exp/<SCH>/expdp_sel_HR_<SCH>.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 16:25:44
Import
Without remap, the data goes back to the same schema.
$ time impdp system DUMPFILE=<SCH>_DP_DIR:expdp_sel_HR_<SCH>.dmp LOGFILE=DEVQU_DP_DIR:impdp_sel_HR_<SCH>.log PARALLEL=2 TABLE_EXISTS_ACTION=TRUNCATE
2.2 Import Selected tables from a dumpfile
TABLE_EXISTS_ACTION - the possible values have the following effects:
SKIP leaves the table as is and moves on to the next object. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
APPEND loads rows from the source and leaves existing rows unchanged.
TRUNCATE deletes existing rows and then loads rows from the source.
REPLACE drops the existing table and then creates and loads it from the source. This is not a valid option if the CONTENT parameter is set to DATA_ONLY.
The following considerations apply when you are using these options:
When you use TRUNCATE or REPLACE, make sure that rows in the affected tables are not targets of any referential constraints.
When you use SKIP, APPEND, or TRUNCATE, existing table-dependent objects in the source, such as indexes, grants, triggers, and constraints, are ignored. For REPLACE, the dependent objects are dropped and re-created from the source, if they were not explicitly or implicitly excluded (using EXCLUDE) and they exist in the source dump file or system.
When you use APPEND or TRUNCATE, checks are made to ensure that rows from the source are compatible with the existing table prior to performing any action.
The existing table is loaded using the external tables access method if the existing tables have active constraints and triggers. However, be aware that if any row violates an active constraint, the load fails and no data is loaded.
If you have data that must be loaded, but may cause constraint violations, consider disabling the constraints, loading the data, and then deleting the problem rows before reenabling the constraints.
When you use APPEND, the data is always loaded into new space; existing space, even if available, is not reused. For this reason, you may wish to compress your data after the load.
$ impdp system DUMPFILE=<BASE>_DP_DIR:<BASE>.<schema>.preprod.dp.dmp LOGFILE=<BASE>_DP_DIR:impdp_<BASE>.<schema>.preprod.dp.log tables=<schema>.<sch>_cploc,<schema>.<sch>_cedex,<schema>.<sch>_loc TABLE_EXISTS_ACTION=TRUNCATE
Import: Release 10.2.0.5.0 - 64bit Production on Monday, 28 November, 2011 14:44:02
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TABLE_01": system/******** DUMPFILE=<BASE>_DP_DIR:<BASE>.<schema>.preprod.dp.dmp LOGFILE=<BASE>_DP_DIR:impdp_<BASE>.<schema>.preprod.dp.log tables=<schema>.<sch>_cploc,<schema>.<sch>_cedex,<schema>.<sch>_loc TABLE_EXISTS_ACTION=TRUNCATE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39153: Table "<SCHEMA>"."<SCH>_CEDEX" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
ORA-39153: Table "<SCHEMA>"."<SCH>_CPLOC" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
ORA-39153: Table "<SCHEMA>"."<SCH>_LOC" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "<SCHEMA>"."<SCH>_CEDEX" 1.050 MB 24586 rows
. . imported "<SCHEMA>"."<SCH>_CPLOC" 1.521 MB 39382 rows
. . imported "<SCHEMA>"."<SCH>_LOC" 1.325 MB 39056 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/GRANT/WITH_GRANT_OPTION/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_TABLE_01" completed with 3 error(s) at 14:45:08
Note : the shell return code 5 is the warning code ( job completed with non-fatal errors )
$ echo $?
5
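The Data Pump client's shell status can be branched on directly; a minimal sketch (the dp_check helper name is ours; 0 = success and 5 = completed with non-fatal errors, as in the run above, while anything else is treated as a failure):

```shell
# Classify a Data Pump client exit status (see "echo $?" above).
dp_check() {
  case "$1" in
    0) echo "success" ;;
    5) echo "completed with errors" ;;   # e.g. ORA-39153 warnings
    *) echo "failed" ;;
  esac
}
dp_check 5    # prints "completed with errors"
```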
- Import with REMAP_TABLESPACE
impdp system DUMPFILE=<BASE>_DP_DIR:expdp_sel_HR_<SCH>.dmp LOGFILE=<BASE>_DP_DIR:impdp_sel_HR_<SCH>.log TABLE_EXISTS_ACTION=TRUNCATE remap_tablespace=HRZD:HR_DAT remap_tablespace=HRZE:HR_DAT remap_tablespace=APPLIT:HR_DAT
2.2 INCLUDE - TABLE_NAME NOT LIKE, via parfile
$ cat expdp_MiniAPP.par
flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
INCLUDE=TABLE:"NOT LIKE '%LG10'"
INCLUDE=TABLE:"NOT LIKE '%AB10%'"
INCLUDE=TABLE:"NOT LIKE '%TD%'"
INCLUDE=TABLE:"NOT LIKE '%YW%'"
INCLUDE=TABLE:"NOT LIKE '%ZO%'"
INCLUDE=TABLE:"NOT LIKE '%GE%'"
INCLUDE=TABLE:"NOT LIKE '%VS%'"
INCLUDE=TABLE:"NOT LIKE '%GO%'"
INCLUDE=TABLE:"NOT LIKE '%MX%'"
INCLUDE=TABLE:"NOT LIKE '%TR%'"
INCLUDE=TABLE:"NOT LIKE 'PORTLET%'"
2.3 INCLUDE - TABLE_NAME LIKE, command line argument
- Export only these tables ( not the whole schema )
$ expdp system schemas=APP_OWNER DUMPFILE=<BASE>_DP_DIR:expdp_srs_td.dmp LOGFILE=<BASE>_DP_DIR:expdp_srs_td.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" INCLUDE=TABLE:\""LIKE 'TD_%'\""
Export: Release 10.2.0.5.0 - 64bit Production on Tuesday, 07 February, 2012 14:43:05
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** schemas=APP_OWNER DUMPFILE=<BASE>_DP_DIR:expdp_srs_td.dmp LOGFILE=<BASE>_DP_DIR:expdp_srs_td.log parallel=2 flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')" INCLUDE=TABLE:"LIKE 'TD_%'"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 228 MB
. . exported "APP_OWNER"."TD_TAB_14" 67.15 MB 519094 rows
. . exported "APP_OWNER"."TD_TAB_08" 29.60 MB 239444 rows
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "APP_OWNER"."TD_TAB_01" 21.52 MB 260923 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
. . exported "APP_OWNER"."TD_DET_RESS_MENS" 12.88 MB 178765 rows
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "APP_OWNER"."TD_TAB_19" 10.34 MB 145556 rows
. . exported "APP_OWNER"."TD_TAB_18" 10.04 MB 154897 rows
. . exported "APP_OWNER"."TD_TAB_15" 9.315 MB 113587 rows
. . exported "APP_OWNER"."TD_TAB_11" 8.077 MB 113587 rows
. . exported "APP_OWNER"."TD_TAB_04" 6.988 MB 113473 rows
. . exported "APP_OWNER"."TD_TAB_07" 6.587 MB 82070 rows
. . exported "APP_OWNER"."TD_TAB_02" 25.62 KB 190 rows
. . exported "APP_OWNER"."TD_TAB_03" 8.664 KB 3 rows
. . exported "APP_OWNER"."TD_TAB_10" 6.835 KB 1 rows
. . exported "APP_OWNER"."TD_TAB_12" 269.2 KB 2850 rows
. . exported "APP_OWNER"."TD_TAB_13" 21.85 KB 147 rows
. . exported "APP_OWNER"."TD_TAB_17" 23.29 KB 163 rows
. . exported "APP_OWNER"."TD_TAB_20" 103.0 KB 2196 rows
. . exported "APP_OWNER"."TD_TAB_21" 536.4 KB 12635 rows
. . exported "APP_OWNER"."TD_TAB_22" 14.42 KB 1 rows
. . exported "APP_OWNER"."TD_EVENEMENT" 0 KB 0 rows
. . exported "APP_OWNER"."TD_TAB_09" 0 KB 0 rows
Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
/WORK_EXP/oracle/<BASE>/expdp_srs_td.dmp
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 14:43:19
- Import
$ impdp system DUMPFILE=<BASE>_DP_DIR:expdp_srs_td.dmp LOGFILE=<BASE>_DP_DIR:impdp_srs_td.log
TABLE_EXISTS_ACTION=TRUNCATE
Import: Release 10.2.0.5.0 - 64bit Production on Tuesday, 07 February, 2012 14:45:14
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** DUMPFILE=<BASE>_DP_DIR:expdp_srs_td.dmp LOGFILE=<BASE>_DP_DIR:impdp_srs_td.log TABLE_EXISTS_ACTION=TRUNCATE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39153: Table "APP_OWNER"."TD_TAB_09" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
…
ORA-39153: Table "APP_OWNER"."TD_TAB_12" exists and has been truncated. Data will be loaded but all dependent metadata will be skipped due to table_exists_action of truncate
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "APP_OWNER"."TD_TAB_14" 67.15 MB 519094 rows
. . imported "APP_OWNER"."TD_TAB_08" 29.60 MB 239444 rows
. . imported "APP_OWNER"."TD_TAB_01" 21.52 MB 260923 rows
. . imported "APP_OWNER"."TD_DET_RESS_MENS" 12.88 MB 178765 rows
. . imported "APP_OWNER"."TD_TAB_19" 10.34 MB 145556 rows
. . imported "APP_OWNER"."TD_TAB_18" 10.04 MB 154897 rows
. . imported "APP_OWNER"."TD_TAB_15" 9.315 MB 113587 rows
. . imported "APP_OWNER"."TD_TAB_11" 8.077 MB 113587 rows
. . imported "APP_OWNER"."TD_TAB_04" 6.988 MB 113473 rows
. . imported "APP_OWNER"."TD_TAB_07" 6.587 MB 82070 rows
. . imported "APP_OWNER"."TD_TAB_02" 25.62 KB 190 rows
. . imported "APP_OWNER"."TD_TAB_03" 8.664 KB 3 rows
. . imported "APP_OWNER"."TD_TAB_10" 6.835 KB 1 rows
. . imported "APP_OWNER"."TD_TAB_12" 269.2 KB 2850 rows
. . imported "APP_OWNER"."TD_TAB_13" 21.85 KB 147 rows
. . imported "APP_OWNER"."TD_TAB_17" 23.29 KB 163 rows
. . imported "APP_OWNER"."TD_TAB_20" 103.0 KB 2196 rows
. . imported "APP_OWNER"."TD_TAB_21" 536.4 KB 12635 rows
. . imported "APP_OWNER"."TD_TAB_22" 14.42 KB 1 rows
. . imported "APP_OWNER"."TD_EVENEMENT" 0 KB 0 rows
. . imported "APP_OWNER"."TD_TAB_09" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 21 error(s) at 14:45:45
Another one : with 2 INCLUDE arguments
$ impdp system directory=<BASE>_DP_DIR dumpfile=<dumpname>.dp.dmp logfile=<dumpname>.log SCHEMAS=BI_DTM INCLUDE=TABLE:"\"like 'SOURCE_%_2013%'\"" INCLUDE=TABLE:"\"not in ('SOURCE_2013_0201','SOURCE_2013_S4','SOURCE_2013')\""
Other examples :
INCLUDE=TABLE:"\"IN('INV_AVI','INV_AVI_CHOIX','INV_FAISCEAUX','INV_FERMETURE_SITE')\"" \
INCLUDE=SEQUENCE:"\"IN('INV_AVI_CHOIX_SEQ','INV_AVI_SEQ','INV_FAISCEAU_SEQ','INV_FERMETURE_SEQ','INV_GUIDES_SEQ')\"" \
INCLUDE=VIEW:"\"IN('INV_CONS_GUIDE_PILOTE')\""
2.4 Import only Tables Data and Sequences from a Dumpfile
See the list of all possible DP Export elements : SELECT object_path, comments FROM schema_export_objects
Check the DP Export logfile to see which object types were really exported : FUNCTION , PROCEDURE , PACKAGE , SYNONYM
Then exclude them in the impdp parfile :
SCHEMAS=OPA_DWH
directory=<BASE>_DP_DIR
dumpfile=expdp_<BASE>_OPA_DWH.dmp
logfile=impdp_expdp_<BASE>_OPA_DWH.dmp.log
PARALLEL=2
TABLE_EXISTS_ACTION=TRUNCATE
EXCLUDE=FUNCTION
EXCLUDE=PROCEDURE
EXCLUDE=PACKAGE
EXCLUDE=SYNONYM
2.5 EXCLUDE - TABLE_NAME LIKE, command line argument
- Export the whole schema without those tables
$ expdp system schemas=APP DUMPFILE=<BASE>_DP_DIR:expdp_app_only.dmp LOGFILE=<BASE>_DP_DIR:expdp_app_only.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" EXCLUDE=TABLE:\""LIKE 'APP_SCH_%'\""
- Export the whole schema with only those tables
$ expdp system schemas=APP DUMPFILE=<BASE>_DP_DIR:expdp_app_only_2.dmp LOGFILE=<BASE>_DP_DIR:expdp_app_only_2.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" EXCLUDE=TABLE:\""NOT LIKE 'APP_SCH_%'\""
2.6 EXCLUDE - TABLE IN, command line argument
expdp system schemas=APP DUMPFILE=APP_DP_DIR:expdp_<BASE>_APP_281114.dmp LOGFILE=APP_DP_DIR:expdp_<BASE>_APP_281114.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" EXCLUDE=TABLE:\""IN('TABLE_01','TABLE_02','TABLE_03')\""
2.7 Bring back only those tables from a whole schema dump
$ cat tab.par
TABLES=APP.AGENTS,APP.DIRECTIONS
$ time impdp system DUMPFILE=<dmp> LOGFILE=<log> PARALLEL=2 parfile=tab.par
3] Only Selected Tables and Rows
3.0 Native export with Query
exp ija_2011090${i}/latitude file=ija_lattitudev5_ABONNES_CHEQUIER_2011090${i}.dmp tables=ABONNES_CHEQUIER parfile=ija_2011090${i}.par log=e_ABONNES_CHEQUIER_2011090${i}.log
$ cat *3.par
Query="where IDABONNE in ('2502', '2522', '2524')"
3.1 QUERY - WHERE CLAUSE , a simple case
SQL> create directory <SCH>_DP_DIR as '/work/oracle/<BASE>/exp/<SCH>' ;
Directory created.
SQL> grant read,write on directory <SCH>_DP_DIR to HR ;
Grant succeeded.
$ cat <SCH>_YW00_MT2011.par
FLASHBACK_TIME = "to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
INCLUDE=TABLE:"LIKE 'YW%'"
QUERY="WHERE nudoss IN (SELECT nudoss FROM zx00 WHERE perpai LIKE 'MT2011%')"
$ export NLS_DATE_FORMAT='YYYY-MM-DD'
$ time expdp hr DUMPFILE=<SCH>_DP_DIR:expdp_<SCH>_YW00_MT2011.dmp LOGFILE=<SCH>_DP_DIR:expdp_<SCH>_YW00_MT2011.log PARALLEL=2 parfile=<SCH>_YW00_MT2011.par
Export: Release 10.2.0.5.0 - 64bit Production on Thursday, 17 November, 2011 16:24:20
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "APP"."SYS_EXPORT_SCHEMA_01": hr/******** DUMPFILE=<SCH>_DP_DIR:expdp_<SCH>_YW00_MT2011.dmp LOGFILE=<SCH>_DP_DIR:expdp_<SCH>_YW00_MT2011.log PARALLEL=2 parfile=<SCH>_YW00_MT2011.par
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 87.64 GB -- false : the BLOCKS estimate is computed before the QUERY filter is applied
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
. . exported "APP"."YW5C" 1.613 GB 21 929 639 rows
Instead of the full table :
select count(1) from APP.YW5C -- 87 233 222 rows
Import
$ mv expdp_<SCH>_YW00_MT2011.dmp /WORK_EXP/oracle/DEVQU
$ export NLS_DATE_FORMAT='YYYY-MM-DD'
DEVQU sr-dev-1:/WORK_EXP/oracle/DEVQU > time impdp system DUMPFILE=DEVQU_DP_DIR:expdp_<SCH>_YW00_MT2011.dmp LOGFILE=DEVQU_DP_DIR:impdp_<SCH>_YW00_MT2011.log PARALLEL=4 TABLE_EXISTS_ACTION=APPEND
Another one, command line with partitions
$ time expdp system DUMPFILE=BAT_DP_DIR:expdp_source_tickets.dmp LOGFILE=PPRQU_DP_DIR:expdp_source_tickets.log TABLES=SCH.source_tickets PARALLEL=2 QUERY="\"WHERE to_char(date_heure_debut,'RRRR')>2014\"" flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\""
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 2.320 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
. . exported "SCH"."TICKETS_TABLE":"P201409" 9.375 KB 0 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCH"."TICKETS_TABLE":"P201403" 9.375 KB 0 rows
. . exported "SCH"."TICKETS_TABLE":"P201406" 9.375 KB 0 rows
. . exported "SCH"."TICKETS_TABLE":"P201410" 9.375 KB 0 rows
. . exported "SCH"."TICKETS_TABLE":"P201503" 124.6 MB 1129801 rows
. . exported "SCH"."TICKETS_TABLE":"P201402" 9.375 KB 0 rows
. . exported "SCH"."TICKETS_TABLE":"P201404" 9.375 KB 0 rows
. . exported "SCH"."TICKETS_TABLE":"P201501" 122.6 MB 1112868 rows
. . exported "SCH"."TICKETS_TABLE":"P201504" 121.9 MB 1109083 rows
$ impdp system@BASE1D DUMPFILE=BAT_DP_DIR:expdp_source_tickets.dmp LOGFILE=BASE1_DP_DIR:impdp_source_tickets.log TABLE_EXISTS_ACTION=REPLACE
. . imported "SCH"."TICKETS_TABLE":"P201409" 9.375 KB 0 rows
. . imported "SCH"."TICKETS_TABLE":"P201403" 9.375 KB 0 rows
. . imported "SCH"."TICKETS_TABLE":"P201406" 9.375 KB 0 rows
. . imported "SCH"."TICKETS_TABLE":"P201410" 9.375 KB 0 rows
. . imported "SCH"."TICKETS_TABLE":"P201503" 124.6 MB 1129801 rows
. . imported "SCH"."TICKETS_TABLE":"P201402" 9.375 KB 0 rows
. . imported "SCH"."TICKETS_TABLE":"P201404" 9.375 KB 0 rows
. . imported "SCH"."TICKETS_TABLE":"P201501" 122.6 MB 1112868 rows
. . imported "SCH"."TICKETS_TABLE":"P201504" 121.9 MB 1109083 rows
3.2 QUERY - WHERE CLAUSE , a more complex case
Be careful : a table with the same table_name will be exported from every listed schema in which it exists.
$ cat expdp_sel_APPLI01.par
schemas=FCS,HSY,INSER,MPL5,MTB_2,HR,APPLI01
include=TABLE:"IN ('ECDG','EPDG','HSY_PERIODE','ORGANISME','NATURE','TYPESTAGE','PERSONNEL','PLAN','PLANSESSION','STAGE','REALISE','REALISESESSION','MPL5_T_SITE_ADRESSE','COMMANDE','PARAMETRE','MARCHE_COMPL','T_APP_SCH_TIER','ZD00','ZD01','ZY3B','ZY00','ZY39','ZC00','ZC01','ZE00','ZE01','ZYEI','ZYPO','VM_HR_ZY3B','VM_HR_ZYAG','VM_HR_ZYFL','VM_HR_ZYGF','VM_HR_ZYGR','VM_HR_ZYPO','VM_HR_ZYTL','VM_HR_ZYYJ')"
QUERY=fcs.ecdg:"WHERE TIERNUM IS NOT NULL AND SERVCOD LIKE '153510%'"
QUERY=fcs.epdg:"WHERE TIERNUM IS NOT NULL AND SERVCOD LIKE '153510%'"
QUERY=hsy.hsy_periode:"where trim(hsy_agt_mat) in (SELECT trim(zy00.matcle) FROM app.zy3b b3, app.zy00 WHERE zy00.nudoss = b3.nudoss AND zy00.socdos = '099' AND b3.idjb00 LIKE 'TOS%')"
QUERY=inser.personnel:"where trim(codepersonnel) in (SELECT trim(zy00.matcle) FROM app.zy3b b3, app.zy00 WHERE zy00.nudoss = b3.nudoss AND zy00.socdos = '099' AND b3.idjb00 LIKE 'TOS%')"
QUERY=inser.plan:"where numpersonnel in ( select personnel.numpersonnel from inser.personnel where trim(codepersonnel) in (SELECT trim(zy00.matcle) FROM app.zy3b b3, app.zy00 WHERE zy00.nudoss = b3.nudoss AND zy00.socdos = '099' AND b3.idjb00 LIKE 'TOS%'))"
QUERY=inser.plansession:"where numsession in ( select plan.numsession from inser.plan where PLAN.numpersonnel in ( select personnel.numpersonnel from inser.personnel where trim(codepersonnel) in (SELECT trim(zy00.matcle) FROM app.zy3b b3, app.zy00 WHERE zy00.nudoss = b3.nudoss AND zy00.socdos = '099' AND b3.idjb00 LIKE 'TOS%')))"
$ cat expdp_sel_APPLI01_2.par
schemas=APP_01,APP_02
include=TABLE:"IN ('STAGE','ZD00','ZD01','ZY00')"
QUERY=inser.stage:"where numstage in (select realisesession.numstage from INSER.realisesession where realisesession.numsession in (select realise.numsession from inser.realise where realise.numpersonnel in (select personnel.numpersonnel from inser.personnel where trim(codepersonnel) in (SELECT trim(zy00.matcle) FROM app.zy3b b3, app.zy00 WHERE zy00.nudoss = b3.nudoss AND zy00.socdos = '099' AND b3.idjb00 LIKE 'TOS%') ) ))"
QUERY=APP.ZD00:"where cdstco = 'YLP' AND cdregl = 'FR1'"
QUERY=APP.ZD01:"where nudoss in (select zd00.nudoss from app.zd00 where zd00.cdstco = 'YLP' AND zd00.cdregl = 'FR1')"
QUERY=APP.ZY00:"where socdos = '099' AND matcle IN (SELECT zy00.matcle FROM app.zy3b b3, app.zy00 WHERE zy00.nudoss = b3.nudoss AND zy00.socdos = '099' AND b3.idjb00 LIKE 'TOS%')"
3.3 Selected Partitions
- Export partitions by name with TABLES=
$ cat expdp_MT201112.par
flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
tables=app.YW00:MT201112
$ expdp system DUMPFILE=<BASE>_DP_DIR:expdp_MT201112.dmp LOGFILE=<BASE>_DP_DIR:expdp_MT201112.log parfile=expdp_MT201112.par parallel=2
/WORK_EXP/oracle/<BASE>/expdp_MT201112.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 14:31:27
- Import a schema and selected partitions from a schema dump
$ impdp system DUMPFILE=<BASE>_DP_DIR:INST1.APPLI.JOUR.dp.dmp LOGFILE=QUAUN_DP_DIR:impdp_INST1.APPLI.JOUR.dp.log PARALLEL=2 tables=hrt.comptp:P1_2013,hrt.comptj:P1_2013,hrt.pt:P1_2013
Import: Release 10.2.0.5.0 - 64bit Production on Tuesday, 26 March, 2013 16:11:14
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TABLE_01": system/******** DUMPFILE=<BASE>_DP_DIR:INST1.APPLI.JOUR.dp.dmp LOGFILE=QUAUN_DP_DIR:impdp_INST1.APPLI.JOUR.dp.log PARALLEL=2 tables=hrt.comptp:P1_2013,hrt.comptj:P1_2013,hrt.pt:P1_2013
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "APPLI"."TABLE":"P1_2013" 253.1 MB 565463 rows
. . imported "APPLI"."COMPTP":"P1_2013" 300.8 MB 565463 rows
. . imported "APPLI"."PT":"P1_2013" 84.48 MB 792442 rows
3.4 Selected Rows/Partitions with QUERY= according to Business Rules
We want to exclude 100 GB out of 157 GB total, that is the closed periods before 2013 :
select table_name , sum(blocks*8192/1024/1024/1024) Sz_GB from dba_tab_partitions
where table_name in ('YW8K','YW8C','YWMD')
and partition_name not like '%2014%' and partition_name not like '%2013%'
and table_owner = 'APP' group by table_name order by 2 desc
The partition key is PERPAI :
select count(1) from APP.YW8K where PERPAI not like '%2014%' and PERPAI not like '%2013%'
The parfile to export only the partitions from 2013 on :
flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')"
PARALLEL=4
INCLUDE=TABLE:"LIKE 'YW%'"
QUERY="WHERE nudoss IN ( SELECT NUDOSS FROM YW00 WHERE to_date(substr(PERPAI,3,4),'YYYY') > to_date('2012','YYYY') )"
4] Structure or Data only
4.1 Structure CONTENT=METADATA_ONLY
$ expdp system schemas=SCHEMA DUMPFILE=<BASE>_DP_DIR:expdp_SCHEMA_no_rows.dmp LOGFILE=<BASE>_DP_DIR:expdp_SCHEMA_no_rows.log CONTENT=METADATA_ONLY parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\""
Warning : when importing, statistics get locked at the target database. It's better not to export them, then gather schema statistics once the import is done :
=> content=metadata_only exclude=statistics
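Collected as a parfile, that structure-only export with unlocked statistics would look like this (file and schema names are illustrative):

```
SCHEMAS=SCHEMA
DIRECTORY=<BASE>_DP_DIR
DUMPFILE=expdp_SCHEMA_no_rows.dmp
LOGFILE=expdp_SCHEMA_no_rows.log
CONTENT=METADATA_ONLY
EXCLUDE=STATISTICS
```

After importing, gather fresh statistics, for example with DBMS_STATS.GATHER_SCHEMA_STATS, instead of carrying locked ones over.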
Another way : export only the structure without any rows using the QUERY clause.
expdp system schemas=SCHEMA DUMPFILE=<BASE>_DP_DIR:expdp_ppr_SCHEMA_anonym.dmp LOGFILE=<BASE>_DP_DIR:expdp_ppr_SCHEMA_anonym.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" QUERY=SCHEMA.R_DOC:"\"WHERE 1=2\"" QUERY=SCHEMA.TD039:"\"WHERE 1=2\""
4.2 Import only Data from a Schema Dump
Disable FKs first ( see 5.2 ), then :
impdp system DIRECTORY=<BASE>_DP_DIR DUMPFILE=INST1.<SCH>.JOUR.dp.dmp LOGFILE=impdp_INST1.<SCH>.JOUR.dp.log PARALLEL=2 CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=TRUNCATE
Enable FKs afterwards.
4.3 Export Selected Sequences
$ expdp APP/APP DUMPFILE=DP_DIR_APP:expdp_sel_SCH_SEQ.dmp LOGFILE=DP_DIR_APP:expdp_sel_SCH_SEQ.log parallel=2 flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" INCLUDE=SEQUENCE:"\"IN('APP_COM_SEQ','APP_HAB_SEQ','APP_LIGNE_SEQ')\""
Export: Release 10.2.0.5.0 - 64bit Production on Monday, 11 May, 2015 15:16:49
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "APP"."SYS_EXPORT_SCHEMA_02": APP/******** DUMPFILE=DP_DIR_APP:expdp_sel_SCH_SEQ.dmp LOGFILE=DP_DIR_APP:expdp_sel_SCH_SEQ.log parallel=2 flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')" INCLUDE=SEQUENCE:"IN('APP_COM_SEQ','APP_HAB_SEQ','APP_LIGNE_SEQ')"
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Master table "APP"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
******************************************************************************
Dump file set for APP.SYS_EXPORT_SCHEMA_02 is:
/path/expdp_sel_SCH_SEQ.dmp
Job "APP"."SYS_EXPORT_SCHEMA_02" successfully completed at 15:16:55
4.4 Oracle Datapump Export Selected Views
$ time expdp system SCHEMAS=APP DIRECTORY=APP_DP_DIR DUMPFILE=APP_VIEWS.dmp LOGFILE=e_APP_VIEWS.log INCLUDE=VIEW:"\"IN('APP_V1','APP_V2','APP_V3')\""
4.5 Oracle Datapump Export Selected Roles
$ expdp system FULL=Y DIRECTORY=BAT_DP_DIR DUMPFILE=ROLES.dmp LOGFILE=e_ROLES.log INCLUDE=ROLE:"\"IN('ROLE_01','ROLE_02','ROLE_03')\""
5] Miscellaneous
5.1 Directory
CREATE OR REPLACE DIRECTORY <APP>_DP_DIR AS '/work/oracle/<base>/exp/<APP>';
5.2 Disable FKs
Disable the constraints before an import into existing tables; otherwise:
ORA-39120: Table "DSS"."ELEVE" can't be truncated, data will be skipped. Failing error is:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys
- Schema Level, Disable
select distinct 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||' ; '
from dba_constraints
where r_constraint_name in ( select constraint_name from dba_constraints )
and r_owner = '<OWNER>' and status = 'ENABLED'
order by 1 ;
Enable
select distinct 'alter table '||owner||'.'||table_name||' enable constraint '||constraint_name||' ; '
from dba_constraints
where r_constraint_name in ( select constraint_name from dba_constraints )
and r_owner = '<OWNER>' and status = 'DISABLED'
order by 1 ;
- Table Level, Disable
select distinct 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||' ; '
from dba_constraints
where r_constraint_name in ( select constraint_name from dba_constraints where owner = '<OWNER>'
and table_name in ('TYPERM','FREQMDS','PERMDS','DETPERM','DETRESS','ENTETEPERM','PERM_AGT','BUREAU','PREFECTURE','SITE') )
and r_owner = '<OWNER>' and status = 'ENABLED'
order by 1 ;
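The generator queries above can be wrapped so their output is spooled straight into a runnable script; a sketch where the owner name and the sqlplus calls are assumptions:

```shell
#!/bin/sh
# Build a sqlplus driver that spools the schema-level "disable" statements
# into disable_fks_<OWNER>.sql, ready to execute. OWNER is a placeholder.
OWNER=${1:-APP}
cat > gen_disable_fks.sql <<EOF
set pages 0 feed off
spool disable_fks_${OWNER}.sql
select distinct 'alter table '||owner||'.'||table_name||' disable constraint '||constraint_name||' ;'
from dba_constraints
where r_constraint_name in ( select constraint_name from dba_constraints )
and r_owner = '${OWNER}' and status = 'ENABLED'
order by 1 ;
spool off
exit
EOF
# Then (credentials are an assumption):
# sqlplus -s system @gen_disable_fks.sql
# sqlplus -s system @disable_fks_${OWNER}.sql
```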
5.3 Error ORA-01466 when exporting
ORA-31693: Table data object "APP"."TABLE01" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01466: unable to read data - table definition has changed
This can happen if a GRANT on the table is issued while Data Pump is exporting it.
5.4 Error ORA-01732 when importing
impdp system DIRECTORY=<BASE>_DP_DIR DUMPFILE=expdp_INST1_<SCH>_reference_JS.dmp LOGFILE=impdp_expdp_INST1_<SCH>_reference_JS.log PARALLEL=2 CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=TRUNCATE
ORA-31693: Table data object "APP03"."VM_ETUDE_TX" failed to load/unload and is being skipped due to error:
ORA-01732: data manipulation operation not legal on this view
The materialized views are not rebuilt; refresh them manually:
execute dbms_mview.refresh('SCHEMA.VM_ETUDE_TX','C',atomic_refresh=>FALSE) ;
5.5 Convert the vintage "imp" Import utility syntax to the new "Datapump" syntax
imp system fromuser=APP1303 touser=APP1303 file=W-APPPP-1_<BASE>_APP1303.DMP log=i_W-APPPP-1_<BASE>_APP1303.log
->
impdp system dumpfile=APP_DP_DIR:W-APPPP-1_<BASE>_APP1303.DMP logfile=APP_DP_DIR:i_W-APPPP-1_<BASE>_APP1303.log
5.6 Data masking
[11G]
Syntax
REMAP_DATA=[(schema.tablename.column_name:schema.pkg.function)]
impdp user/pass TABLES=SCHEMA_NAME.TABLE_NAME TABLE_EXISTS_ACTION=replace DUMPFILE=EXP.DPUMP DIRECTORY=IMPORT REMAP_TABLESPACE=(SOURCE_TABLESPACE:TARGET_TABLESPACE) REMAP_DATA=SCHEMA_NAME.TABLE_NAME.ADDRESS:SCHEMA_NAME.REMAP_UTILS.MASKVARCHAR
[10G] Datapump Data Masking is not available
- Masking Package example
CREATE OR REPLACE PACKAGE REMAP_UTILS AS
FUNCTION maskvarchar(ADDRESS VARCHAR2) return VARCHAR2;
FUNCTION maskintege1(ADDRESS VARCHAR2) return VARCHAR2;
FUNCTION maskintege2(ADDRESS VARCHAR2) return VARCHAR2;
END;
/
CREATE OR REPLACE PACKAGE BODY REMAP_UTILS AS
FUNCTION maskvarchar(ADDRESS VARCHAR2) return VARCHAR2
IS
v_string VARCHAR2(120 BYTE) :='';
BEGIN
v_string := dbms_random.string('A', 10);
RETURN v_string;
END;
FUNCTION maskintege1(ADDRESS VARCHAR2) return VARCHAR2
IS
v_string VARCHAR2(120 BYTE) :='';
BEGIN
--v_string := (ABS(MOD(dbms_random.random,10000000000)));
v_string := ceil(dbms_random.value(1000000000,9999999999));
RETURN v_string;
END;
FUNCTION maskintege2(ADDRESS VARCHAR2) return VARCHAR2
IS
v_string VARCHAR2(120 BYTE) :='';
BEGIN
v_string := ceil(dbms_random.value(100000,9999999));
RETURN v_string;
END;
END REMAP_UTILS ;
/
- Update the data using the package
update owner.patient set NOMPATIENT=REMAP_UTILS.maskvarchar(NOMPATIENT) ;
update owner.patient set NOMNAISSANCE=REMAP_UTILS.maskvarchar(NOMNAISSANCE) ;
update owner.patient set PRENOMPATIENT=REMAP_UTILS.maskvarchar(PRENOMPATIENT) ;
update owner.patient set AUTREPRENOMPATIENT=REMAP_UTILS.maskvarchar(AUTREPRENOMPATIENT) ;
update owner.patient set NOMVOIE=REMAP_UTILS.maskvarchar(NOMVOIE) ;
update owner.patient set COMPADRESSE1=REMAP_UTILS.maskvarchar(COMPADRESSE1) ;
update owner.patient set COMPADRESSE2=REMAP_UTILS.maskvarchar(COMPADRESSE2) ;
commit ;
5.7 Drop wrongly created objects
select * from dba_objects where to_char(created,'DD/MM/YYYY hh24:mi') = '12/05/2015 10:45' or to_char(created,'DD/MM/YYYY hh24:mi') = '12/05/2015 10:46' order by created desc ;
select 'drop '||object_type||' '||owner||'.'||object_name||' ; '
from dba_objects where to_char(created,'DD/MM/YYYY hh24:mi') = '12/05/2015 10:45' or to_char(created,'DD/MM/YYYY hh24:mi') = '12/05/2015 10:46' order by created desc ;
select owner, object_name, created, object_type from dba_objects
where owner = 'SYSTEM' and trunc(created) = to_date('08/02/2016','DD/MM/YYYY') order by created desc ;
select 'drop '||object_type||' '||object_name||' ; ' from dba_objects
where owner = 'SYSTEM' and trunc(created) = to_date('08/02/2016','DD/MM/YYYY') order by created desc ;
Note: trunc(created,'DAY') truncates to the first day of the week, not to the day; use trunc(created) to match a calendar day.
5.8 DBMS_FILE_TRANSFER
This 10G+ package can copy a file between two Oracle database machines [ SOURCE -> TARGET ].
Allowed file types:
- DBF
- a Datapump dumpfile compressed with the 11G COMPRESSION argument is OK
- but a gzipped dumpfile is KO: ORA-27046: file size is not a multiple of logical block size
CREATE DATABASE LINK <DB_TARGET> CONNECT TO system IDENTIFIED BY
<psw> USING '<DB_TARGET>';
BEGIN
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => '<DB_SOURCE>_DP_DIR',
source_file_name => '<DB_SOURCE>.OWBSYS.JOUR.dp.dmp.oz',
destination_directory_object => '<DB_TARGET>_DP_DIR',
destination_file_name => '<DB_SOURCE>.OWBSYS.JOUR.dp.dmp.oz',
destination_database => '<DB_TARGET>');
END;
/
/WORK_EXP/oracle/<DB_TARGET> > ls -l *.oz
-rw-r----- 1 oracle oraprod 175443968 Mar 20 16:05 <DB_SOURCE>.OWBSYS.JOUR.dp.dmp.oz
# CREATE DATABASE LINK BASE CONNECT TO system IDENTIFIED BY pswdr USING 'BASE';
# select count(1) from user_objects@BASE ;
# test
file_gen=expdp_TARG_DB_APP_130321_011517
print "BEGIN" > ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "DBMS_FILE_TRANSFER.PUT_FILE(" >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "source_directory_object => 'TARG_DB_DP_DIR'," >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "source_file_name => '${file_gen}.dmp'," >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "destination_directory_object => 'BASE_DP_DIR_2'," >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "destination_file_name => 'expdp_${BDc}_${i}_DBMS_FILE_TRANSFER.dmp'," >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "destination_database => 'BASE');" >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "END;" >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "/" >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
print "exit" >> ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
sqlplus -s system/${sys_psw}@${BDc} @${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
# add error code trap
if (( $? != 0 )); then
echo "error copying ${file_gen}.dmp to BASE" | mailx -s "${Subj} ${i} status: ${ret_char}" 'dba@comp_name.fr'
codret=1
fi
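The repeated print statements above can be condensed into a single here-document; a sketch keeping the same assumed variables (file_gen, BDc, i, usr_sql_dir) and directory names:

```shell
#!/bin/sh
# Generate the same DBMS_FILE_TRANSFER.sql with one here-document.
# All variable values below are the placeholders used in the notes.
usr_sql_dir=${usr_sql_dir:-.}
file_gen=${file_gen:-expdp_TARG_DB_APP_130321_011517}
BDc=${BDc:-BASE}
i=${i:-APP}
cat > ${usr_sql_dir}/DBMS_FILE_TRANSFER.sql <<EOF
BEGIN
DBMS_FILE_TRANSFER.PUT_FILE(
source_directory_object => 'TARG_DB_DP_DIR',
source_file_name => '${file_gen}.dmp',
destination_directory_object => 'BASE_DP_DIR_2',
destination_file_name => 'expdp_${BDc}_${i}_DBMS_FILE_TRANSFER.dmp',
destination_database => 'BASE');
END;
/
exit
EOF
# Run it as before:
# sqlplus -s system/\${sys_psw}@\${BDc} @\${usr_sql_dir}/DBMS_FILE_TRANSFER.sql
```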
5.9 Remove the Dumpfile from SQLPLUS
It is useful to remove the dumpfile from the source server once the copy is done.
$ sqlplus system@BASE
SQL> exec utl_file.fremove('BASE_DP_DIR', 'expdp_WDump.dmp');
PL/SQL procedure successfully completed.
5.10 Versions
When importing into a 10G database, we hit the error:
ORA-39142: incompatible version number 3.1 in dump file "/work/oracle/<base>/exp/APP01/CORPTEST.DMP"
Version number 3.1 means the dumpfile was exported from an 11G database: this is not supported.
Version      Written by      Can be imported into Target:
Data Pump    database with   10gR1      10gR2      11gR1      11gR2
Dumpfile Set compatibility   10.1.0.x   10.2.0.x   11.1.0.x   11.2.0.x
------------ --------------- ---------- ---------- ---------- ----------
0.1          10.1.x          supported  supported  supported  supported
1.1          10.2.x          no         supported  supported  supported
2.1          11.1.x          no         no         supported  supported
3.1          11.2.x          no         no         no         supported
Fortunately, the export can be run again against the 11G database with the VERSION parameter:
expdp system
... VERSION=10.2
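As a small helper, the internal version number reported by ORA-39142 can be decoded into the source database release, following the compatibility table above (the function name is ours):

```shell
#!/bin/sh
# Map the Data Pump dumpfile-set version (from ORA-39142) to the
# database release that wrote it, per the compatibility table.
dump_to_release() {
  case "$1" in
    0.1) echo 10.1 ;;
    1.1) echo 10.2 ;;
    2.1) echo 11.1 ;;
    3.1) echo 11.2 ;;
    *)   echo unknown ;;
  esac
}
dump_to_release 3.1
```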
5.11 Get DDL from DP Dump
Extract all the DDL commands from a Datapump dumpfile:
$ impdp system
directory=DP_DIR dumpfile=APP.dp.dmp logfile=import_APP_ddl.txt sqlfile=APP_DDL.sql
Only the USER-related DDL:
$ impdp system
directory=DP_DIR dumpfile=APP.dp.dmp logfile=import_APP_ddl.txt sqlfile=APP_DDL.sql
INCLUDE=USER,ROLE,GRANT,ROLE_GRANT
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/WITH_GRANT_OPTION/OBJECT_GRANT
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/CROSS_SCHEMA/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/WITH_GRANT_OPTION/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/CROSS_SCHEMA/OBJECT_GRANT
5.12 Datapump export prevents a Talend job from truncating a table, giving ORA-00054
Even though « expdp » is a read-only process, it locks objects during the export.
Run expdp, then observe the locks and try to truncate a locked table.
select b.username username, c.object_type object_type, c.owner owner, c.object_name object_name, b.sid sid, b.serial#, locked_mode
from v$locked_object a, v$session b, dba_objects c
where a.object_id = c.object_id and b.sid = a.session_id -- and a.locked_mode = '3'
-- and owner = 'AUC'
USERNAM OBJECT_TYPE OWNER OBJECT_NAME SID SERIAL# LOCKED_MODE
SYSTEM TABLE APPLI TABLE1 188 30965 2
$ sqlplus system
SQL*Plus: Release 11.2.0.3.0 Production on Fri Feb 10 11:23:15 2017
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> truncate table APPLI.TABLE1 ;
truncate table APPLI.TABLE1
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
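One workaround is to retry the truncate until the Data Pump lock is released; a minimal retry-loop sketch, where the actual sqlplus call is a placeholder (any command returning non-zero on ORA-00054 fits):

```shell
#!/bin/sh
# Retry a command up to $max times, sleeping between attempts.
# Intended for a statement that may hit ORA-00054 while expdp runs.
retry() {
  max=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$max" ] && return 1
    sleep 1
  done
}
# Usage (run_truncate is a hypothetical wrapper around sqlplus):
# retry 10 run_truncate "APPLI.TABLE1"
```

On 11g and later, an alternative is to make the TRUNCATE wait instead of failing immediately: ALTER SESSION SET ddl_lock_timeout = 60 ; before issuing it.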