Terminology

Term | Description
CB | Compressed Block
CF | Compression Factor = # blocks of the non-compressed table / # blocks of the compressed table
Data Block (Oracle) | Smallest unit of RDBMS I/O - a multiple of the OS block size
SCN | System Change Number (Oracle)
Symbol table | Structure in the block header that stores the duplicate values of the block
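As a rough illustration, the CF of a table can be estimated by comparing the block counts of a compressed and a non-compressed copy of the same data. A minimal sketch, assuming two hypothetical copies T_NOCOMP and T_COMP with up-to-date statistics:

-- Hypothetical example: CF = blocks(non-compressed table) / blocks(compressed table)
SELECT nc.blocks AS blocks_nocomp,
       c.blocks  AS blocks_comp,
       ROUND(nc.blocks / c.blocks, 2) AS compression_factor
FROM   dba_tables nc, dba_tables c
WHERE  nc.table_name = 'T_NOCOMP'
AND    c.table_name  = 'T_COMP';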
1/ Oracle TABLE Compression 9i 10g
1.1 Concepts
1.1.1 New Oracle 9i technique
Goal | Space savings; improved query performance
Technique | Eliminates duplicate values at the block level: a symbol table in the block header stores the duplicate values, and redundant information is replaced by links to a common reference in the symbol table. The algorithm is lossless and self-contained. Entire columns or sequences of columns are compressed. Transparent to the user/application.
Segment types | Tables, Partitions / Subpartitions, Materialized Views
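A minimal sketch of the DDL involved (MY_FACT is a hypothetical table; only direct-path loads actually compress the rows, as noted in 1.1.2):

-- Hypothetical example: create a compressed table and load it with a direct-path operation
CREATE TABLE my_fact_comp COMPRESS
AS SELECT * FROM my_fact;          -- CTAS is a direct-path load, so the new blocks are compressed

-- An existing table can also be flagged and rebuilt compressed:
ALTER TABLE my_fact MOVE COMPRESS;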
1.1.2 Best Practices
· Only direct-path operations are able to compress data:
  - Create Table As Select
  - Serial INSERT with the APPEND hint
  - Parallel INSERT
  - Direct Path SQL*Loader
  - ALTER TABLE ... MOVE
  - [10g] Online redefinition in parallel mode
· The best CF is achieved by maximizing the amount of redundant data per block (see the example after this table):

Compression Factor | Maximum benefits
Database | Bigger block size
Block | PCTFREE = 0
Data distribution | Sort to collocate similar data
Schema configuration | Data warehouse star schema (FACT table) compresses better than a normalized schema
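A minimal sketch of a direct-path load that combines these recommendations; sales, country_code and product_id are hypothetical names:

-- Hypothetical example: maximize redundancy per block before compressing
CREATE TABLE sales_comp
  PCTFREE 0                            -- no free space reserved, more rows per block
  COMPRESS
AS
SELECT /*+ PARALLEL(s, 4) */ *
FROM   sales s
ORDER  BY country_code, product_id;    -- sort to collocate duplicate values in the same blocks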
1.2 Pros and Cons
1. Gain
· Space savings
  - Reduces LIO / PIO
  - The buffer cache keeps more data
  - Speeds up backup & recovery
· Improved query performance, according to [Table Compression in Oracle 9i Release 2: A Performance Analysis]

Type | Operation | Elapsed | Comment
Query | FTS, Index | -50% | No expensive cost to uncompress data blocks. Indexes are more efficient because more data is stored per table block. Note: predicates against compressed columns are evaluated only once per block instead of once per row.
2. Pain
· Performance (see the insert example after this table)

Type | Operation | Elapsed | Comment
LOAD | Direct Path | +100% | Data is first loaded into an uncompressed block, then a compressed block is built and the CF is evaluated to decide whether the compressed block should be kept
DML | Delete on CB | Same | May cause block fragmentation, as subsequent inserts do not reuse existing empty compressed blocks after the delete
DML | Update on CB | +20% | Overhead to maintain the symbol table; the updated value is stored in one of two modes: compressed, if the updated value already exists in the symbol table; uncompressed, if not - which can cause row chaining if the updated value no longer fits in the compressed block
DML | Insert | Same | Always uncompressed
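A minimal sketch illustrating the last two rows, on a hypothetical table created with the COMPRESS attribute: a conventional insert stores the rows uncompressed, while a direct-path insert compresses the new blocks.

-- Hypothetical example
INSERT INTO sales_comp SELECT * FROM sales_staging;   -- conventional insert: rows stored uncompressed
COMMIT;
INSERT /*+ APPEND */ INTO sales_comp
SELECT * FROM sales_staging;                          -- direct-path insert: new blocks are compressed
COMMIT;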
· [9i] ALTER TABLE MOVE COMPRESS restrictions on composite-partitioned tables
Attempting to move a subpartition of a composite-partitioned table with the "compress" option fails with:
ORA-14160: this physical attribute may not be specified for a table subpartition
Trying to move at the partition level with the "compress" option fails with:
ORA-14020: this physical attribute may not be specified for a table partition
* Compression is a little tricky on a composite (e.g. range/list) partitioned table when the intention is to insert the data uncompressed and compress it later on.
* On a composite-partitioned table it is not possible to directly compress a partition or subpartition with an ALTER TABLE statement:
ALTER TABLE t MOVE (SUB)PARTITION x COMPRESS;   -- gives ORA-14160 / ORA-14257
* The workaround is to flag the whole table as compressed and move the subpartition without the COMPRESS keyword:
alter table t compress;
alter table t move subpartition x;   -- this subpartition will be compressed
* The bottom line is that on a table with composite partitioning you can control compression at the subpartition level (e.g. the current part of the table is not compressed, the history is compressed), but the COMPRESSION column in dba_tab_subpartitions says nothing about the actual compression status of the subpartition.
· [10g] SHRINK is not supported on a compressed table
· COMPRESS & Bitmap Indexes
Bitmap indexes should be marked UNUSABLE before compressing the table (see the sketch below).
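A minimal sketch of the sequence, with hypothetical table and index names; since the move also invalidates the index, it is rebuilt afterwards:

-- Hypothetical example
ALTER INDEX sales_region_bix UNUSABLE;   -- mark the bitmap index unusable first
ALTER TABLE sales MOVE COMPRESS;         -- compress the table with a direct-path move
ALTER INDEX sales_region_bix REBUILD;    -- rebuild the bitmap index afterwards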
· ADD or DROP COLUMN on a COMPRESSED table
You cannot add a column with a default value to a compressed table, or to a partitioned table containing any compressed partition, unless you first disable compression for the table or partition.
You cannot use the DROP clause to drop a column from a compressed table, or from a partitioned table containing any compressed partition, unless you first disable compression for the table or partition; only the SET UNUSED clause can be used (see the 10g table below).
9i | Result | Solution
Drop column | ORA-12996: cannot drop system-generated virtual column | Use the method to recreate the table uncompressed - see below
Add column | ORA-22856: cannot add columns to object tables | Use the method to recreate the table uncompressed - see below
10g | Result | Solution
Drop column | ORA-39726: unsupported add/drop column operation on compressed tables | alter table t1 set unused column c3 ; alter table t1 drop unused columns ;
Add column | OK | NA
Add column with default value | ORA-39726: unsupported add/drop column operation on compressed tables | Use the method to recreate the table uncompressed - see below (1)
1.3 Method
(1) Methods to recreate a table uncompressed (with application downtime):
  - CTAS
  - exp / imp
  - alter table move
  - [10g] DBMS_REDEFINITION
Example (see the sketch after this list):
  - CTAS to create the same table structure
  - Alter table to add/drop the column
  - Modify the DDL back to COMPRESS mode
  - Insert as Select (direct path)
  - Drop the old table
  - Rename the new table
  - Create the indexes on the new table
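A minimal sketch of that sequence, assuming a hypothetical compressed table T1(c1, c2, c3) from which column c3 must be dropped:

-- Hypothetical example: recreate the table without c3
CREATE TABLE t1_new NOCOMPRESS AS
  SELECT c1, c2 FROM t1 WHERE 1 = 0;   -- same structure minus c3, empty, uncompressed
ALTER TABLE t1_new COMPRESS;           -- set the COMPRESS attribute back before the load
INSERT /*+ APPEND */ INTO t1_new
  SELECT c1, c2 FROM t1;               -- direct-path load, so the new blocks are compressed
COMMIT;
DROP TABLE t1;
RENAME t1_new TO t1;
CREATE INDEX t1_ix1 ON t1 (c1);        -- recreate the indexes (plus grants etc. as needed)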
(2) READ ONLY PARTITIONS / SUBPARTITIONS
FACT table example: compress the subpartitions of closed accounting periods (determined from "v_per_rep"), skipping subpartitions that are already compressed (filtered on the row-bytes-per-block ratio).
a) ALTER TABLE safran_dm.FACT_ECR_SAFRAN COMPRESS ;
b) Execute the SQL generated by the following statement (PARALLEL degree = 16):
SELECT 'alter table safran_dm.fact_ecr_safran move subpartition '
       || subpartition_name
       || ';'
FROM   dba_tab_subpartitions
WHERE  blocks > 1000 AND blocks IS NOT NULL
AND    table_name = 'FACT_ECR_SAFRAN'
AND    table_owner = 'SAFRAN_DM'
AND    SUBSTR (subpartition_name, 1, 2) IN ('PM')
AND    SUBSTR (subpartition_name, LENGTH (subpartition_name) - 5, 6) <=
       (SELECT TO_CHAR (ADD_MONTHS (TO_DATE (id_periode_rep || '01', 'YYYYMMDD'), -1), 'YYYYMM')
        FROM safran_dm.v_per_rep
        WHERE date_cloture_safran IS NOT NULL)
AND    (num_rows * avg_row_len / blocks) < 8192   -- skip subpartitions already packing more than a block's worth of row data per block
ORDER  BY subpartition_name;
c) ALTER TABLE FACT_ECR_SAFRAN NOCOMPRESS ;
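A hedged way to check the effect is to compare the subpartition block counts before and after the moves (statistics must be regathered first):

-- Compressed subpartitions should show far fewer blocks and a higher row-bytes-per-block ratio
SELECT subpartition_name, blocks, num_rows,
       ROUND(num_rows * avg_row_len / NULLIF(blocks, 0)) AS row_bytes_per_block
FROM   dba_tab_subpartitions
WHERE  table_owner = 'SAFRAN_DM'
AND    table_name  = 'FACT_ECR_SAFRAN'
ORDER  BY subpartition_name;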
2/ INDEX Key Compression 9i 10g
Key compression
lets you compress portions of the primary key column values in an index or
index-organized table, which reduces the storage overhead of repeated values.
Generally, keys in
an index have two pieces, a grouping piece and a unique piece. If the key is not
defined to have a unique piece, Oracle provides one in the form of a rowid
appended to the grouping piece. Key compression is a method of breaking off the
grouping piece and storing it so it can be shared by multiple unique pieces.
Key compression
breaks the index key into a prefix entry (the grouping piece) and a suffix
entry (the unique piece). Compression is achieved by sharing the prefix entries
among the suffix entries in an index block. Only keys in the leaf blocks of a B-tree
index are compressed. In the branch blocks the key suffix can be truncated, but
the key is not compressed.
Key compression is
done within an index block but not across multiple index blocks. Suffix entries
form the compressed version of index rows. Each suffix entry references a
prefix entry, which is stored in the same index block as the suffix entry.
By default, the
prefix consists of all key columns excluding the last one. For example, in a
key made up of three columns (column1, column2, column3) the default prefix is
(column1, column2). For a list of values (1,2,3), (1,2,4), (1,2,7), (1,3,5),
(1,3,4), (1,4,4) the repeated occurrences of (1,2), (1,3) in the prefix are
compressed.
Alternatively, you
can specify the prefix length, which is the number of columns in the prefix.
For example, if you specify prefix length 1, then the prefix is column1 and the
suffix is (column2, column3). For the list of values (1,2,3), (1,2,4), (1,2,7),
(1,3,5), (1,3,4), (1,4,4) the repeated occurrences of 1 in the prefix are
compressed.
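A minimal DDL sketch of the two cases above, using a hypothetical table t and index t_ix:

-- Hypothetical example: controlling the prefix length of key compression
CREATE INDEX t_ix ON t (column1, column2, column3) COMPRESS;   -- default prefix: (column1, column2)
ALTER INDEX t_ix REBUILD COMPRESS 1;                           -- prefix length 1: prefix is column1 only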
The maximum prefix
length for a nonunique index is the number of key columns, and the maximum
prefix length for a unique index is the number of key columns minus one.
Prefix entries are
written to the index block only if the index block does not already contain a
prefix entry whose value is equal to the present prefix entry. Prefix entries
are available for sharing immediately after being written to the index block
and remain available until the last deleted referencing suffix entry is cleaned
out of the index block.
In some cases, however, key compression cannot be used.
For example, in a unique index with a single attribute key, key compression is
not possible, because even though there is a unique piece, there are no
grouping pieces to share.
analyze index <index_name> validate structure;
select opt_cmpr_count, opt_cmpr_pctsave from index_stats where name = '<index_name>';
Then rebuild the selected indexes with the recommended prefix length (opt_cmpr_count):
SQL> Alter index X2YW8K rebuild compress 2;
Index altered.
SQL> Alter index X3YW8K rebuild compress 3;
Index altered.
3/ Tests
[9i] Results
CTAS with NOLOGGING & COMPRESS options:

Operations | NOCOMPRESS | COMPRESS
CTAS | 28 min | 39 min
Create Index | 20 min | 5 min
Stats collection | 18 min | 11 min
Total | 66 min | 55 min
Table size | 10 GB | 6 GB
[10g] Results

Table YW8K | NOCOMPRESS | COMPRESS
Create Table PARALLEL 4 | 02m01 | 01m12
Table Size | 2428 MB | 280 MB
Create 3 Indexes | 3m28 | 2m04
Index Size (2 indexes COMPRESS) | 632 MB | 488 MB
Select (FTS all partitions) NOPARALLEL | 45 sec | 16 sec
Select (FTS all partitions) PARALLEL 4 | 19 sec | 04 sec
Create Table As Select NOCOMPRESS PARALLEL 4
Table created.
Elapsed: 00:02:01.48

Create Table As Select COMPRESS PARALLEL 4
Table created.
Elapsed: 00:01:12.03

-- Check COMPRESS mode
SELECT table_name, partition_name, compression
FROM   dba_tab_partitions
WHERE  table_owner = 'SYSTEM'
AND    table_name IN ('YW8K', 'YW8C', 'YW37')
ORDER  BY 1, 2;
4] Datapump Copy & Compress from 10G to 11G
Goal

DP Schemas | Source | Target
Version | 10.2.0.5, NOARCHIVELOG, Blocksize = 8 KB | 11.2.0.2, NOARCHIVELOG, Blocksize = 8 KB
Executable | expdp PARALLEL=2 | impdp PARALLEL=2
Log as | SYSTEM | SYSTEM

The idea is to migrate the data from 10G to 11G and to use the new 11G table compression feature: COMPRESS FOR OLTP (= FOR ALL OPERATIONS = DML + DWH). The target tablespace is created with the COMPRESS clause, and Datapump picks up that clause when creating the new tables.
http://www.oracle.com/technetwork/database/options/compression/faq-092157.html
4.1 SCHEMA LEVEL
- [10G] Source tablespace & schema volumes
select owner, segment_type, tablespace_name, ceil(sum(bytes/1024/1024)) Sz_MB
from dba_segments where owner = 'SCHEMA'
group by owner, segment_type, tablespace_name order by 4 desc ;

OWNER | SEGMENT_TYPE | TABLESPACE | SZ_MB 10G SOURCE
SCHEMA | TABLE | SCHEMA_DAT | 4254
SCHEMA | INDEX | SCHEMA_DAT | 3359
4.1.1 First, without COMPRESS
- [11G] Create the target tablespace
CREATE TABLESPACE SCHEMA_DAT DATAFILE '/data/<base>/schema_dat.dbf' SIZE 512M
AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED
LOGGING ONLINE PERMANENT EXTENT MANAGEMENT LOCAL AUTOALLOCATE BLOCKSIZE 8K SEGMENT SPACE MANAGEMENT AUTO FLASHBACK ON ;
- [11G] Import
$ time impdp system DUMPFILE=<base>_DP_DIR:<BASE>.SCHEMA.HEBDO.dp.dmp LOGFILE=<base>_DP_DIR:impdp_<BASE>.SCHEMA.HEBDO.dp.log PARALLEL=2
…
real 14m48.827s
- Results

SEGMENT_TYPE | SZ_MB 10G SOURCE | impdp duration | SZ_MB 11G TARGET | Gain MB
TABLE | 4254 | 14m48 | 4256 | -2
INDEX | 3359 | | 2728 | 631
The tables are not compressed, but the indexes are rebuilt by the import, hence the index gain.

select distinct compression, compress_for from dba_tables where owner = 'SCHEMA' order by 1 ;

COMPRESSION  COMPRESS_FOR
DISABLED

Note: "ALTER TABLE ... COMPRESS FOR OLTP" is a setting for NEW rows only; existing rows stay uncompressed (see the sketch below).

select 'alter table '||owner||'.'||table_name||' compress for oltp ; ' from dba_tables where owner = 'SCHEMA' order by 1 ;

alter table SCHEMA.ABSENCES compress for oltp ;
alter table SCHEMA.ACTEUR compress for oltp ;
alter table SCHEMA.ACTEUR_2 compress for oltp ;
…

COMPRESSION  COMPRESS_FOR
ENABLED      OLTP
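To compress rows that already exist, the segment itself has to be rebuilt (this is what section 5 automates; note that the MOVE leaves dependent indexes UNUSABLE, so they must then be rebuilt). A minimal sketch on one of the tables above:

-- Hypothetical example: rewrite the existing blocks in compressed form
ALTER TABLE SCHEMA.ABSENCES MOVE COMPRESS FOR OLTP;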
4.1.2 Now, using the COMPRESS option
CREATE TABLESPACE SCHEMA_DAT DATAFILE '/data/<base>/schema_dat.dbf' SIZE 512M
AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED
LOGGING ONLINE PERMANENT EXTENT MANAGEMENT LOCAL AUTOALLOCATE BLOCKSIZE 8K SEGMENT SPACE MANAGEMENT AUTO FLASHBACK ON
DEFAULT COMPRESS FOR OLTP ;

select tablespace_name, def_tab_compression, compress_for from dba_tablespaces ;

TABLESPACE_NAME | DEF_TAB_COMPRESSION | COMPRESS_FOR
SCHEMA_DAT | ENABLED | OLTP
Note: if necessary, how to switch several tablespaces to COMPRESS FOR OLTP:

select tablespace_name, contents, def_tab_compression, compress_for
from dba_tablespaces where compress_for is not null order by 1, 2 ;

select 'alter tablespace '||tablespace_name||' default compress for oltp ; '
from dba_tablespaces
where contents = 'PERMANENT'
and ( compress_for is null or compress_for = 'BASIC' )
and tablespace_name not in ('SYSTEM','SYSAUX','UNDOTBS1','USERS','OWNER_DAT','COMMUN_DAT','OWBSYS_DAT')
and tablespace_name not like 'IDI%' and tablespace_name not like 'IBO%' and tablespace_name not like 'IHT%'
order by 1 ;
$ time impdp system DUMPFILE=<base>_DP_DIR:<BASE>.SCHEMA.HEBDO.dp.dmp LOGFILE=<base>_DP_DIR:impdp_<BASE>.SCHEMA.HEBDO.dp.log PARALLEL=2 transform=SEGMENT_ATTRIBUTES:n:table
…
real 15m4.196s

SEGMENT_TYPE | SZ_MB 10G SOURCE | impdp duration | SZ_MB 11G TARGET | Gain MB
TABLE | 4254 | 15m04 | 1303 | 2951
INDEX | 3359 | | 2693 | 666

select distinct compression, compress_for from dba_tables where owner = 'SCHEMA' order by 1 ;

COMPRESSION  COMPRESS_FOR
ENABLED      OLTP
Conclusion
More than 50% compression is achieved on the tables!
4.2 DB LEVEL
We manage an exclusion list of schemas.
· Create the tablespaces
select distinct 'create tablespace '||tablespace_name||' DATAFILE ''/data/<base>/'||lower(tablespace_name)||'_dat'' SIZE 128M AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED LOGGING ONLINE PERMANENT EXTENT MANAGEMENT LOCAL AUTOALLOCATE BLOCKSIZE 8K SEGMENT SPACE MANAGEMENT AUTO FLASHBACK ON DEFAULT COMPRESS FOR OLTP ;'
from dba_segments
where owner not in ('APEX_040000','APEX_PUBLIC_USER','FLOWS_FILES','APPLI','APPLI_BO','APPLI_LNK',
'DIP','APPLI02','APPLI02_BO','IBO','IBO_PUB','IBO_RDA','OWN','USR','OIN','OWBRT_SYS',
'APPLI03_BO','APPLI04','APPLI04_BO') order by 1 ;
· Expdp FULL=Y
$ time expdp system FULL=y DIRECTORY=<BASE>_DP_DIR DUMPFILE=expdp_db_mig_11G.dmp PARALLEL=2 LOGFILE=expdp_db_mig_11G.log \
flashback_time="\"to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')\"" \
EXCLUDE=SCHEMA:\"in\(\'APEX_040000\',\'APEX_PUBLIC_USER\',\'CMENALDO\',\'FLOWS_FILES\',\'APPLI\',\'APPLI_BO\',\'APPLI_LNK\',\'DIP\',\'APPLI02\',\'APPLI02_BO\',\'IBO\',\'IBO_PUB\',\'IBO_RDA\',\'OWN\',\'USR\',\'OIN\',\'OWBRT_SYS\',\'APPLI03_BO\',\'APPLI04\',\'APPLI04_BO\'\)\"
Export: Release 10.2.0.5.0 - 64bit Production on Thursday, 30 August, 2012 15:08:50
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Password:
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** FULL=y DIRECTORY=<BASE>_DP_DIR DUMPFILE=expdp_db_mig_11G.dmp PARALLEL=2 LOGFILE=expdp_db_mig_11G.log flashback_time="to_timestamp(to_char(sysdate, 'YYYY-MM-DD HH24:MI:SS'), 'YYYY-MM-DD HH24:MI:SS')" EXCLUDE=SCHEMA:"in('APEX_040000','APEX_PUBLIC_USER','FLOWS_FILES','APPLI','APPLI_BO','APPLI_LNK','DIP','APPLI02','APPLI02_BO','IBO','IBO_PUB','IBO_RDA','OWN','USR','OIN','OWBRT_SYS','APPLI03_BO','APPLI04','APPLI04_BO')"
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 39.40 GB
…
Master table "SYSTEM"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_FULL_01 is:
/WORK_EXP/oracle/<BASE>/expdp_db_mig_11G.dmp
Job "SYSTEM"."SYS_EXPORT_FULL_01" successfully completed at 15:42:09
· Impdp without keeping the table storage clause
$ time impdp system DIRECTORY=<base>_DP_DIR DUMPFILE=expdp_db_mig_11G.dmp PARALLEL=4 LOGFILE=i_expdp_db_mig_11G.log transform=SEGMENT_ATTRIBUTES:n:table
…
$ grep -i ora- /WORK_EXP/oracle/<base>/i_expdp_db_mig_11G.log | egrep -v "already exists|compilation warnings|ORA-39151|ORA-39083|ORA-01119|ORA-27040" | more
· Results: overall DB size decrease = 23.6 GB out of 58 GB
select segment_type, Sz_MB from (
  select segment_type, ceil(sum(bytes/1024/1024)) Sz_MB
  from dba_segments
  where owner not in ('APEX_040000','APEX_PUBLIC_USER','FLOWS_FILES','APPLI','APPLI_BO','APPLI_LNK','DIP',
  'APPLI02','APPLI02_BO','IBO','IBO_PUB','IBO_RDA','OWN','USR','OIN','OWBRT_SYS',
  'APPLI03_BO','APPLI04','APPLI04_BO')
  group by segment_type order by 2 desc
) where Sz_MB > 100 ;

SEGMENT_TYPE | SZ_MB Source | SZ_MB Target
TABLE | 32982 | 12491
INDEX | 16665 | 13137
LOBSEGMENT | 9148 | 8991
TYPE2 UNDO | 574 | 647
CLUSTER | 144 |
SUM | 59513 | 35266
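A quick, hedged way to confirm which schemas' tables actually picked up the OLTP compression attribute after the import:

-- Count tables per compression setting after the import
SELECT owner, compression, compress_for, COUNT(*) AS nb_tables
FROM   dba_tables
WHERE  owner NOT IN ('SYS', 'SYSTEM')
GROUP  BY owner, compression, compress_for
ORDER  BY owner;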
5] 11G Quickly Compress existing tables to get free space
5.1 When the Tablespace is not compressed
[11G] Assuming the tablespace is not COMPRESSED, we will:
- turn compression ON at the tablespace level, for newly created tables
- turn compression ON at the object level, for blocks created in the future
- and MOVE the biggest tables to get free space back immediately
-- Another method to quickly get free space without IMPDP, just using MOVE on selected objects
select tablespace_name , sum(bytes/1024/1024/1024) Sz_GB from dba_data_files where tablespace_name = 'BI_DTM_DAT' group by tablespace_name -- 20,87 GB
select tablespace_name , sum(bytes/1024/1024/1024) Sz_GB from dba_free_space where tablespace_name = 'BI_DTM_DAT' group by tablespace_name -- 1,37 GB
select distinct compression , compress_for from dba_tables where owner = 'BI_DTM' order by 1 -- DISABLED & Null
-- 1/ compress at the tablespace level, for new tables
select tablespace_name, def_tab_compression , compress_for from dba_tablespaces where tablespace_name = 'BI_DTM_DAT' -- DISABLED & Null
alter tablespace BI_DTM_DAT default COMPRESS FOR OLTP ;
-- 2/ compress every table at the object level (affects newly created blocks only)
select 'alter table '||owner||'.'||table_name||' compress for oltp ; ' from dba_tables where owner = 'BI_DTM' order by 1
-- generated statements:
alter table BI_DTM.TABLE_1 compress for oltp ;
…
alter table BI_DTM.TABLE_2_2013_11 compress for oltp ;
-- 3/ move tables > 500 MB
select segment_name , sum(bytes/1024/1024) Sz_MB from dba_segments where owner = 'BI_DTM' group by segment_name order by 2 desc
SEGMENT_NAME          SZ_MB
TABLE_2_ARCH           3380
TABLE_2_2013           1777
TABLE_SRC              1776
TABLE_2_BRUT_ARCH      1496
TABLE_1                1152
TABLE_2_2013_11         648
TABLE_FACTS_2013_01     620
TABLE_FACTS_AUC         608
TABLE_FACTS_2013_03     608
TABLE_FACTS             585
select 'alter table '||owner||'.'||table_name||' move compress for oltp ; ' from dba_tables where owner = 'BI_DTM'
and table_name in
( select segment_name from ( select segment_name , sum(bytes/1024/1024) Sz_MB from dba_segments where owner = 'BI_DTM' group by segment_name ) where Sz_MB > 500 )
order by 1
-- generated statements:
alter table BI_DTM.TABLE_1 move compress for oltp ;
alter table BI_DTM.TABLE_FACTS move compress for oltp ;
alter table BI_DTM.TABLE_FACTS_AUC move compress for oltp ;
alter table BI_DTM.TABLE_FACTS_2013_01 move compress for oltp ;
alter table BI_DTM.TABLE_FACTS_2013_03 move compress for oltp ;
alter table BI_DTM.TABLE_SRC move compress for oltp ;
alter table BI_DTM.TABLE_2_ARCH move compress for oltp ;
alter table BI_DTM.TABLE_2_BRUT_ARCH move compress for oltp ;
alter table BI_DTM.TABLE_2_2013 move compress for oltp ;
alter table BI_DTM.TABLE_2_2013_11 move compress for oltp ;
select tablespace_name , sum(bytes/1024/1024/1024) Sz_GB from dba_free_space where tablespace_name = 'BI_DTM_DAT' group by tablespace_name -- 1,37 GB -> 11,3 GB
We got back about 10 GB of free space out of the 20 GB tablespace.
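Note that each MOVE above leaves the indexes of the moved table UNUSABLE, so a rebuild pass is needed (the same pattern as in section 5.3). A hedged sketch:

-- Generate the rebuild statements for the indexes invalidated by the moves
SELECT 'alter index '||owner||'.'||index_name||' rebuild ;'
FROM   dba_indexes
WHERE  table_owner = 'BI_DTM'
AND    status = 'UNUSABLE';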
5.2 When the Tablespace has the compression option: find which segments are not compressed in a compressed tablespace
select a.owner, a.table_name, a.compression,
       ceil(a.blocks * 8 / 1024) Sz_MB,   -- assumes an 8 KB block size
       b.compress_for
from dba_tables a , dba_tablespaces b
where a.tablespace_name = b.tablespace_name
and b.def_tab_compression = 'ENABLED' and b.compress_for = 'OLTP'
and a.compression = 'DISABLED'
order by 4 desc ;
5.3 Find the biggest tables and check whether compression is set at the table level
select a.owner, a.table_name, ceil(b.bytes/1024/1024) Sz_MB, a.compression, a.compress_for
from dba_tables a , dba_segments b
where a.table_name = b.segment_name and a.owner = b.owner order by 3 desc ;
Then set the option and compress the segment:
-- 6191 MB OWBSYS.OWB$WB_RT_AUDIT_STRUCT : COMPRESS_FOR = BASIC
alter table OWBSYS.OWB$WB_RT_AUDIT_STRUCT move compress for oltp ;
select 'alter index '||owner||'.'||index_name||' rebuild ; ' from dba_indexes where status != 'VALID' and owner not in ('SYS','SYSTEM') ;
alter index OWBSYS.WB_RT_AUDIT_STRUCT_W_RPOU rebuild ;
alter index OWBSYS.PK_RTT rebuild ;
alter index OWBSYS.WB_RT_AUDIT_STRUCT_RTD_IID_IND rebuild ;
alter index OWBSYS.WB_RT_AUDIT_STRUCT_PAR_IND rebuild ;
alter index OWBSYS.WB_RT_AUDIT_STRUCT_OBJ_IND rebuild ;
-- 1344 MB
The table was reduced from 6191 MB to 1344 MB.
6] Miscellaneous
Get a list, ordered by datafile size, of the datafiles belonging to tablespaces with def_tab_compression = 'ENABLED':
select file_name , ceil(bytes/1024/1024/1024) SZ_GB from (
  select a.tablespace_name, def_tab_compression, compress_for, ceil(sum(bytes/1024/1024/1024)) GB_SZ
  from dba_data_files a , dba_tablespaces b
  where a.tablespace_name = b.tablespace_name
  and def_tab_compression = 'ENABLED'
  group by a.tablespace_name, def_tab_compression, compress_for order by 4 desc
) A , dba_data_files B where A.tablespace_name = B.tablespace_name and GB_SZ > 20 ;

/data/dbname/HR_DAT.dbf    56
/data/dbname/HR1_DAT.dbf   41
/data/dbname/HR2_DAT.dbf   21