Data Pump COMPRESSION_ALGORITHM Option

_____________________________________________________________________________________________________________________

This parameter, introduced in Oracle Database 12c, specifies the algorithm Data Pump uses to compress dump file data. 

Values: COMPRESSION_ALGORITHM = {BASIC | LOW | MEDIUM | HIGH} 

BASIC:- 
  • Offers a good combination of compression ratio and speed 
  • Not CPU intensive 
  • This is the default, and it matches the compression method used in older versions of Data Pump. 

LOW:- 
  • Lowest compression ratio 
  • Least CPU intensive, so it is suitable for CPU-bound servers 

MEDIUM:- 
  • Provides a good combination of compression ratio and speed 
  • Similar to BASIC, but uses a different algorithm 

HIGH:- 
  • The most CPU-intensive option 
  • Offers the best compression ratio, producing the smallest dump file 
  • Useful when the dump file must be copied over a slow network, where network speed rather than CPU is the limiting factor 
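Note that COMPRESSION_ALGORITHM only compresses table data when COMPRESSION is set to ALL or DATA_ONLY (the default is METADATA_ONLY). A minimal sketch of a parameter file pairing the two options; the directory object, file names, and connect string are illustrative assumptions, not values from this post:

```shell
# Sketch: build a Data Pump parameter file that pairs COMPRESSION=ALL
# with COMPRESSION_ALGORITHM=MEDIUM. Names below (dp_dir, comp_medium.*)
# are assumptions for illustration.
cat > comp_medium.par <<'EOF'
directory=dp_dir
dumpfile=comp_medium.dmp
logfile=comp_medium.log
full=y
compression=all
compression_algorithm=medium
EOF

# The export would then be run as (connect string is an assumption):
#   expdp system@orclpdb parfile=comp_medium.par
cat comp_medium.par
```

Keeping the options in a parameter file avoids shell quoting issues and makes the two test runs easy to repeat with only the algorithm line changed.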

The following examples compare the dump file sizes produced by different algorithms.
expdp directory=dp_dir dumpfile=comp_low.dmp logfile=comp_low.log full=y compression_algorithm=low job_name=compress_alg

expdp directory=dp_dir dumpfile=comp_high.dmp logfile=comp_high.log full=y compression_algorithm=high job_name=compress_alg

You can see the file-size difference between the dumps produced with the two compression algorithms.

$ ls -tlr *dmp
-rw-r-----. 1 oracle oracle 78696448 Sep 21 05:30 comp_high.dmp
-rw-r-----. 1 oracle oracle 78712832 Sep 21 05:39 comp_low.dmp
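The two files above differ only marginally; a quick check of the gap, using the byte counts from the `ls` listing:

```shell
# Byte counts taken from the ls output above.
low=78712832    # comp_low.dmp
high=78696448   # comp_high.dmp
echo "difference: $(( low - high )) bytes"   # prints: difference: 16384 bytes
```

A gap of only 16 KB (about 0.02%) suggests the table data itself was not compressed in these runs; since COMPRESSION defaults to METADATA_ONLY, adding compression=all to the commands above would show a much larger spread between LOW and HIGH.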

_____________________________________________________________________________________________________________________


acehints.com Copyright 2011-20 All Rights Reserved