In Oracle 10g, exp and imp have been redesigned as
Oracle Data Pump (although Oracle still ships and fully
supports exp and imp). If you’re used to exp and imp, the
Data Pump command-line programs (expdp and impdp) have a syntax that
will look very familiar.

Data Pump runs as a job inside the database,
rather than as a stand-alone client application. This means that
jobs are somewhat independent of the process that started the
export or import. One machine (say a scheduled job) could start the
export, while another machine (such as a DBA’s laptop) can check
the status of the job. Since the job is inside the database, if you
want to export to a file, the first thing that you must do is
create a database DIRECTORY object for the output directory, and
grant access to users who will be doing exports and imports:

create or replace directory dumpdir as 'c:\';
grant read, write on directory dumpdir to scott;
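
Before running an export, you can confirm that the directory object
exists and points where you expect by querying the standard
ALL_DIRECTORIES data dictionary view:

select directory_name, directory_path from all_directories;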

Once the directory is created and access is granted, you can export a
user’s objects with command arguments that are very similar to exp
and imp:

expdp scott/tiger directory=dumpdir

While the export job is running, you can press
[Ctrl]C (or the equivalent on your client) to “detach” from the
export job. The messages will stop coming to your client, but the job
is still running inside the database. Your client will be placed in an
interactive mode (with an Export> prompt). To check the job’s
progress, type status. If you run expdp attach=<jobname>, you can
reattach to a running job.
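
Interactive mode supports more than status. For example,
continue_client resumes the log output on your screen, and
exit_client detaches again while leaving the job running; stop_job
and start_job pause and resume the job itself. A short sketch of
reattaching (the job name scott2 here is illustrative):

expdp scott/tiger attach=scott2

Export> status
Export> continue_client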

Data Pump doesn’t necessarily have to write to
files. There are now options that allow you to export database
objects directly into a remote database over SQL*Net: you simply
specify a database link to the remote database with the network_link
option. This is something like a one-time database replication job.
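
For example, with the network_link parameter you can pull a remote
schema into a local dump file (this assumes a database link named
remote_db has already been created in the local database; the file
name is illustrative):

expdp scott/tiger directory=dumpdir dumpfile=remote_scott.dmp network_link=remote_db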

Data Pump is much faster than the old exp and
imp client commands. One new feature that really helps make it
faster is the parallel option. With parallel=4, Data Pump moves
data using four worker processes. For example, I ran the
following job, pressed [Ctrl]C, and queried the status of the
background jobs:

expdp scott/tiger directory=dumpdir
dumpfile=scott2.dmp parallel=4

Export: Release – Production on Friday, 31 December,
2004 14:54

Copyright (c) 2003, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release –
With the Partitioning, OLAP and Data Mining options
FLASHBACK automatically enabled to preserve database integrity
Starting "SCOTT"."SCOTT2":  scott/********
dumpfile=scott2.dmp parallel=4 job_name=scott2
Estimate in progress using BLOCKS method…

Export> status

  Operation: EXPORT
  Mode: SCHEMA
  Bytes Processed: 0
  Current Parallelism: 4
  Job Error Count: 0
  Dump File: C:\SCOTT2.DMP
    bytes written: 4,096

Worker 1 Status:

Worker 2 Status:

Worker 3 Status:

Worker 4 Status:

Not only does the Data Pump run inside the
database, but most of the command-line features are also exposed
from inside the database through a PL/SQL API, DBMS_DATAPUMP. For
example, you can start an export job with the following PL/SQL
code:

    handle  number;
    handle := dbms_datapump.open('EXPORT','SCHEMA');


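A slightly fuller sketch of the same idea: opening a schema-mode
export job, pointing it at the DUMPDIR directory created earlier,
restricting it to the SCOTT schema, and starting it (the file name
scott_api.dmp is illustrative):

declare
    handle number;
begin
    -- create a schema-mode export job
    handle := dbms_datapump.open('EXPORT','SCHEMA');
    -- write the dump file to the DUMPDIR directory object
    dbms_datapump.add_file(handle, 'scott_api.dmp', 'DUMPDIR');
    -- export only the SCOTT schema
    dbms_datapump.metadata_filter(handle, 'SCHEMA_EXPR', 'IN (''SCOTT'')');
    -- start the job and detach; it continues running inside the database
    dbms_datapump.start_job(handle);
    dbms_datapump.detach(handle);
end;
/
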
Check out Data Pump to learn about many of its
other great new features. For instance, Data Pump includes the
ability to rename datafiles, move objects to different tablespaces,
and select schema objects or schemas using wildcard patterns or
expressions. Data Pump can also act as an interface to external
tables (i.e., a table can be linked to data stored in a Data Pump
export file, much like the Oracle Loader external-table interface
available since Oracle 9i).
