
Review Questions

12. To recover a data file from the SYSTEM or UNDO tablespace, the instance must be in which
database state?
A. NOMOUNT
B. OPEN
C. ABORT
D. MOUNT
13. The STATUS column of the dynamic performance view V$LOGFILE contains what value if
one of the redo log file group members has been lost because of a media failure?
A. INVALID
B. STALE
C. DELETED
D. The column contains a NULL value.
14. Place the following events or actions leading up to and during instance recovery in the correct order.
1. The database is opened and available.
2. Oracle uses undo segments in the undo tablespace to roll back uncommitted transactions.
3. The DBA issues the STARTUP command at the SQL*Plus prompt.
4. Oracle applies the information in the online redo log files to the data files.
A. 4, 3, 2, 1
B. 3, 4, 1, 2
C. 2, 1, 3, 4
D. 2, 1, 4, 3
E. 3, 2, 4, 1
F. 3, 4, 2, 1
15. You noticed that when your instance crashes, it takes a long time to start up the database.
Which advisor can be used to tune this situation?
A. The Undo Advisor
B. The SQL Tuning Advisor
C. The Database Tuning Advisor
D. The MTTR Advisor
E. The Instance Tuning Advisor
16. If a data file is missing when the instance is started, where is the error message recorded?
A. Only in the alert log.
B. All missing files are returned directly to the administrator in the SQL*Plus session.
C. The first missing file is returned directly to the administrator in the SQL*Plus session,
and the rest of the missing files are identified in V$RECOVER_FILE.
D. Only in the alert log and in the DBWR background-process trace files.
17. In ARCHIVELOG mode, the loss of a data file for any tablespace other than the SYSTEM or
UNDO tablespace affects which objects in the database?
A. The loss affects only objects whose extents reside in the lost data file.
B. The loss affects only the objects in the affected tablespace, and work can continue in
other tablespaces.
C. The loss will not abort the instance but will prevent other transactions in any
tablespace other than SYSTEM or UNDO until the affected tablespace is recovered.

D. The loss affects only those users whose default tablespace contains the lost or damaged
data file.
18. Which dynamic performance view shows the data files either needing media recovery or
missing at instance startup?
A. V$RECOVER_FILE
B. V$DATAFILE
C. V$TABLESPACE
D. V$RECOVERY_FILE_DEST
E. V$RECOVERY_FILE_STATUS
19. A fire breaks out in the server room near the routers, and the operations manager cuts off
power to all servers, including the database servers. Before the fire is put out, the disk drive
containing the SYSTEM tablespace and both network cards on the Oracle Database 11g
server are destroyed. The user SCOTT was about to create a new table, but the connection
was dropped after the power was disconnected from the server. This scenario is primarily
an example of what kind of failure?
A. Network
B. Instance
C. Statement
D. Media
E. User error
F. User process
20. Which of the following conditions prevents the instance from progressing through the
NOMOUNT, MOUNT, and OPEN states?
A. One of the redo log file groups is missing a member.
B. The instance was previously shut down uncleanly with SHUTDOWN ABORT.
C. Either the spfile or init.ora file is missing.
D. One of the five multiplexed control files is damaged.
E. The USERS tablespace is offline, with one of its data files deleted.
Answers to Review Questions
1. D. The distance (in bytes) between the checkpoint position in a redo log group and the end
of the current redo log group can never be more than 90 percent of the size of the smallest
redo log group.
2. C. The failure of one statement is considered a statement failure, and one way to solve the problem is to enable resumable-space allocation. When resumable-space allocation is enabled, Oracle generates an alert and places the session in a suspended state.
3. C. The parameter FAST_START_MTTR_TARGET specifies the desired time, in seconds, to recover a single instance from a crash or instance failure. The parameters LOG_CHECKPOINT_TIMEOUT and FAST_START_IO_TARGET can still be used in Oracle 11g but should be used only together with an advanced-tuning scenario or for compatibility with older versions of Oracle. MTTR_TARGET_ADVICE and FAST_START_TARGET_MTTR are not valid initialization parameters.
4. D. The PMON process periodically polls server processes to make sure their sessions are
still connected.
5. C. A DBA’s disconnection of a session is an intentional process termination, not a failure.
If a user’s PC reboots, the user does not get a chance to log off, and the session is cleaned
up by PMON; similarly, disconnecting from the application or SQL*Plus before logging out
is considered a user-process failure. A network problem can prematurely disconnect a user
session, causing a user-process failure. In all cases, PMON performs the session cleanup,
whether the disconnection was intentional or not.
6. A, C. In addition to configuring a backup listener process and installing multiple network
cards, you can implement connect-time failover and a backup network connection to reduce
the possibility of network failures.
7. B. The instance must be shut down, if it is not already down, to repair or replace the missing
or damaged control file.
8. B, C. Media failure, physical corruption, logical corruption, and missing data files can all be identified by the Data Recovery Advisor, which also provides recommendations for repair.
9. B, E. If a tablespace is taken offline because a data file is missing, the instance can still be
started as long as the missing data file does not belong to the SYSTEM or UNDO tablespace.
10. A. If a network card fails, the failure type is network; the actual media containing the
database files are not affected.
11. B. The Data Recovery Advisor in Oracle 11g Release 1 does not support RAC databases.
It is integrated with EM Database Control and with RMAN. CHANGE FAILURE and other
commands can be executed using RMAN. The ADVISE FAILURE command must be run
before you can perform REPAIR FAILURE.
12. D. Non-system-critical tablespaces (those other than SYSTEM or UNDO) can be recovered with the database in the OPEN state, but the database must be in the MOUNT state to recover either the SYSTEM or UNDO tablespace.
13. A. If the redo log file group member has been lost because of a media failure or inadvertent deletion, the STATUS column is set to INVALID when an attempt is made to write redo information to that member.
14. B. Instance recovery, also known as crash recovery, occurs when the DBA attempts to open
the database but the files were not synchronized to the same SCN when the database was
shut down. Once the DBA issues the STARTUP command, Oracle uses information in the redo
log files to restore the data files (including the undo tablespace’s data files) to the state before
the instance failure. Oracle then uses undo data in the undo tablespace after the database
has been opened and made available to users to roll back uncommitted transactions.
15. D. The MTTR Advisor can tell the DBA the most effective value for the FAST_START_MTTR_TARGET parameter. This parameter specifies the maximum time required, in seconds, to perform instance recovery.
16. C. In addition to reporting the first missing file to the administrator and listing all the missing files in the dynamic performance view V$RECOVER_FILE, the missing data files are noted in the DBWR background-process trace files.
17. B. The loss of one or more of a tablespace’s data files does not prevent other users from
doing their work in other tablespaces. Recovering the affected data files can continue while
the database is still online and available.
18. A. The dynamic performance view V$RECOVER_FILE contains a list of the data files that
either need media recovery or are missing when the instance is started.
19. B. The primary failure in this scenario is instance. Subsequently, a network failure will occur when connections are attempted through the burned-out router. However, no connections are possible until the network card in the server is replaced; the instance cannot start because of a media failure on the disk containing the SYSTEM tablespace.
20. D. All copies of the control files as defined in the spfile or the init.ora file must be identical and available. If one of the redo log file groups is missing a member, a warning is recorded in the alert log, but instance startup still proceeds. If the instance was previously shut down with SHUTDOWN ABORT, instance recovery automatically occurs during startup. Only an spfile or an init.ora file is needed to enter the NOMOUNT state, not both. If a tablespace is offline, the status of its data files is not checked until an attempt is made to bring it online; therefore, it will not prevent instance startup.
Chapter 17
Moving Data and Using EM Tools
ORACLE DATABASE 11g: ADMINISTRATION I EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Moving Data
- Describe and use methods to move data (Directory objects, SQL*Loader, External Tables)
- Explain the general architecture of Oracle Data Pump
- Use Data Pump Export and Import to move data between Oracle databases

Intelligent Infrastructure Enhancements
- Use the Enterprise Manager Support Workbench
- Managing Patches
As a DBA, you are often required to move data between
databases, extract data, or load data received from external
sources. Oracle 11g provides tools to move data. You can
use these tools to back up data from a table or a schema before making changes for quick
recovery. Oracle Data Pump is a high-performance data-movement tool that you can use to
unload and load data between Oracle databases, and you can use the SQL*Loader tool to
load data received from external sources such as flat files.
In this chapter you will also learn about contacting Oracle Support through Enterprise
Manager Support Workbench. EM Support Workbench is new in Oracle 11g and can be
used to examine a database problem and contact Oracle Support for a resolution. EM can
also alert you when database patches are ready. You will learn to use EM to stage and
apply a patch.
Understanding Data Pump
The Data Pump facility is a high-speed mechanism for transferring data or metadata from one database to another or from operating-system files. Data Pump employs direct path unloading and direct path loading technologies. Unlike the older export and import programs (exp and imp), which operated on the client side of a database session, the Data Pump facility runs on the server. Thus, you must use a database directory to specify dump-file and log-file locations.

You can use Data Pump to copy data from one schema to another between two databases or within a single database. You can also use it to extract a logical copy of the entire database, a list of schemas, a list of tables, or a list of tablespaces to portable operating-system files. Data Pump can also transfer or extract the metadata (DDL statements) for a database, schema, or table.
You can call Data Pump from the command-line programs expdp and impdp, through the DBMS_DATAPUMP PL/SQL package, or you can invoke it from EM.

Data Pump export extracts data and metadata from your database, and Data Pump import loads this extracted data into the same database or into a different database, optionally transforming metadata along the way. These transformations let you, for example, copy tables from one schema to another or remap a tablespace from one database to another.
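As an illustration of such a transformation, the sketch below uses the impdp REMAP_SCHEMA and REMAP_TABLESPACE parameters; the directory, dump-file, schema, and tablespace names are hypothetical:

$ impdp system/password DIRECTORY=dump_dir DUMPFILE=scott.dmp \
    REMAP_SCHEMA=scott:scott_dev REMAP_TABLESPACE=users:dev_data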
These are some of the key features of Data Pump:
- Fine-grained object selection using the INCLUDE and EXCLUDE options
- An option to specify a lower-compatibility version so only supported object types are exported
- The ability to perform export and import using parallel processes
- The ability to detach from and attach to a job from the client session, allowing the DBA to close the export/import session and yet administer the jobs
- An option to change target table names, tablespace names, and schema names
- An option to compress metadata or data or both during export
- A tablespace metadata export to support the transportable tablespace feature of the database
- An option to append data to an existing table or to truncate and load data into an existing table
- The automatic use of direct path export whenever possible
- The ability to copy data from one database to another over the network
- The ability to specify a sample percentage to unload only a subset of data
- The ability to monitor job progress; job status can be queried from the database or using EM
- An option to restart or terminate failed export and import jobs
Architecture of Data Pump
In Oracle 11g Data Pump, the database does all the work. This is a major deviation from
the architecture of export/import utilities, which previously ran as clients and did the major
part of the work. The dump files for export/import were stored at the client, whereas the
Data Pump files are stored at the server. Figure 17.1 shows the Data Pump architecture.
Data Pump Components
Data Pump consists of the following components:

Data Pump API   DBMS_DATAPUMP is the PL/SQL API for Data Pump and is its engine. Data Pump jobs are created and monitored using this API (a minimal sketch follows this list).

Metadata API   The DBMS_METADATA API provides the database object definitions to the Data Pump processes.

Client tools   The Data Pump client tools expdp and impdp use the procedures provided by the DBMS_DATAPUMP package. These tools make calls to the Data Pump API to initiate and monitor Data Pump operations.

Data-movement APIs   Data Pump uses the Direct Path API (DPAPI) to move data. Certain circumstances do not allow the use of DPAPI; in those cases, the Oracle external table with the ORACLE_DATAPUMP access-driver API is used.
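The following is a minimal PL/SQL sketch of driving Data Pump through DBMS_DATAPUMP directly; the job name, dump-file name, and directory object (DUMPLOCATION) are hypothetical, and error handling is omitted:

DECLARE
  h NUMBER;
BEGIN
  -- Open a schema-mode export job; the handle identifies the job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'SCOTT_API_EXP');
  -- Write the dump file to an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'scott_api.dmp',
                         directory => 'DUMPLOCATION');
  -- Limit the job to the SCOTT schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h,
                                name   => 'SCHEMA_EXPR',
                                value  => 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.DETACH(h);  -- the job continues running on the server
END;
/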
FIGURE 17.1 Data Pump architecture (the figure shows the expdp and impdp clients, plus other clients such as Enterprise Manager and SQL*Plus, calling the DBMS_DATAPUMP data- and metadata-movement engine inside the database; the engine uses the DBMS_METADATA API and moves data through either the Direct Path API or the external table ORACLE_DATAPUMP API)
Data Pump Processes

Oracle Data Pump jobs, once started, are performed by various processes on the database server. The following are the processes involved in a Data Pump operation:

Client process   This process is initiated by the client utility (expdp, impdp, or another client) to make calls to the Data Pump API. Since Data Pump is completely integrated into the database, once the Data Pump job is initiated, this process is not necessary for the progress of the job.

Shadow process   When a client logs in to the Oracle database, a foreground process is created (a standard feature of Oracle). This shadow process services the client's Data Pump API requests. It creates the master table and the Advanced Queuing (AQ) queues used for communication. Once the client process ends, the shadow process goes away too.

Master control process (MCP)   The master control process controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the Data Pump job into various metadata and data load or unload tasks and hands them over to the worker processes. The MCP has a process name of the format <ORACLE_SID>_DMnn_<PROCESS_ID>. It maintains the job state, job description, restart information, and file information in the master table.
Worker processes   The MCP creates the worker processes based on the value of the PARALLEL parameter. The workers perform the tasks requested by the MCP, mainly loading or unloading data and metadata. The worker processes have names of the format <ORACLE_SID>_DWnn_<PROCESS_ID>. The worker processes maintain their current status in the master table, which can be used to restart a failed job.

Parallel query (PQ) processes   The worker processes can initiate parallel-query processes if an external table is used as the data-access method for loading or unloading. These are standard parallel-query slaves of the parallel-execution architecture.
Oracle Data Pump cannot be used to load data into a database from data exported using the exp utility.
Let's consider the example of an export Data Pump operation and see all the activities and processes involved. Say user A invokes the expdp client, which initiates the shadow process. The client calls the DBMS_DATAPUMP.OPEN procedure to establish the kind of export to be performed. The OPEN call starts the MCP process and creates two AQ queues.

The first queue is the status queue, used to send the status of the job, which includes logging information and errors. Clients interested in the status of the job can query this queue. This is strictly a unidirectional queue: the MCP posts the information to the queue, and the clients consume it. The second queue is the command-and-control queue, which is used to control the worker processes established by the MCP and to perform API commands and file requests. This is a bidirectional queue where the MCP listens and writes. The commands are sent to this queue by the DBMS_DATAPUMP methods or by using the parameters of the expdp client.

Once all the components (parameters and filters) of the job are defined, the client (expdp) invokes DBMS_DATAPUMP.START_JOB. Based on the number of parallel processes requested, the MCP starts the worker processes. The MCP directs one of the worker processes to do the metadata extraction using the DBMS_METADATA API.

During the operation, a master table is maintained in the schema of the user who initiated the Data Pump export. The master table has the same name as the Data Pump job and maintains one row per object with status information. In the event of a failure, Data Pump uses the information in this table to restart the job. The master table is the heart of every Data Pump operation; it maintains all the information about the job, and Data Pump uses it to restart a failed or suspended job. The master table is dropped (by default) when the Data Pump job finishes successfully.

The master table is written to the dump-file set as the last step of the export dump operation and is removed from the user's schema. For an import dump operation, the master table is loaded from the dump-file set into the user's schema as the first step and is used to sequence the objects being imported.

While the export job is under way, the original client who invoked the export job can detach from the job without aborting it. This is especially useful when performing long-running data export jobs. Users can attach to the job at any time using the DBMS_DATAPUMP methods and query the status or change the parallelism of the job.
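For example (the job name below is the default one shown later in this chapter; the exact name on your system may differ), you could press Ctrl+C in the running expdp session, issue EXIT_CLIENT at the interactive prompt to detach, check the job from another session, and later reattach:

SQL> SELECT owner_name, job_name, state FROM dba_datapump_jobs;

$ expdp scott/tiger ATTACH=SYS_EXPORT_SCHEMA_01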

Since the master table is created in the Data Pump user's schema as a table, the job fails if the schema already contains a table with the same name as the Data Pump job. The user must also have appropriate privileges and tablespace quota to create the table.
Data Access Methods
Data Pump chooses the most appropriate data-access method. Two methods are supported: direct path access and external table access. Direct path export has been supported since Oracle 7.3. External tables were introduced in Oracle9i, and support for writing to external tables has been available since Oracle 10g. Data Pump provides an external-tables access driver (ORACLE_DATAPUMP) that can be used to read and write files. The file format is the same as that of the direct path method; hence, it is possible to load data with one method that was unloaded with the other. Data Pump uses the Direct Path API whenever possible. The following are the exceptions, in which the external tables method is used:

- A table has fine-grained access control enabled for insert or select operations.
- A domain index exists for a LOB column.
- A global index on a multipartition table exists during a single-partition load.
- A clustered table is involved, or the table has an active trigger during import.
- A table contains BFILE columns.
- A referential integrity constraint is present during import.
- A table contains a VARRAY column with an embedded opaque type.
- Very large tables and partitions are being loaded or unloaded, where the PARALLEL SQL clause can be used to advantage.
- Tables are partitioned differently at load time and unload time.
Using Data Pump Clients
Oracle 11g comes with the expdp utility to invoke Data Pump for export and with impdp for import. The Data Pump export utility (expdp) unloads data and metadata to a set of OS files called dump files. The Data Pump import utility (impdp) loads data and metadata stored in an export dump file to a target database. expdp and impdp accept parameters that are then passed to the DBMS_DATAPUMP program. The command-line executable name for Data Pump export is expdp and for Data Pump import is impdp on Windows as well as Unix platforms. To invoke expdp or impdp, a user needs a directory object where the dump files will be stored and must have appropriate privileges to perform Data Pump export/import. In the next section, I will discuss how to set up the export dump location.
Setting Up the Dump Location
Since Data Pump is server-based, directory objects must be created in the database where the Data Pump files will be stored. Directory objects are named directory locations on the database server, representing a physical location on the server's file system. Directories are used with several database features, including BFILEs, external tables, utl_file, SQL*Loader, and Data Pump.

The directory object contains the location of a specific operating-system directory. By using a named directory object, you do not have to hard-code the directory path in programs, and you get file-management flexibility.

Under Unix, you create directories with the CREATE DIRECTORY statement, like this:

CREATE DIRECTORY dump_dir AS '/oracle/data_pump/dumps';
CREATE DIRECTORY log_dir AS '/oracle/data_pump/logs';

Under Windows, you create directories like this:

CREATE DIRECTORY dpump_dir AS 'G:\datadumps';
Directories are not schema objects, like tables or synonyms, because they are not owned by a schema. Instead, directories are like profiles or roles in that they are owned by the database. To control access to a directory, you need to grant the READ or WRITE object privilege on that directory, like this:

GRANT read, write ON DIRECTORY dump_dir TO PUBLIC;
To create directories, you must have the CREATE ANY DIRECTORY system privilege. By default, only the users SYSTEM and SYS have this privilege. Be careful in granting this system privilege to users, because the database employs the operating-system credentials of the database-instance owner.

Directory objects are owned by the SYS user; thus, directory names must be unique across the database.
The user executing Data Pump must have been granted permissions on the directory. READ permission is required to import, and WRITE permission is required to export and to create log files or SQL files.

Note that the oracle user (who owns the software installation and database files) must have read and write OS privileges on the directory. The user SCOTT, for example, need not have any OS privileges on the directory for Data Pump to succeed.
A default directory can be created for Data Pump operations in the database. Privileged users (those with the EXP_FULL_DATABASE or IMP_FULL_DATABASE role) need not specify a directory object name when performing a Data Pump operation. The name of the default directory must be DATA_PUMP_DIR. Also, the privileged users need not have explicit READ or WRITE permission on DATA_PUMP_DIR.
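To verify which directory objects exist and where they point, you can query the data dictionary; for example:

SQL> SELECT directory_name, directory_path FROM dba_directories;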
Using EM Database Control, you can create and edit directory objects. On the Database Control Schema page, click Directory Objects under Database Objects. Figure 17.2 shows the Directory Objects screen that appears.

FIGURE 17.2 Directory Objects screen of EM

Click the Edit button to change the physical directory. You can also use the Delete button to delete an existing directory and the Create button to create a new directory.
Data Pump can write three types of files to the OS directory defined in the database. Remember that absolute paths are not supported; Data Pump can write only to a directory defined by a directory database object. The file types are as follows:

Dump files   These contain data and metadata information.

Log files   These record the standard output to a file and contain job progress and status information.

SQL files   Data Pump import can extract the metadata information from a dump file into a SQL file, which can be used to create database objects without using the Data Pump import utility.
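A hedged example of generating such a SQL file with the impdp SQLFILE parameter (the file and directory names follow the examples later in this chapter and are only illustrative):

$ impdp scott/tiger DIRECTORY=dumplocation DUMPFILE=expdat.dmp SQLFILE=scott_ddl.sql

This writes the DDL contained in the dump file to scott_ddl.sql without loading any data.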
You can specify the location of the files to the Data Pump clients using three methods (given in order of precedence):

- Prefix the filename with the directory name, separated by a colon; for example, DUMPFILE=dumplocation:myfile.dmp.
- Use the DIRECTORY parameter.
- Define the DATA_PUMP_DIR default directory in the database for privileged users.

The export and import done using the expdp and impdp tools can operate in different modes based on the requirement. The next section discusses these modes.
Specifying Export and Import Modes
Export and import using the Data Pump clients can be performed in five different modes to
unload or load different portions of the database. When performing the dump-file import,
specifying the mode is optional; when no mode is specified, the entire dump file is loaded
with the mode automatically set to the one used for export.
Table 17.1 describes the export and import modes.
TABLE 17.1 Export and Import Modes in Data Pump

Database mode   Performed by specifying the FULL=Y parameter. Export: the export user requires the EXP_FULL_DATABASE role. Import: the import user requires the IMP_FULL_DATABASE role.

Tablespace mode   Performed by specifying the TABLESPACES parameter. Export: data and metadata for only those objects contained in the specified tablespaces are unloaded; the export user requires the EXP_FULL_DATABASE role. Import: all objects contained in the specified tablespaces are loaded; the import user requires the IMP_FULL_DATABASE role. The source dump file can be exported in database, tablespace, schema, or table mode.

Schema mode   Performed by specifying the SCHEMAS parameter; this is the default mode. Export: only objects belonging to the specified schemas are unloaded; the EXP_FULL_DATABASE role is required to specify a list of schemas. Import: all objects belonging to the specified schemas are loaded; the source can be a database-mode or schema-mode export; the IMP_FULL_DATABASE role is required to specify a list of schemas.

Table mode   Performed by specifying the TABLES parameter. Export: only the specified tables, their partitions, and their dependent objects are unloaded; the export user must have the SELECT privilege on the tables. Import: only the specified tables, their partitions, and their dependent objects are loaded; the IMP_FULL_DATABASE role is required to specify tables belonging to a different user.

Transport-tablespace mode   Performed by specifying the TRANSPORT_TABLESPACES parameter. Export: only metadata for the tables and their dependent objects within the specified set of tablespaces is unloaded; use this mode to transport tablespaces from one database to another. Import: metadata from a transport-tablespace export is loaded.
In a database-mode export, the entire database is exported to operating-system files, including user accounts, public synonyms, roles, and profiles. In a schema-mode export, all data and metadata for a list of schemas is exported. At the most granular level is the table-mode export, which includes the data and metadata for a list of tables. A tablespace-mode export extracts both data and metadata for all objects in a tablespace list as well as any objects dependent on those in the specified tablespace list. Therefore, if a table resides in your specified tablespace list, all its indexes are included whether or not they also reside in the specified tablespace list. In each of these modes, you can further specify that only data or only metadata be exported. The default is to export both data and metadata.

With some objects, such as indexes, only the metadata is exported; the actual internal structures contain physical addresses and are always rebuilt on import.

The files created by a Data Pump export are called dump files, and one or more of these files can be created during a single Data Pump export job. Multiple files are created if your Data Pump job has a parallel degree greater than 1 or if a single dump file exceeds the FILESIZE parameter. All the export dump files from a single Data Pump export job are called a dump-file set.
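As an illustration only (the user, directory, and file names are hypothetical), a full-database export that produces a multi-file dump set might look like this:

$ expdp system/password FULL=y DIRECTORY=dumplocation DUMPFILE=full_%U.dmp \
    FILESIZE=2G PARALLEL=4 LOGFILE=full_exp.log

Because PARALLEL=4 and FILESIZE=2G are specified, several dump files are created, and together they form the dump-file set.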
Using expdp
You use the expdp utility to perform Data Pump exports. Any user can export objects or a complete schema owned by the user without any additional privileges. Nonprivileged users must have WRITE permission on the directory object and must specify the DIRECTORY parameter or specify the directory object name along with the dump filename.

Here is an example of an export performed by the user SCOTT. Since SCOTT is not a privileged user, he must specify the DIRECTORY object name.
$ expdp scott/tiger
Export: Release 11.1.0.6.0 - Production on Saturday, 15 November, 2008
13:50:05
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 -
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39145: directory object parameter must be specified and non-null
Let's create a directory for user SCOTT and grant read and write privileges on this directory:

SQL> CREATE DIRECTORY dumplocation AS '/u02/dpump';

Directory created.
SQL> GRANT READ, WRITE on DIRECTORY dumplocation TO scott;
Grant succeeded.
Now, let’s try the export specifying the directory:
$ expdp scott/tiger directory=dumplocation
Export: Release 11.1.0.6.0 - Production on Saturday, 15 November, 2008
16:04:22
Copyright (c) 2003, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 -
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** directory=dumplocation
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/POST_SCHEMA/PROCACT_SCHEMA
. . exported "SCOTT"."DEPT"          5.914 KB    4 rows
. . exported "SCOTT"."EMP"           8.570 KB   14 rows
. . exported "SCOTT"."SALGRADE"      5.867 KB    5 rows
. . exported "SCOTT"."BONUS"             0 KB    0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
  /u02/dpump/expdat.dmp
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 16:04:55
$
Since you did not specify any other parameters, expdp used default values for the filenames (expdat.dmp and export.log), performed a schema-level export (of the login schema), calculated the job estimate using the blocks method, used a default job name (SYS_EXPORT_SCHEMA_01), and exported both data and metadata.
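To override those defaults, you could name the dump file, log file, and job explicitly; the following is only an illustrative variation of the same schema export (the file and job names are made up):

$ expdp scott/tiger DIRECTORY=dumplocation DUMPFILE=scott_schema.dmp \
    LOGFILE=scott_schema.log JOB_NAME=scott_schema_exp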

Data Pump Export Parameters
You can use various parameters while invoking expdp. You can obtain a list of parameters by specifying expdp help=y:
$ expdp help=y
Export: Release 11.1.0.6.0 - Production on Saturday, 15 November, 2008
16:54:49
Copyright (c) 2003, 2007, Oracle. All rights reserved.
The Data Pump export utility provides a mechanism for transferring data
objects
between Oracle databases. The utility is invoked with the following command:
Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
You can control how Export runs by entering the ‘expdp’ command followed
by various parameters. To specify parameters, you use keywords:
Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir
SCHEMAS=scott
or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
USERID must be the first parameter on the command line.
Keyword Description (Default)

------------------------------------------------------------------------------
ATTACH Attach to existing job, e.g. ATTACH [=job name].
COMPRESSION Reduce size of dumpfile contents where valid keyword
values are: ALL, (METADATA_ONLY), DATA_ONLY and NONE.
CONTENT Specifies data to unload where the valid keyword
values are: (ALL), DATA_ONLY, and METADATA_ONLY.
DATA_OPTIONS Data layer flags where the only valid value is:
XML_CLOBS-write XML datatype in CLOB format
DIRECTORY Directory object to be used for dumpfiles and logfiles.
DUMPFILE List of destination dump files (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION Encrypt part or all of the dump file where valid keyword
values are: ALL, DATA_ONLY, METADATA_ONLY,
ENCRYPTED_COLUMNS_ONLY, or NONE.
ENCRYPTION_ALGORITHM Specify how encryption should be done where valid
keyword values are: (AES128), AES192, and AES256.
ENCRYPTION_MODE Method of generating encryption key where valid keyword
values are: DUAL, PASSWORD, and (TRANSPARENT).
ENCRYPTION_PASSWORD Password key for creating encrypted column data.
ESTIMATE Calculate job estimates where the valid keyword
values are: (BLOCKS) and STATISTICS.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to set session snapshot back to.

FLASHBACK_TIME Time used to get the SCN closest to the specified time.
FULL Export entire database (N).
HELP Display Help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job to create.
LOGFILE Log file name (export.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to export a subset of a table.
REMAP_DATA Specify a data conversion function,
e.g. REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
REUSE_DUMPFILES Overwrite destination dump file if it exists (N).
SAMPLE Percentage of data to be exported;
SCHEMAS List of schemas to export (login schema).
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
TABLES Identifies a list of tables to export - one schema only.
TABLESPACES Identifies a list of tablespaces to export.
TRANSPORTABLE Specify whether transportable method can be used where
valid keyword values are: ALWAYS, (NEVER).
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be
unloaded.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.
The following commands are valid while in interactive mode.
Note: abbreviations are allowed
Command Description

------------------------------------------------------------------------------
ADD_FILE Add dumpfile to dumpfile set.
CONTINUE_CLIENT Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT Quit client session and leave job running.
FILESIZE Default filesize (bytes) for subsequent ADD_FILE commands.
HELP Summarize interactive commands.
KILL_JOB Detach and delete job.
PARALLEL Change the number of active workers for current job.
PARALLEL=<number of workers>.
REUSE_DUMPFILES Overwrite destination dump file if it exists (N).
START_JOB Start/resume current job.
STATUS Frequency (secs) job status is to be monitored where
the default (0) will show new status when available.
STATUS[=interval]
STOP_JOB Orderly shutdown of job execution and exits the client.
STOP_JOB=IMMEDIATE performs an immediate shutdown of the
Data Pump job.
$
FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive parameters.
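For instance, to take an export consistent as of a known SCN, you might capture the current SCN and pass it to FLASHBACK_SCN (the SCN value below is obviously made up):

SQL> SELECT current_scn FROM v$database;

$ expdp scott/tiger DIRECTORY=dumplocation DUMPFILE=scott_scn.dmp FLASHBACK_SCN=5483201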
The DUMPFILE parameter can specify more than one file. The filenames can be comma-separated, or you can use the %U substitution variable. If you specify %U in the DUMPFILE filename, the number of files initially created is based on the value of the PARALLEL parameter. Preexisting files that match the names of the files being generated are not overwritten; an error is flagged. To forcefully overwrite the files, use the REUSE_DUMPFILES=Y parameter. The FILESIZE parameter determines the size of each file. Table 17.2 shows some examples.
You can specify all the parameters in a file and specify the filename with the PARFILE parameter. The only exception is the PARFILE parameter itself inside the parameter file; recursive PARFILE is not supported.
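A short, hypothetical parameter file (exp_scott.par) and the corresponding invocation might look like this:

DIRECTORY=dumplocation
DUMPFILE=scott_tables.dmp
LOGFILE=scott_tables.log
TABLES=emp,dept

$ expdp scott/tiger PARFILE=exp_scott.par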
The SAMPLE parameter is useful for unloading only a subset of data from the source table. Specify the percentage of rows that need to be unloaded using this parameter. The SAMPLE parameter is not valid for network exports.
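For example, to unload roughly 10 percent of the rows of SCOTT's EMP table (an illustrative command, not part of the chapter's sample run):

$ expdp scott/tiger DIRECTORY=dumplocation DUMPFILE=emp_sample.dmp TABLES=emp SAMPLE=10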
In the next section, I will discuss the impdp utility, which performs the import from a dump file created using expdp.
TABLE 17.2 Data Pump DUMPFILE Examples

Parameter examples: DUMPFILE=exp%U.dmp FILESIZE=200M
File characteristics: Initially the exp01.dmp file is created; once that file reaches 200MB, the next file is created.

Parameter examples: DUMPFILE=exp%U_%U.dmp PARALLEL=3
File characteristics: Initially three files are created: exp01_01.dmp, exp02_02.dmp, and exp03_03.dmp. Notice that every occurrence of the substitution variable is incremented each time. Since there is no FILESIZE, no more files are created.

Parameter examples: DUMPFILE=DMPDIR1:exp%U.dmp, DMPDIR2:exp%U.dmp FILESIZE=100M
File characteristics: This method is especially useful if you do not have enough space in one directory to perform the complete export job. The dump files are stored in the directories defined by DMPDIR1 and DMPDIR2.
Using impdp
The Data Pump import program impdp is the utility that can read and apply the dump file created by the expdp utility. The directory permissions and privileges for using impdp are similar to those for expdp.

impdp has several modes of operation, including full, schema, table, and tablespace. In the full mode, the entire content of an export file set is loaded. In a schema-mode import, all content for a list of schemas in the specified file set is loaded. The specified file set for a schema-mode import can be from either a database-mode or a schema-mode export. With a table-mode import, only the specified tables and their dependent objects are loaded from the export file set. With a tablespace-mode import, all objects in the export file set that were in the specified tablespace list are loaded.
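As a simple illustration (the directory and file names follow the earlier examples and are otherwise hypothetical), a schema-mode import of the dump file created above could be run as:

$ impdp system/password DIRECTORY=dumplocation DUMPFILE=expdat.dmp SCHEMAS=scott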
With all these modes, the source can be a live database instead of a set of export files.
Table 17.3 shows the supported mapping of export mode to import mode.
TABLE 17.3 Export to Import Modes

Import mode: Full
Source export mode: Database, Schema, Table, Tablespace, or a live database

Import mode: Schema
Source export mode: Database, Schema, or a live database

Import mode: Table
Source export mode: Database, Schema, Table, Tablespace, or a live database

Import mode: Tablespace
Source export mode: Database, Schema, Table, Tablespace, or a live database
The IMP_FULL_DATABASE role is required if the source is a live database or the export session required the EXP_FULL_DATABASE role.
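When the source is a live database, the import reads directly over a database link via the NETWORK_LINK parameter and no dump file is involved; a hedged example (the link name prod_link is hypothetical) follows:

$ impdp system/password NETWORK_LINK=prod_link SCHEMAS=scott DIRECTORY=dumplocation LOGFILE=net_imp.log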
Data Pump Import Parameters
You can use various parameters while invoking impdp. You can obtain a list of parameters by specifying impdp help=y:
$ impdp help=y
Import: Release 11.1.0.6.0 - Production on Saturday, 15 November, 2008
21:13:53
Copyright (c) 2003, 2007, Oracle. All rights reserved.

The Data Pump Import utility provides a mechanism for transferring data
objects
between Oracle databases. The utility is invoked with the following command:
Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
You can control how Import runs by entering the ‘impdp’ command followed
by various parameters. To specify parameters, you use keywords:
Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp
USERID must be the first parameter on the command line.