
OCA/OCP Oracle Database 11g All-in-One Exam Guide
826
10. Tidy up with Database Control: connect as SYSTEM.
11. From the database home page take the Server tab, then the Windows
link in the Oracle Scheduler section. Select the DAYTIME window, and
click DELETE.
Two-Minute Drill
Create a Job, Program, and Schedule
• A job can specify what to do and when to do it, or it can point to a program
and/or a schedule.
• A job (or its program) can be an anonymous PL/SQL block, a stored procedure,
or an external operating system command or script.
• Either Database Control or the DBMS_SCHEDULER API can be used to
manage the Scheduler environment.
Use a Time-Based or Event-Based Schedule
for Executing Scheduler Jobs
• A time-based schedule has start and end dates, and a repeat interval.
• The repeat interval can be a date expression, or a calendaring expression
consisting of a frequency, an interval, and possibly several specifiers.
• An event-based schedule uses an agent to query an Advanced Queue, and
launches jobs depending on the content of the queued messages.
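As an illustration of the calendaring syntax, a time-based schedule might be created as follows. This is a sketch: the schedule name and the repeat interval are invented for the example, not taken from the chapter.

```sql
-- Hypothetical example: create a schedule that fires daily at 02:00.
-- The name s_nightly and the interval are illustrative only.
BEGIN
  DBMS_SCHEDULER.CREATE_SCHEDULE(
    schedule_name   => 's_nightly',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'freq=daily;byhour=2;byminute=0;bysecond=0',
    comments        => 'Runs every day at 02:00');
END;
/
```

The repeat interval combines a frequency (freq=daily) with several specifiers (byhour, byminute, bysecond), as described in the drill point above.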
Create Lightweight Jobs
• A lightweight job has less overhead in the data dictionary than a regular job,
and therefore large numbers of them can be created much faster than an
equivalent number of regular jobs.
• Lightweight jobs do not have the full range of attributes that regular jobs
have.
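A lightweight job could be created as sketched below. Note the assumptions: lightweight jobs must reference an existing enabled program (they cannot use an inline PL/SQL block), and the object names here are invented for the example.

```sql
-- Hypothetical example: a lightweight job pointing to program p1.
-- Lightweight jobs are requested with the JOB_STYLE argument.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'lw_job1',
    program_name    => 'p1',
    repeat_interval => 'freq=minutely;interval=30',
    job_style       => 'LIGHTWEIGHT',
    enabled         => TRUE);
END;
/
```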
Use Job Chains to Perform a Series of Related Tasks
• A chain object consists of a number of steps.
• Each step can launch a program.
• Simple logic (such as the success or failure of a previous step) can control the
flow of execution through a job chain with branching steps.
Chapter 22: The Scheduler
Create Windows and Job Classes
• A window is a defined period during which certain jobs may run; it opens
according to an embedded schedule or a preexisting schedule object.
• A window can activate a Resource Manager plan.
• Only one window can be open at once.
• If windows overlap, which window opens is determined by their HIGH or LOW
priority.
• A job class associates the jobs in the class with a Resource Manager consumer
group.
Use Advanced Scheduler Concepts to Prioritize Jobs
• Jobs can be prioritized at two levels: the Resource Manager will allocate
resources via consumer groups to all the jobs in a class, and the class will
prioritize the jobs within it according to the job priority set by the Scheduler.
• Scheduler priority ranges from level 1 (highest) to level 5 (lowest).
Self Test
1. When a job is due to run, what process will run it? (Choose the best answer.)
A. A CJQn process
B. A Jnnn process
C. A server process
D. A background process
2. Which of the following is a requirement if the Scheduler is to work? (Choose
the best answer.)
A. The instance parameter JOB_QUEUE_PROCESSES must be set.
B. A Resource Manager plan must be enabled.
C. A schedule must have been created.
D. All of the above.
E. None of the above.
3. A Scheduler job can be of several types. Choose all that apply:
A. An anonymous PL/SQL block
B. An executable operating system file
C. A PL/SQL stored procedure
D. A Java stored procedure
E. An operating system command
F. An operating system shell script (Unix) or batch file (Windows)
4. You create a job with this syntax:
exec dbms_scheduler.create_job(-
job_name=>'j1',-
program_name=>'p1',-
schedule_name=>'s1',-
job_class=>'c1');
and find that it is not running when expected. What might be a reason for
this? (Choose the best answer.)
A. The schedule is associated with a window, which has not opened.
B. The job has not been enabled.
C. The class is part of a Resource Manager consumer group with low priority.
D. The permissions on the job are not correct.
5. What are the possible priority levels of a job within a class? (Choose the best
answer.)
A. 1 to 5
B. 1 to 999
C. HIGH or LOW
D. It depends on the Resource Manager plan in effect
6. You want a job to run every 30 minutes. Which of the following possibilities for
the REPEAT_INTERVAL argument are syntactically correct and will achieve this
result? (Choose two answers.)
A. 'freq=minutely;interval=30'
B. 'freq=hourly;interval=1/2'
C. '0 00:30:00'
D. 'freq=minutely;byminute=30'
E. 'freq=byminute;interval=30'
7. You create a job class, and you set the LOGGING_LEVEL argument to
LOGGING_RUNS. What will be the result? (Choose the best answer.)
A. There will be a log entry for each run of each job in the class, but no
information on whether the job was successful.
B. There will be a log entry for each run of each job in the class, and
information on whether the job was successful.
C. There will be a single log entry for the class whenever it is run.
D. You cannot set logging per class, only per job.
8. Which of the following statements (if any) are correct regarding how
Scheduler components can be used together? (Choose all that apply.)
A. A schedule can be used by many jobs.
B. A job can use many programs.
C. A class can have many programs.
D. Job priorities can be set within a class.
E. Consumer groups control priorities within a class.
F. A Resource Manager plan can be activated by a schedule.
9. Which view will tell you about jobs configured with the Scheduler? (Choose
the best answer.)
A. DBA_JOBS
B. DBA_SCHEDULER
C. DBA_SCHEDULED_JOBS
D. DBA_SCHEDULER_JOBS
10. If two windows are overlapping and have equal priority, which window(s)
will be open? (Choose the best answer.)
A. Both windows will be open.
B. Windows cannot overlap.
C. Whichever window opened first will remain open; the other will remain
closed.
D. Whichever window opened first will be closed, and the other will open.
E. It will depend on which window has the longest to run.
Self Test Answers
1. ☑ B. Jobs are run by job queue processes.
☒ A, C, and D. The job queue coordinator does not run jobs; it assigns them
to job queue processes. These are not classed as background processes, and
they are not server processes.
2. ☑ E. The Scheduler is available, by default, with no preconfiguration steps
needed.
☒ A, B, C, and D. A is wrong because (in release 11g) the JOB_QUEUE_PROCESSES
instance parameter defaults to 1000; therefore, it does not need to be set.
B and C are wrong because the Resource Manager is not required, and neither
is a schedule.
3. ☑ A, B, C, D, E, and F. The JOB_TYPE can be PLSQL_BLOCK, or STORED_PROCEDURE
(which can be PL/SQL or Java), or EXECUTABLE (which includes executable
files, OS commands, or shell scripts).
☒ All the answers are correct.
4. ☑ B. The job will, by default, not be enabled and therefore cannot run.
☒ A, C, and D. A is wrong because the job is not controlled by a window,
but by a schedule. C is wrong because while the Resource Manager can control
job priority, it would not in most circumstances block a job completely. D is
wrong because while permissions might cause a job to fail, they would not
stop it from running.
5. ☑ A. Job priorities are 1 (highest) to 5 (lowest).
☒ B, C, and D. B is wrong because it is the incorrect range. C is the choice
for window priority, not job priority. D is wrong because the Resource Manager
controls priorities between classes, not within them.
6. ☑ A and B. Both will provide a half-hour repeat interval.
☒ C, D, and E. C is the syntax for a window’s duration, not a repeat interval.
D would run only at minute 30 of each hour, not every 30 minutes, and E is
syntactically wrong: BYMINUTE is a specifier, not a frequency.
7. ☑ B. With logging set to LOGGING_RUNS, you will get records of each run
of each job, including the success or failure.
☒ A, C, and D. A is wrong because LOGGING_RUNS will include the success
or failure. C and D are wrong because even though logging is set at the class
level, it is applied at the job level. Note that logging can also be set at the job level.
8. ☑ A and D. Many jobs can be controlled by one schedule, and Scheduler
priorities are applied to jobs within classes.
☒ B, C, E, and F. B and C both misunderstand the many-to-one relationships
of Scheduler objects. E is wrong because consumer groups control priorities
between classes, not within them. F is wrong because plans are activated by
windows, not schedules.
9. ☑ D. The DBA_SCHEDULER_JOBS view externalizes the data dictionary
jobs table, with one row per scheduled job.
☒ A, B, and C. A is wrong because DBA_JOBS describes the jobs scheduled
through the old DBMS_JOB system. B and C refer to views that do not exist.
10. ☑ E. Other things being equal, the window with the longest to run will open
(or remain open).
☒ A, B, C, and D. Only one window can be open at once, and windows
can overlap. The algorithms that manage overlapping windows are not well
documented, but neither C nor D is definitively correct.

CHAPTER 23
Moving and Reorganizing Data
Exam Objectives
In this chapter you will learn to
• 052.17.1 Describe and Use Methods to Move Data
(Directory Objects, SQL*Loader, External Tables)
• 052.17.2 Explain the General Architecture of Oracle Data Pump
• 052.17.3 Use Data Pump Export and Import to Move Data Between
Oracle Databases
• 053.16.2 Describe the Concepts of Transportable Tablespaces and Databases
• 053.16.1 Manage Resumable Space Allocation
• 053.16.3 Reclaim Wasted Space from Tables and Indexes by Using the Segment
Shrink Functionality
There are many situations where bulk transfers of data into a database or between
databases are necessary. Common cases include populating a data warehouse with
data extracted from transaction processing systems, or copying data from live systems
to test or development environments. As entering data with standard INSERT statements
is not always the best way to do large-scale operations, the Oracle database comes
with facilities designed for bulk operations. These are SQL*Loader and Data Pump.
There is also the option of reading data without ever actually inserting it into the
database; this is accomplished through the use of external tables.
Data loading operations, as well as DML, may fail because of space problems. This
can be an appalling waste of time. The resumable space allocation mechanism can
provide a way to ameliorate the effect of space problems. There are also techniques to
reclaim space that is inappropriately assigned to objects and make it available for reuse.
SQL*Loader
In many cases you will be faced with a need to do a bulk upload of datasets generated

from some third-party system. This is the purpose of SQL*Loader. The input files may
be generated by anything, but as long as the layout conforms to something that
SQL*Loader can understand, it will upload the data successfully. Your task as DBA
is to configure a SQL*Loader controlfile that can interpret the contents of the input
datafiles; SQL*Loader will then insert the data.
Architecturally, SQL*Loader is a user process like any other: it connects to the
database via a server process. To insert rows, it can use two techniques: conventional
or direct path. A conventional insert uses absolutely ordinary INSERT statements. The
SQL*Loader user process constructs an INSERT statement with bind variables in the
VALUES clause and then reads the source datafile to execute the INSERT once for each
row to be inserted. This method uses the database buffer cache and generates undo
and redo data: these are INSERT statements like any others, and normal commit
processing makes them permanent.
The direct path load bypasses the database buffer cache. SQL*Loader reads the
source datafile and sends its contents to the server process. The server process then
assembles blocks of table data in its PGA and writes them directly to the datafiles. The
write is above the high water mark of the table and is known as a data save. The high
water mark is a marker in the table segment above which no data has ever been
written: the space above the high water mark is space allocated to the table that has
not yet been used. Once the load is complete, SQL*Loader shifts the high water mark
up to include the newly written blocks, and the rows within them are then immediately
visible to other users. This is the equivalent of a COMMIT. No undo is generated, and
if you wish, you can switch off the generation of redo as well. For these reasons, direct
path loading is extremely fast, and furthermore it should not impact on your end
users, because interaction with the SGA is kept to a minimum.
Direct path loads are very fast, but they do have drawbacks:
• Referential integrity constraints must be dropped or disabled for the duration
of the operation.
• INSERT triggers do not fire.
• The table will be locked against DML from other sessions.
• It is not possible to use direct path for clustered tables.
These limitations are a result of the lack of interaction with the SGA while the
load is in progress.
EXAM TIP Only UNIQUE, PRIMARY KEY, and NOT NULL constraints are
enforced during a direct path load; INSERT triggers do not fire; the table is
locked for DML.
SQL*Loader uses a number of files. The input datafiles are the source data that it will
upload into the database. The controlfile is a text file with directives telling SQL*Loader
how to interpret the contents of the input files, and what to do with the rows it extracts
from them. Log files summarize the success (or otherwise) of the job, with detail of any
errors. Rows extracted from the input files may be rejected by SQL*Loader (perhaps
because they do not conform to the format expected by the controlfile) or by the
database (for instance, insertion might violate an integrity constraint); in either case
they are written out to a bad file. If rows are successfully extracted from the input but
rejected because they did not match some record selection criterion, they are written
out to a discard file.
The controlfile is a text file instructing SQL*Loader on how to process the input
datafiles. It is possible to include the actual data to be loaded in the controlfile, but you
would not normally do this; usually, you will create one controlfile and reuse it, on a
regular basis, with different input datafiles. The variety of input formats that SQL*Loader
can understand is limited only by your ingenuity in constructing a controlfile.
Consider this table:
SQL> desc dept;
 Name                          Null?    Type
 ----------------------------- -------- ------------
 DEPTNO                        NOT NULL NUMBER(2)
 DNAME                                  VARCHAR2(14)
 LOC                                    VARCHAR2(13)
And this source datafile, named DEPTS.TXT:
60,CONSULTING,TORONTO
70,HR,OXFORD
80,EDUCATION,
A SQL*Loader controlfile to load this data is DEPTS.CTL:
1 load data
2 infile 'depts.txt'
3 badfile 'depts.bad'
4 discardfile 'depts.dsc'
5 append
6 into table dept
7 fields terminated by ','
8 trailing nullcols
9 (deptno integer external(2),
10 dname,
11 loc)
To perform the load, from an operating system prompt run this command:
sqlldr userid=scott/tiger control=depts.ctl direct=true
This command launches the SQL*Loader user process, connects to the local database
as user SCOTT password TIGER, and then performs the actions specified in the
controlfile DEPTS.CTL. The DIRECT=TRUE argument instructs SQL*Loader to use
the direct path rather than conventional insert (which is the default). Taking the
controlfile line by line:
Line Purpose
1 Start a new load operation.
2 Nominate the source of the data.
3 Nominate the file to write out any badly formatted records.
4 Nominate the file to write out any unselected records.
5 Add rows to the table (rather than, for example, truncating it first).
6 Nominate the table for insertion.
7 Specify the field delimiter in the source file.
8 If there are missing fields, insert NULL values.
9, 10, 11 The columns into which to insert the data.
This is a very simple example. The syntax of the controlfile can handle a wide
range of formats, with intelligent parsing to cope with variations such as field
length or data type. In general you can assume that it is possible to construct a
controlfile that will understand just about any input datafile. However, do not
think that it is always easy.
TIP It may be very difficult to get a controlfile right, but once you have it, you
can use it repeatedly, with different input datafiles for each run. It is then the
responsibility of the feeder system to produce input datafiles that match your
controlfile, rather than the other way around.
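Reusing a controlfile in this way might look like the command below; the DATA parameter names the input datafile for this particular run. The filenames are invented for the example.

```
sqlldr userid=scott/tiger control=depts.ctl data=depts_20091123.txt log=depts_20091123.log
```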
External Tables
An external table is visible to SELECT statements like any other table, but you cannot
perform DML against it. This is because it does not exist as a segment in the database:
it exists only as a data dictionary construct, pointing toward one or more operating
system files. Using external tables is an alternative to using SQL*Loader, and is often
much more convenient.
The operating system files of external tables are located through Oracle directory
objects. Directories are also a requirement for Data Pump, discussed later in this
chapter.
Directories
Oracle directories provide a layer of abstraction between the user and the operating
system: you as DBA create a directory object within the database, which points to a
physical path on the file system. Permissions on these Oracle directories can then be
granted to individual database users. At the operating system level, the Oracle user
will need permissions against the operating system directories to which the Oracle
directories refer.
Directories can be created either from a SQL*Plus prompt or from within Database
Control. To see information about directories, query the view DBA_DIRECTORIES.
Each directory has a name, an owner, and the physical path to which it refers. Note that
Oracle does not verify whether the path exists when you create the directory—if it does
not, or if the operating system user who owns the Oracle software does not have
permission to read and write to it, you will only get an error when you actually use the
directory. Having created a directory, you must give the Oracle database user(s) who
will be making use of the directory permission to read from and write to it, just as your
system administrators must give the operating system user permission to read from
and write to the physical path.
EXAM TIP Directories are always owned by user SYS, but any user to whom
you have granted the CREATE ANY DIRECTORY privilege can create them.
Figure 23-1 demonstrates how to create directories, using SQL*Plus. In the figure,
user SCOTT attempts to create a directory pointing to his operating system home
directory on the database server machine. This fails because, by default, users do not
have permission to do this. After being granted permission, he tries again. As the
directory creator, he will have full privileges on the directory. He then grants read
permission on the directory (and therefore any files within it) to all users, and read
and write permission to one user. The query against ALL_DIRECTORIES shows that
the directory (like all directories) is owned by SYS: directories are not schema objects.
This is why SCOTT cannot drop the directory, even though he created it.
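The sequence described for Figure 23-1 can be sketched in SQL*Plus as follows. The directory name, path, and grantee here are invented for the example; the actual names used in the figure may differ.

```sql
-- As a privileged user: allow SCOTT to create directories.
GRANT CREATE ANY DIRECTORY TO scott;

-- As SCOTT: create a directory over his home directory.
CREATE DIRECTORY scott_dir AS '/home/scott';

-- Grant read to all users, and read/write to one user (jon is hypothetical).
GRANT READ ON DIRECTORY scott_dir TO PUBLIC;
GRANT READ, WRITE ON DIRECTORY scott_dir TO jon;

-- All directories are owned by SYS, so SCOTT cannot drop the one he created:
SELECT owner, directory_name, directory_path FROM all_directories;
```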
Using External Tables
A common use of external tables is to avoid the necessity to use SQL*Loader to read
data into the database. This can give huge savings in the ETL (extract-transform-load)
cycle typically used to update a DSS system with data from a feeder system. Consider
the case where a feeder system regularly generates a dataset as a flat ASCII file, which
should be merged into existing database tables. One approach would be to use
SQL*Loader to load the data into a staging table, and then a separate routine to
merge the rows from the staging table into the DSS tables. This second routine cannot
start until the load is finished. Using external tables, the merge routine can read the
source data from the operating system file(s) without having to wait for it to be loaded.
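For instance, an external table over the DEPTS.TXT file shown earlier might be declared as follows. This is a sketch: it assumes a directory object (here called ext_dir) already points to the operating system directory holding the file, and the access parameters mirror the earlier controlfile.

```sql
-- Hypothetical external table over depts.txt; ext_dir is assumed to exist.
CREATE TABLE dept_ext (
  deptno NUMBER(2),
  dname  VARCHAR2(14),
  loc    VARCHAR2(13))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL)
  LOCATION ('depts.txt'));

-- The table can now be queried (but not modified) like any other:
SELECT * FROM dept_ext;
```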
