
CHAPTER 17 • MONITORING AND TUNING LATCHES, LOCKS, AND WAITS
836
If your system administrator manages these systems, you may not be privy to the
mapping of physical disks. Perhaps, in part, the administrator doesn’t understand the
importance of such mapping or doesn’t know how to create such a map. Either way,
it’s imperative that you make clear to your system administrator that you need to
know where this data resides, and with what other systems it interacts.
Correcting I/O problems can make a huge difference in performance. This action
alone can transform you into a hero. Trust me. I’ve been there.
How’s the Shared Pool Doing?
After tuning I/O, you may still be seeing a good deal of latch contention. Now’s the
time to look at the shared pool and take advantage of the scripts provided in Chapter 15.
A poorly tuned shared pool can cause all sorts of latch issues. Run the various
scripts that monitor the hit ratios of the shared pool. Are your hit ratios low? If so,
you probably need to add memory.
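If you don’t have the Chapter 15 scripts at hand, a quick sanity check of the library
cache hit ratio might look something like this (one common formulation; on a busy
system, a ratio well below 99 percent suggests trouble):

SQL> SELECT ROUND(SUM(pins) / (SUM(pins) + SUM(reloads)) * 100, 2) "Hit Ratio"
     FROM v$librarycache;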
WARNING Any time you add memory to an Oracle memory structure such as the
shared pool, make sure you don’t add so much that you cause the system to start thrashing
memory pages between the swap disks and memory. This will inevitably result in performance
problems, and you’ll only wind up worse off than before.
If the hit ratios are not low and you see thrashing, make sure you have not overallocated
memory to the point that you are paging or excessively swapping memory
to and from disk. If you are, you must correct this problem immediately. You’ll
need to enlist the help of your system administrator to determine whether you’re having
system memory contention issues.
In keeping with the theme of reducing I/O, remember that the fewer blocks the
database has to deal with, the less work it does, and the less latch contention you will
see. Do everything you can to reduce I/Os during queries. Make sure your tables are
allocating block storage correctly (look at PCTUSED and PCTFREE). Making sure tables
load in the order of the primary key (or most often used) index columns, and tuning
your SQL to return the result set in the fewest number of block I/Os (logical or
physical), will reduce latch contention. We guarantee it.
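For example, a table subject to heavy updates might deserve more free space per
block. The following is a sketch only; the table name and values are illustrative, not
recommendations:

SQL> ALTER TABLE orders PCTFREE 20 PCTUSED 40;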


Finally, don’t just throw CPUs, disks, and memory at the problem: that’s the wrong
kind of lazy solution, and doing so often prolongs the problem. As your database system
grows, even faster hardware will not be able to handle the load. The only
exception to this rule is if you simply do not have enough disk space to properly distribute
the I/O of the database. If that is the case, then you simply have to buy more
disk, pronto.
Copyright ©2002 SYBEX, Inc., Alameda, CA
www.sybex.com
NOTE In some cases, lack of memory in the shared pool or the database buffer cache
actually is the problem. This can be true if you see low hit ratios in any of the memory
areas.
Fragmentation of the shared pool can also be a problem. Consider changing the
SHARED_POOL_RESERVED_SIZE parameter, which works together with the parameter
_SHARED_POOL_RESERVED_MIN_ALLOC.
These parameters affect where PL/SQL code is stored in the shared pool. If a piece of
code is larger than _SHARED_POOL_RESERVED_MIN_ALLOC, it will be
stored in an area of the shared pool set aside by the parameter SHARED_POOL_
RESERVED_SIZE. If there isn’t enough memory available for that chunk to be stored
in reserved memory, it will be stored in the normal memory area of the shared pool.
You can positively affect shared pool fragmentation by increasing the
_SHARED_POOL_RESERVED_MIN_ALLOC parameter so that only your largest PL/SQL
programs are loaded into the reserved area. This approach can go a long way toward
eliminating fragmentation issues.
Another way to limit fragmentation of the shared pool is to use the
DBMS_SHARED_POOL.KEEP procedure to pin often-used PL/SQL objects in
the shared pool. (See Chapter 20 for more on DBMS_SHARED_POOL.) You might
consider pinning commonly used objects in the SGA every time the database starts up.
Doing so will help improve performance and will go a long way toward reducing
performance problems.
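As a sketch (the package name here is purely illustrative), pinning a package might
look like this; the second argument, ‘P’, tells KEEP the object is a package, procedure,
or function:

SQL> EXECUTE DBMS_SHARED_POOL.KEEP('SCOTT.ORDER_PKG', 'P');

Put a call like this in a startup script or trigger so the object is pinned each time the
instance comes up.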
Tune Up Your SQL
If everything looks hunky-dory with the shared pool, then make sure you are using
reusable SQL statements with bind variables as much as possible. If you aren’t, you can
cause all sorts of problems, including latching contention. See Chapter 16 for more
information on how to write reusable SQL and how to determine if SQL needs to be
rewritten. You may also want to take advantage of cursor sharing in Oracle8i, which is
also discussed in Chapter 16.
General Tuning for Latch Contention
Because there are various levels of latches, contention for one latch can cause contention
against other, lower-level latches. A perfect example is the interplay between the
several redo copy latches and the single redo allocation latch. Depending on the size
of the redo to be written, Oracle will often opt to use one of the several redo copy
latches rather than hold the one redo allocation latch for the duration of the write.
Having acquired a redo copy latch, Oracle then quickly tries to acquire the redo
allocation latch. Oracle needs the allocation latch only long enough to allocate space
in the redo log buffer for the entries it needs to write; then it releases the latch for
other processes to use. Unfortunately, a delay in acquiring or releasing the redo
allocation latch can keep the processes holding redo copy latches waiting. The bottom
line is that you must always deal with latch contention level by level, tuning from the
highest level (15) down to the lowest (0).
Consider increasing the _SPIN_COUNT parameter if you are seeing excessive sleeps
on a latch. On many systems it defaults to 2000, but yours might be different. If
you’re seeing problems with redo copy latches or other latch sleeps, see what you can
do by playing with this parameter.
You can use the ALTER SYSTEM command to reset the spin count as well, which
means you don’t have to shut down the database. Here’s the syntax for this:
ALTER SYSTEM SET "_SPIN_COUNT" = 4000;
After you have reset the spin count, let the system run normally for a few minutes
and then check to see if the number of spins has dropped. Also, has there been any
change in the number of sleeps? (Note that sleeps for the redo copy latch are not
unusual.)
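A quick way to watch this is a query against V$LATCH (a typical formulation; the
SPIN_GETS column counts gets obtained while spinning):

SQL> SELECT name, gets, misses, spin_gets, sleeps
     FROM v$latch
     WHERE sleeps > 0
     ORDER BY sleeps DESC;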
WARNING Remember that hidden or undocumented parameters are not supported
by Oracle in most cases. That includes _SPIN_COUNT (though it was a documented
parameter until Oracle8). With this in mind, test all hidden parameters before
you decide to use them in production, and check with Oracle Support for any known bugs.
TIP In conjunction with your latch contention tuning, keep in mind that tuning bad SQL
statements can have a huge impact on latching overall. So by all means tune the instance
as best you can—but often your best results will come from SQL tuning.
Tuning Redo Copy Latch Problems
Oracle’s multiple redo copy latches are designed to relieve the hard-pressed single
redo allocation latch. When using a redo copy latch, Oracle acquires the redo allocation
latch only long enough to get memory in the redo log buffer allocated. Once
that operation is complete, it releases the redo allocation latch and writes the redo log
entry through the redo copy latch. Note that sleeps for the redo copy latches are
normal and unique to this latch.
A process will sleep if it fails to acquire one of the redo copy latches. When it
wakes up, it tries to acquire the next redo copy latch in order, trying one at a time
until it is successful. Oracle executes a sleep operation between each acquisition
attempt, so you will see increases in the SLEEPS column of the V$LATCH data dictionary
view. That being the case, if you get multiple processes fighting for this latch,
you are going to get contention. You can do a couple of things to try to correct this
problem.
Increase the number of redo copy latches by increasing the default value of the
LOG_SIMULTANEOUS_COPIES parameter, and consider adjusting the
LOG_ENTRY_PREBUILD_THRESHOLD parameter as well. Check your operating system
documentation for restrictions on increasing these values.
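In the init.ora, such a change might be sketched as follows (the values shown are
illustrative only, not recommendations):

log_simultaneous_copies = 16        # e.g., twice the CPU count
log_entry_prebuild_threshold = 2048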
Tuning Redo Allocation Latch Problems
Oracle’s lone redo allocation latch serializes access to the redo log buffer, allocating
space to it for the server processes. Sometimes this latch is held for the entire period
of the redo write, and sometimes just long enough for the allocation of memory in
the redo log buffer. The parameter LOG_SMALL_ENTRY_MAX_SIZE sets a threshold
for whether the redo allocation latch will be acquired for the duration of the redo log
buffer write. If the size of the redo is smaller (in bytes) than LOG_SMALL_ENTRY_
MAX_SIZE, the redo allocation latch will be used. If the redo is larger, a redo copy
latch will be used. So if you see latch contention in the form of sleeps or spins on the
redo allocation latch, consider reducing LOG_SMALL_ENTRY_MAX_SIZE.
NOTE There is a school of thought that recommends setting LOG_SMALL_ENTRY_MAX_SIZE to 0 and
always using the redo copy latches. We contend that things in Oracle are rarely so black and
white. Always test a setting like this, and always be willing to accept that something else
will work better.
Other Shared Pool Latching Problems
Latching issues in the shared pool are usually caused by insufficient memory allocation.
As far as database parameters go, there isn’t a lot to tune with respect to the
shared pool beyond memory. Of course, maintain your typical vigilance over I/O distribution
and bad SQL.
PART III: Beyond Simple Database Management
Tuning Buffer Block Waits
Data block waits listed in the V$WAITSTAT view can indicate an insufficient number
of free lists available on the table or index where the wait is occurring. You may need
to increase it. In Oracle8i you can dynamically increase or decrease the number of free
lists in an object by using the ALTER TABLE statement with the FREELISTS keyword in
the STORAGE clause.
NOTE By the way, don’t expect to see waits in V$WAITSTAT for the FREELIST class. This
statistic applies only to free list groups. Free list groups are used in Oracle Parallel Server
(OPS) configurations, so it’s unlikely that you’ll use them if you are not using OPS.
Another concern is the setting of the object’s INITRANS parameter. The INITRANS
parameter defaults to 1, but if there is substantial DML activity on the table, you may
need to increase the setting. This parameter, as well, can be adjusted dynamically.
Note that by increasing either free lists or INITRANS for an object, you are reducing
the total space available in a block for actually storing row data. Keep this in mind.
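Both adjustments might be sketched as follows (ORDERS is an illustrative table
name, and the values are examples, not recommendations):

SQL> ALTER TABLE orders STORAGE (FREELISTS 4);
SQL> ALTER TABLE orders INITRANS 4;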
It can be hard to identify exactly which object is causing the buffer block wait
problems. Perhaps the easiest way is to try to capture the waits as they occur, using the
V$SESSION_WAIT view. Remember that this view is transitory, and you might need to
create a monitoring script to try to catch some object usage trends. Another way to
monitor object usage is to enable table monitoring and watch the activity recorded in
the SYS.DBA_TAB_MODIFICATIONS view. You can create a job to copy those stats to a
permanent table before you update the statistics. See Chapter 16 for more on table
monitoring.
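Enabling monitoring and checking the results might look like this (the table name
is illustrative):

SQL> ALTER TABLE orders MONITORING;
SQL> SELECT table_name, inserts, updates, deletes
     FROM sys.dba_tab_modifications;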
WARNING Carefully measure the performance impact of monitoring. It is generally
insignificant, but on a system that is already “performance challenged,” you may further
affect overall performance by enabling monitoring. Nevertheless, the potential gains from
knowing which tables are getting the most activity may well outweigh the performance
concerns. Remember that short-term pain for long-term gain is not a bad thing. You can pay
me now, or you can pay me later.
It’s the same old saw: The easiest way to reduce buffer block waits is to tune your
I/O and then tune your SQL. Statements and databases that run efficiently will reduce
the likelihood of buffer block waits.
By the way—others suggest that reducing the block size of your database is
another solution. This approach might or might not reduce waiting; in any
case, it’s not a good idea. The overwhelming advantages of larger block sizes
cannot be ignored.
Last Word: Stay on Top of I/O Throughput!

We discussed proper placement of database datafiles in Chapter 4. If you’re running
multiple databases, I/O distribution becomes critical. In addition, file
placement, partitioning, tablespaces, and physical distribution all affect I/O
throughput to an even greater degree. Thus the concepts discussed in Chapter 4
have direct application in tuning methodologies. We’ll close this chapter with
seven tenets for maximizing I/O throughput:
1. Separate data tablespaces from index tablespaces.
2. Separate large, frequently used tables into their own tablespaces. If you
partition tables, separate each partition into its own tablespace.
3. Determine which tables will frequently be joined together and attempt to
distribute them onto separate disks. If you can manage to put them
on separate controllers as well, so much the better.
4. Put temporary tablespaces on their own disks, particularly if intense disk
sorting is occurring.
5. Beware of the system tablespace. Oftentimes DBAs think it is not heavily
used. You might be surprised how frequently it is read from and written
to. Look at V$FILESTAT and see for yourself.
6. Separate redo logs onto different disks and controllers. This is for both
performance and recoverability reasons.
7. Separate your archived redo logs onto different disks. The archiving
process can have a significant impact on the performance of your system
if you do not distribute the load correctly.
TIP One last bit of advice: On occasion, it’s the actual setup of the disk and file
systems that is hindering performance. Make sure you ask your system administrator
for help if you are having serious performance problems. He or she might have
additional monitoring tools on hand that can help you solve your problem. A lazy
DBA takes advantage of all resources at his or her disposal, always.
CHAPTER 18
Oracle8i Parallel Processing

FEATURING:
Parallelizing Oracle operations 844
Using parallel DML and DDL 845
Executing parallel queries 849
Performing parallel recovery operations 854
Tuning and monitoring parallel processing operations 855
Parallel processing is the process of using a multiprocessor computer to
divide a large task into smaller operations to be executed in parallel. In
Oracle, a single database operation can be divided into subtasks, which are
performed by several different processors working in parallel. The result is
faster, more efficient database operations.
In this chapter, we discuss several options for effectively implementing parallel
processing. The chapter begins with some basics of parallelizing operations, and then
discusses how to use parallel DML and DDL, execute parallel queries, and perform
parallel recovery operations. Finally, you will learn about the parallel processing
parameters and how to monitor and tune parallel processing operations.
Parallelizing Oracle Operations
In Oracle8i, parallel processing is easy to configure, and it provides speed and
optimization benefits for many operations, including the following:

• Batch bulk changes
• Temporary rollup tables for data warehousing
• Data transfer between partitioned and nonpartitioned tables
• Queries
• Recovery operations
Prior to Oracle8i, you needed to configure your database instance manually for parallel
DML operations. Oracle8i can assign these values automatically: setting the init.ora parameter
PARALLEL_AUTOMATIC_TUNING to TRUE will establish all of the necessary parameters
to default values that will work fine in most cases. (Oracle recommends that
PARALLEL_AUTOMATIC_TUNING be set to TRUE whenever parallel execution is
implemented.)
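In the init.ora file, that single setting looks like the following:

parallel_automatic_tuning = true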
NOTE Setting the PARALLEL_AUTOMATIC_TUNING parameter to TRUE automatically
sets other parallel processing parameters. If necessary, you can adjust individual parallel
processing parameters in the init.ora file to tune your parallelized operations. See the
“Tuning and Monitoring Parallel Operations” section later in this chapter for details.
When you use parallel processing, the database evaluates the number of CPUs on
the server and the number of disks on which the table’s data is stored in order to
determine the default degree of parallelism (DOP). The default degree of parallelism is
determined by two initialization parameters. First, Oracle estimates the number of
blocks in the table being accessed (based on statistics in the data dictionary) and
divides that number by the value of the initialization parameter
PARALLEL_DEFAULT_SCANSIZE. Next, you can limit the number of query servers used by
default by setting the initialization parameter PARALLEL_DEFAULT_MAX_SCANS.
The smaller of these two values is the default degree of parallelism. For example, if
you have a table with 70,000 blocks and the parameter
PARALLEL_DEFAULT_SCANSIZE is set to 1000, the default degree of parallelism is 70.
Rather than accepting the default degree of parallelism, you can tell Oracle what the
degree of parallelism should be. You can assign a degree of parallelism when you create
a table with the CREATE TABLE command or modify a table with the ALTER TABLE
command. Also, you can override the default degree of parallelism by using the
PARALLEL hint (as explained in the “Using Query Hints to Force Parallelism” section later
in this chapter). To determine the degree of parallelism, Oracle will first look at the
system parameters, then the table settings, and then the hints in the SQL statements.
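For example, setting a table-level degree of parallelism might be sketched like this
(CUSTOMER is the sample table used later in this chapter; the degree of 4 is
illustrative):

SQL> ALTER TABLE customer PARALLEL (DEGREE 4);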
NOTE When you are joining two or more tables and the tables have different degrees of
parallelism associated with them, the highest value represents the maximum degree of
parallelism.
Using Parallel DML and DDL
You can use parallel DML to speed up the execution of INSERT, DELETE, and UPDATE
operations. Also, any DDL operations that both create and select can be placed in
parallel. The PARALLEL option can be used for creating indexes, sorts, tables—any DDL
operation, including SQL*Loader.
Enabling and Disabling Parallel DML
When you consider what is involved in performing standard DML statements for
INSERT, UPDATE, and DELETE operations on large tables, you will be able to see the
advantage of parallel processing. To use parallel DML, you first need to enable it. Use
the following statement to enable parallel DML:
ALTER SESSION ENABLE PARALLEL DML;
When you’re finished with the task you want to perform in parallel, you can
disable parallel DML, as follows:
ALTER SESSION DISABLE PARALLEL DML;
Alternatively, simply exiting the session disables parallel DML.
You can also use the ALTER SYSTEM command to enable and disable the
PARALLEL option.
Creating a Table with Parallel DML
Let’s work through an example to demonstrate parallel DML in action. In this example,
you will use parallel DML to create a table that combines information from two other
existing tables. The example involves data from a product-buying club of some sort
(such as a music CD club or a book club). The two tables that already exist are
PRODUCT and CUSTOMER.
The PRODUCT table contains information about the products that a customer has
ordered. It has the following columns:
CUST_NO
PROD_NO
PROD_NAME
PROD_STAT
PROD_LEFT
PROD_EXPIRE_DATE
PROD_OFFERS
The CUSTOMER table contains information about each customer, including the
customer’s name, address, and other information. It has the following columns:
CUST_NO
CUST_NAME
CUST_ADD

CUST_CITY
CUST_STATE
CUST_ZIP
CUST_INFO
Using parallel DML, you will create a third table named PROD_CUST_LIST. This
table will contain combined information from the PRODUCT and CUSTOMER tables,
based on specific criteria. It will have the following columns:
CUST_NO
PROD_NAME
PROD_NO
CUST_NAME
CUST_ADD
CUST_CITY
CUST_STATE
CUST_ZIP
PROD_LEFT
PROD_OFFERS
The CUSTOMER table has a one-to-many relationship with the PRODUCT table.
Using parallel DML is a fast way to create the PROD_CUST_LIST table, since this
method uses the power of multiple CPUs.
For the PROD_CUST_LIST table, you will identify all customers who have five
products left prior to their club contract’s expiration (PROD_LEFT = 5). You might use
this information to send a discounted renewal option to these customers. You also
will include customers whose contract has expired and who have been sent three or
fewer renewal offers (PROD_OFFERS <= 3). You may want to offer these customers a
special incentive to get them back. Listing 18.1 shows the code to create the table.
Listing 18.1: Creating a Table with Parallel DML
SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> INSERT
INTO prod_cust_list (SELECT
b.cust_no
, a.prod_name
, a.prod_no
, b.cust_name
, b.cust_add
, b.cust_city
, b.cust_state
, b.cust_zip
, a.prod_left
, a.prod_offers
FROM
product a, customer b
WHERE a.cust_no = b.cust_no
AND ((a.prod_left = 5 AND a.prod_expire_date > SYSDATE)
OR (a.prod_expire_date <= SYSDATE AND a.prod_offers <= 3)));
COMMIT;
ALTER SESSION DISABLE PARALLEL DML;
The first ALTER SESSION command enables parallel processing. The INSERT
statement that follows processes the operation in parallel. The final ALTER SESSION
statement disables the PARALLEL option.
What you accomplished here with the INSERT statement can be done with the
DELETE statement and UPDATE statement as well. You can see how this feature could
be useful for creating and updating data warehousing applications, as well as for
reporting from them.
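For instance, a parallel UPDATE on the PRODUCT table might be sketched like this
(the degree of 4 in the hint and the status value are illustrative):

SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> UPDATE /*+ PARALLEL(product, 4) */ product
     SET prod_stat = 'EXPIRED'
     WHERE prod_expire_date <= SYSDATE;
SQL> COMMIT;
SQL> ALTER SESSION DISABLE PARALLEL DML;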
Using Parallel DDL
Parallel DDL (PDDL) is something of a misnomer; rather than literally creating an object in parallel,
PDDL loads the data in parallel. PDDL applies to situations in which you build the
object in the same statement in which you define it. The example in Listing 18.1
populated a third table with an INSERT ... SELECT; the DDL equivalent is CREATE
TABLE ... AS SELECT, which is parallel DDL because the statement creates the table
and populates it with data at the same time. Similarly, the SELECT
command and associated INSERT command can be split into parallel processes. Thus,
any of your DDL operations that both create and select can be placed in parallel.
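A parallel CREATE TABLE ... AS SELECT might be sketched as follows (the degree
of 4 and the column list are illustrative):

SQL> CREATE TABLE prod_cust_list
     PARALLEL (DEGREE 4)
     AS SELECT b.cust_no, a.prod_name, a.prod_no
     FROM product a, customer b
     WHERE a.cust_no = b.cust_no;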
Here is an example of using parallel DDL to create an index:
SQL> CREATE INDEX product_cust_no_n1
ON product
(cust_no)
PARALLEL;
In this example, index creation results in a full table scan, but it is accomplished in
parallel. The SELECT portion is run using PARALLEL.
Parallel Loading with SQL*Loader
If you have a lot of data to load via SQL*Loader (discussed in depth in Chapter 22),
using the PARALLEL option can reduce the load time. However, if you think about
what SQL*Loader is accomplishing in a direct-path load operation, you will see
that parallel loading is a bit more complex. In a direct-path load, SQL*Loader skips
the step of looking for space in existing blocks. It goes to the table’s high-water mark
and starts inserting rows into new blocks, periodically reestablishing the new
high-water mark.
Multiple processes need to know what each of the other processes is doing. Since
only one process can access the table header block at a time, setting
PARALLEL=TRUE for multiple direct loads changes what the loading processes do. In
this case, each process creates its own temporary segment in the tablespace that is
being loaded. The indexes are not maintained. Once each process has completed the
load it is assigned, each temporary segment is merged into the table by adding its
extents to the header block of the table.
Because the parallel load is accomplished in this fashion, each process needs to
have its own input file. This requires some planning on your part, rather than leaving
the task up to the Parallel Manager. You must drop all indexes and re-create them
when the load is complete.
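A parallel direct-path load might therefore be invoked like this (the user, control
files, and datafiles are illustrative; note that each invocation gets its own input file):

sqlldr userid=scott/tiger control=prod1.ctl data=prod1.dat direct=true parallel=true
sqlldr userid=scott/tiger control=prod2.ctl data=prod2.dat direct=true parallel=true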
Executing Parallel Queries
The ideal environment for implementing a parallel query (PQ) is one in which tables
and indexes are partitioned, so that the query can access multiple partitions. Using
partitioning in conjunction with parallelism is the best way to speed up the execution
of your SQL code. Parallel queries on partitioned tables execute quickly through the
use of partition pruning—a very large table is separated into many different sections,
and then the parallel operations access only the parts needed, rather than the whole
table. (See Chapter 24 for details on Oracle partitioning.)
Parallel query operation (PQO) employs a producer-consumer input/output approach.
A SQL statement is handled through a client process, the Query Coordinator (QC), and
parallel execution (PX) processes.
QC. The QC takes the original SQL, breaks it apart, and sends it to the various CPUs.
The PX slave receives the SQL from the QC and then gathers the data from the desired
tables. The producer part of the PX slave accesses the database (produces data). The
consumer part of the PX slave accepts (consumes) data from the producer. The PX
slave returns the results to the QC, where the results are combined into one return
statement and returned to the client process.
The total number of PX slaves on the instance is controlled by the
PARALLEL_MAX_SERVERS parameter in init.ora. PX slaves are borrowed from a pool of slaves
in the instance as needed. Communication between slaves is handled by exchanging
messages through a message queue in the SGA. For example, if the
PARALLEL_MAX_SERVERS parameter is set to 8, a maximum of eight PX slaves can be
borrowed from the pool. The number actually used might be lower, depending on how
the memory pool is set up and how many other processes are presently running with
the PARALLEL option on. You can adjust the size of the message buffer and the
PARALLEL_MAX_SERVERS parameter, as described in the “Tuning and Monitoring
Parallel Operations” section later in this chapter.
Using One PX Slave
The following is an example of a parallel query executed using one PX slave, using
the PRODUCT table from the example presented in the “Creating a Table with Parallel
DML” section earlier in this chapter.
SQL> SELECT
COUNT(1)
FROM product;
Figure 18.1 shows the process flow when the parallel query is invoked. First, the
server process that communicates with the client process becomes the QC. The QC is
responsible for handling communications with the client and managing an additional
set of PX slave processes to accomplish most of the work in the database. The
QC enlists multiple PX slave processes, splits the workload between those slave
processes, and passes the results back to the client. The example shown in Figure 18.1
depicts three PX slave processes, which are considered to be one PX slave.
FIGURE 18.1 Parallel query execution with one PX slave
Here’s how the query is handled behind the scenes:
1. The client process sends the SQL statement to the Oracle server process.
2. The server process, whose pool of PX slaves is limited by PARALLEL_MAX_SERVERS, does the
following:
• Develops the best parallel access path to retrieve the data
• From the original SQL, creates multiple queries that access specific table partitions
and/or table ROWID ranges
• Becomes the QC to manage the PX slave processes
• Recruits PX slave processes to execute the rewritten queries
• Assigns partitions and ROWID ranges to the PX slave processes
3. The PX slave processes do the following:
• Accept queries from the QC
• Process the partitions and ROWID ranges assigned by the QC
• Communicate results to other PX slave processes via messages through
message queues
4. The QC does the following:
• Receives the result sets from the PX slave processes
• Performs final aggregation if necessary
• Returns the final result set to the client process
Using Two PX Slaves
Two PX slaves are used for each parallel query execution path for a merge or a hash
join operation, or when a sorting or an aggregation operation (functions such as AVG,
COUNT, MAX, and MIN) is being accomplished in the original query. In this case,
each slave acts as both producer and consumer in the relationship. The slaves that
access the database produce data, which is then consumed by the second set of PX
slaves.
Here is an example of a parallel query executed using two PX slaves, using the
PRODUCT table from the example presented in the “Creating a Table with Parallel
DML” section earlier in this chapter:
SQL> SELECT
prod_name
, COUNT(1)
FROM product
GROUP BY prod_name;
In a sorting operation (GROUP BY), the first set of PX slave processes will select
rows from the database and apply limiting conditions. The result will be sent to the
second set of PX slave processes for sorting. The second set of PX slave processes has
the task of sorting rows within a particular range. Each of the PX (producer) slave
processes that retrieved data directly sends its results to the designated slave process,
according to the sort key.
Figure 18.2 shows the process flow when two sets of PX slaves are used. This figure
depicts two columns of PX slave processes, with communication between the columns,
representing the two sets of slave processes. The column of PX slave processes next to
the database is considered the producer, because these slave processes get the data;
the second column is considered the consumer, because these processes receive the
data from the first set of PX slave processes and send it back to the QC.
FIGURE 18.2: Parallel query execution with two PX slaves
Using Query Hints to Force Parallelism
Using hints to override the degree of parallelism established on a table is an easy way
to manage the work to be performed. For example, suppose you have a table that has
the PARALLEL option turned on and set to 2. Now you need to get the
SUM(AMOUNT) for each quarter from this table, so you want to increase the degree
of parallelism to get the results faster. If you raise the degree of parallelism to 6, for
example, you divide the work six ways and speed up the operation.
NOTE As noted earlier in the chapter, the order of precedence is system parameters first;
then table settings, which override system parameters; then hints, which override
table settings.
You can enable parallelism with the PARALLEL hint and disable it with the
NOPARALLEL hint. The PARALLEL hint has three parameters:
• The table name
• The degree of parallelism
• The instance setting
Since there are no keywords in the hint, you must be careful with the syntax. In
the following example, a hint is used to tell the Optimizer to use a degree of
parallelism of 3 when querying the CUSTOMER table.
[Figure 18.2 depicts SQL and results flowing from the client process through the Query Coordinator to a producer column and a consumer column of PX slave processes, using the SQL area and sort area in the database.]
SELECT /*+ PARALLEL(customer,3) */ *
FROM customer;
You can also use hints to specify the number of instances that should be involved
in performing the query. Simply include a value for the instances after the degree of
parallelism parameter, as in this example:
SELECT /*+ PARALLEL(customer,5,3) */ *
FROM customer;
This example sets a degree of parallelism of 5 and sets instances to 3, to use three
instances for resolving the query. This effectively sets 15-way parallelism.
When you want to disable parallelism for the query, use the NOPARALLEL hint.
The following disables the PARALLEL option used in the preceding example, without
modifying the init.ora parameters:
SELECT /*+ NOPARALLEL(customer) */ *
FROM customer;
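To tie the precedence rules together, here is a hedged sketch (the CUSTOMER table and the degrees shown are illustrative, not from the chapter) of a table-level parallel setting and a hint that overrides it:

```sql
-- Set a default degree of parallelism of 2 on the table
ALTER TABLE customer PARALLEL (DEGREE 2);

-- This query runs with the table's default degree (2)
SELECT COUNT(*) FROM customer;

-- The hint overrides the table setting, using 6-way parallelism
SELECT /*+ PARALLEL(customer,6) */ COUNT(*) FROM customer;

-- Restore serial execution as the table's default
ALTER TABLE customer NOPARALLEL;
```

Because the hint is scoped to a single statement, this is usually safer than changing the table attribute for a one-off report.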
Performing Parallel Recovery Operations
During an Oracle database recovery operation, the recovery server process has a lot to
do. It reads a recovery record from the log file; reads the block from the datafile, if
necessary, and places it into the buffer cache; and then applies the recovery record. It
repeats this process until all of the recovery records have been applied. This means
that the recovery server process is busy performing a great deal of reading, writing,
and blocking on input/output during database recovery. Given that database recovery
is often a time-pressured operation, the ability to speed it up by parallelization is
clearly welcome.

NOTE Prior to Oracle 7.1, the only form of parallel recovery was to start up multiple user
sessions to recover separate datafiles at the same time. Each session read through the
redo logs independently and applied changes for its specified datafile. This method
depended on the ability of the I/O subsystem to parallelize the separate operations. If the
operating system couldn’t parallelize, you would see little improvement in performance.
Oracle 7.1 and later offer true parallel recovery capabilities. With this feature, the
recovery server process acts as a coordinator for several slave processes. The recovery
server process reads a recovery record from the redo log file and assigns the recovery
record to a parallel slave process, repeating these steps until all recovery records have
been applied. The slave process performs the other steps: It receives each recovery
record from the recovery server process, reads a block into buffer cache if necessary,
and applies the recovery record to the data block. The slave process continues until it
is told that the recovery is complete.
You can invoke parallel recovery in either of two ways:
• Set RECOVERY_PARALLELISM in the init.ora file.
• Supply a PARALLEL clause with the RECOVER command in Server Manager.
(See Chapter 10 for more information about the RECOVER command.)
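As a hedged sketch of both approaches (the degree of 4 is illustrative, and the exact PARALLEL clause syntax should be verified against your release):

```sql
-- Approach 1: in init.ora, let recovery use up to 4 slave processes
--   RECOVERY_PARALLELISM = 4
--   PARALLEL_MAX_SERVERS = 8

-- Approach 2: supply a PARALLEL clause on the RECOVER command
-- in Server Manager
RECOVER DATABASE PARALLEL (DEGREE 4 INSTANCES 1);
```

The init.ora setting applies to every recovery; the PARALLEL clause applies only to the recovery being run.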
You must set PARALLEL_MAX_SERVERS to a value above 0 before you can enable
parallel recovery, because parallel recovery uses the parallel servers as the recovery
slaves. The RECOVERY_PARALLELISM parameter specifies the number of processes
that will participate in parallel recovery. The RECOVERY_PARALLELISM setting cannot
be greater than the PARALLEL_MAX_SERVERS setting; Oracle will not exceed the value
of PARALLEL_MAX_SERVERS for recovery, even if the DBA requests a higher degree of
parallelism. The PARALLEL_MAX_SERVERS parameter is discussed in more detail in
the next section.

Contrary to Oracle’s claim that there is little benefit to using parallel recovery with
a setting of less than 8, personal experience reveals that the best performance is
achieved with RECOVERY_PARALLELISM set to two times the CPU count. Systems
with faster disk channels may benefit from a higher setting, perhaps three times the
CPU count. We recommend tuning toward disk channel saturation.
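That rule of thumb is simple enough to express as a quick calculation; the helper below is our own sketch, not an Oracle utility:

```python
def recommended_recovery_parallelism(cpu_count, fast_disk_channels=False):
    """Rule of thumb from the text: 2x the CPU count, or 3x when the
    disk channels are fast enough to absorb the extra I/O."""
    multiplier = 3 if fast_disk_channels else 2
    return multiplier * cpu_count

# A 4-CPU server: start with RECOVERY_PARALLELISM = 8
print(recommended_recovery_parallelism(4))        # 8
# The same server with fast disk channels: try 12
print(recommended_recovery_parallelism(4, True))  # 12
```

Treat the result as a starting point and tune toward disk channel saturation, as recommended above.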
NOTE If your system is using asynchronous I/O, there will be little benefit to using par-
allel recovery.
Tuning and Monitoring Parallel Operations
Oracle8i uses many initialization parameters to control how various parallel-
processing operations will operate. As explained earlier in the chapter, when you set
the init.ora parameter PARALLEL_AUTOMATIC_TUNING to TRUE, Oracle automati-
cally sets other parallel processing parameters. If necessary, you can adjust the other
parameters individually to tune parallel operations on your system. Table 18.1 lists
the init.ora parameters for parallel server processes, along with a brief description,
the default value, and the valid values for each parameter.
TUNING AND MONITORING PARALLEL OPERATIONS
TABLE 18.1: INIT.ORA PARAMETERS FOR PARALLEL SERVER PROCESSES

Parameter | Default Value | Description
LARGE_POOL_SIZE | 0 | Controls whether large objects are stored in the large pool section of the shared pool. The minimum value is 600KB.
PARALLEL | FALSE | Controls whether direct loads are performed using parallel processing.
PARALLEL_MAX_SERVERS | 0 | Sets the maximum number of parallel processes that can be created for the instance. The maximum setting is 3599.
PARALLEL_MIN_PERCENT | 0 | Sets the minimum percentage of requested parallel processes that must be available in order for the operation to execute in parallel. The maximum setting is 100.
PARALLEL_MIN_SERVERS | 0 | Sets the minimum number of parallel processes created at instance startup to be used by parallel operations in the database. Valid values are 0 to the value of PARALLEL_MAX_SERVERS.
PARALLEL_SERVER | FALSE | Enables Oracle Parallel Server (OPS).
PARALLEL_INSTANCE_GROUP | NULL | Defines the instance group (by name) used for query server processes.
PARALLEL_EXECUTION_MESSAGE_SIZE | Depends on operating system; about 2KB | Controls the amount of shared pool space used by parallel query operations.
PARALLEL_BROADCAST_ENABLED | FALSE | Optimizes parallelized joins involving very large tables joined to small tables.
PARALLEL_AUTOMATIC_TUNING | FALSE | Automatically sets other parallel parameters.
PARALLEL_ADAPTIVE_MULTI_USER | FALSE | Varies the degree of parallelism based on the total perceived load on the system.
FAST_START_PARALLEL_ROLLBACK | FALSE (no parallel recovery) | Specifies the number of processes spawned to perform parallel recovery. Can be set to LOW (the number of recovery servers may not exceed 2 × CPU count) or HIGH (the number of recovery servers may not exceed 4 × CPU count).
SESSIONS | (1.1 * PROCESSES) + 5 | Sets the number of user and system sessions. Valid values are 1 to 2^31.
TRANSACTIONS | 1.1 * SESSIONS | Sets the number of concurrent active transactions. Valid values are 4 to 2^32.
RECOVERY_PARALLELISM | 0 | Defines the number of processes that will participate in parallel recovery during instance or media recovery. This cannot be set to more than the PARALLEL_MAX_SERVERS setting.
PROCESSES | Depends on CPU count and PARALLEL_MAX_SERVERS | Sets the number of operating system user processes that can simultaneously connect to Oracle. The minimum value is 6.
PARALLEL_TRANSACTION_RECOVERY | FALSE | Specifies whether recovery is performed in parallel mode.
PARALLEL_THREADS_PER_CPU | Depends on operating system | Specifies the number of parallel query operation threads executed per CPU; used to compute the degree of parallelism for parallel operations where the degree is not set.
PARALLEL_SERVER_INSTANCES | 1 | Specifies the number of instances configured.

Oracle recommends that PARALLEL_AUTOMATIC_TUNING be set to TRUE whenever
parallel execution is implemented. This setting will automatically set the
PARALLEL_ADAPTIVE_MULTI_USER, PROCESSES, SESSIONS, PARALLEL_MAX_SERVERS,
PARALLEL_THREADS_PER_CPU, and PARALLEL_EXECUTION_MESSAGE_SIZE parameters.

The following parameters can be adjusted with the ALTER SYSTEM command:
• FAST_START_PARALLEL_ROLLBACK
• PARALLEL_ADAPTIVE_MULTI_USER
• PARALLEL_INSTANCE_GROUP
• PARALLEL_THREADS_PER_CPU

Here is an example of changing the PARALLEL_THREADS_PER_CPU setting:
ALTER SYSTEM SET parallel_threads_per_cpu = 8;
This will set the system parameter PARALLEL_THREADS_PER_CPU to 8. You can
alter each of the other parameters in the same way.
The following sections discuss several parameters that can affect parallel execution
performance dramatically, views for monitoring parallel query processing, and paral-
lel query tuning.
Setting the Message Buffer Size
Oracle uses message buffers for communication between interoperating parallel
processes. One buffer is required for each producer-consumer connection (described
in the "Executing Parallel Queries" section earlier in this chapter). Generally,
increasing the size of the message buffer increases throughput between the producer
and consumer PX slaves, which in turn reduces execution time. The number of
connections for a particular query is, at most, the square of the maximum degree of
parallelism of the query, because each producer has a connection to each consumer.
For example, if the maximum degree of parallelism of the query is 2, there could be
as many as 4 connections; if the maximum degree of parallelism is 8, there will be
64 connections. Based on this relationship, one can deduce that memory requirements
for message buffers increase greatly as the degree of parallelism increases.
The system-wide amount of memory required for the message buffers also depends
on the number of concurrent queries executing and the size of the message buffer.
The following formula is used for calculating the buffer space in Oracle8i:

buffer space = (3 * PARALLEL_EXECUTION_MESSAGE_SIZE) * (CPU_COUNT + 2) * PARALLEL_MAX_SERVERS
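These two relationships lend themselves to a quick sanity check; the helper functions below are our own sketch, not part of Oracle:

```python
def px_connections(degree):
    # Each producer connects to each consumer, so at most degree**2 links
    return degree ** 2

def px_buffer_space(message_size, cpu_count, max_servers):
    # The book's Oracle8i formula:
    # (3 * PARALLEL_EXECUTION_MESSAGE_SIZE) * (CPU_COUNT + 2)
    #   * PARALLEL_MAX_SERVERS
    return 3 * message_size * (cpu_count + 2) * max_servers

print(px_connections(8))  # 64 connections at degree 8
# 2KB messages, 4 CPUs, 16 max servers -> 589824 bytes of buffer space
print(px_buffer_space(2048, 4, 16))
```

Running the numbers this way before raising PARALLEL_EXECUTION_MESSAGE_SIZE makes the memory cost of a larger buffer concrete.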
The init.ora parameter PARALLEL_EXECUTION_MESSAGE_SIZE sets the message
buffer size. The default value for the PARALLEL_EXECUTION_MESSAGE_SIZE
parameter is 2KB. If your system has enough memory, increasing this value to
between 4KB and 8KB will likely increase performance between the consumer and
producer PX slaves, if the message buffers are indeed a bottleneck. (If your process
is I/O bound, this is a good indication that the message buffers are a bottleneck.)
Since the PARALLEL_EXECUTION_MESSAGE_SIZE value directly increases and
decreases the need for shared pool space, and the shared pool is part of the SGA,
increasing the size of this parameter will require you to increase the size of the SGA
as well. If there is not enough space, parallel queries will fail with an ORA-4031
memory allocation failure message.
Setting the Message Buffer Location
Another important decision is which SGA pool the message buffers will reside in. If
PARALLEL_AUTOMATIC_TUNING is set to FALSE, the buffers will be allocated from
the shared pool. If this parameter is set to TRUE, the buffers will be allocated from
the large pool. Since the shared pool often has resource contention, it is recommended
that you set PARALLEL_AUTOMATIC_TUNING to TRUE, so that the message buffers
will be placed in the large pool.
To properly configure the size of the large pool, tune the other parameters that
affect parallel execution (see Table 18.1) and bring up the instance with PARALLEL_
AUTOMATIC_TUNING set to TRUE. Make sure that all parameters affected by this
parameter being set to TRUE are configured to meet your needs. The only way you
will know if the parameters are configured properly is to start processing.
You can retrieve the total size of the message pool from V$SGASTAT with this
query:
SQL> SELECT
pool, SUM(bytes)
FROM v$sgastat
WHERE name = 'PX msg pool'
GROUP BY pool;

TIP We've found that when we set PARALLEL_AUTOMATIC_TUNING to TRUE, the
settings rarely need to be modified. At most, we've needed to place a hint in the SQL to
increase the degree of parallelism for only a few processes. In one case, for example, a
process that was I/O bound was running on a computer that had four CPUs, which were
not even being used. We modified the SQL code to increase the degree of parallelism and
use all the CPUs. The process ran in less than half the time it had taken earlier in the week.
Setting the Minimum and Maximum Parallel Servers
When you execute your SQL in a parallel operation, Oracle will increase the number
of query servers as demand requires, up to the maximum number of parallel
execution servers for an instance, specified by the PARALLEL_MAX_SERVERS
parameter. Oracle recommends that you use the same value across all instances in
the parallel server environment.
The PARALLEL_MAX_SERVERS parameter is used to size the large pool, which is
part of the SGA. If you didn't allocate enough memory to the SGA, Oracle will use
only what is available. To avoid problems, make sure that PARALLEL_MAX_SERVERS
is set properly. You want to ensure that there is adequate memory for times of peak
database utilization. If PARALLEL_MAX_SERVERS is set too high, memory shortages
may occur during periods of high memory allocation, resulting in degraded
performance. If PARALLEL_MAX_SERVERS is set too low, some of your queries may
not have enough parallel execution servers available to perform in parallel.
PARALLEL_MIN_SERVERS is the opposite of PARALLEL_MAX_SERVERS. Oracle will
decrease the number of query servers as demand falls, down to the number set by
the PARALLEL_MIN_SERVERS parameter, which specifies the minimum number of
parallel execution servers for an instance. If there is not enough memory in the
SGA to handle this minimum, the SQL will not be executed in parallel.
The good news is that setting PARALLEL_AUTOMATIC_TUNING to TRUE will
automatically set these two parameters for you. In my experience, the automatic
settings work well. As you run through your day-to-day operations, you should be able
to tell if these two parameters need adjustment. Keep in mind that if you adjust either
of these parameters, you will be affecting the SGA and how it is allocated.
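As a hedged illustration only (the values are examples, not recommendations for your system), a manual configuration might look like this in init.ora:

```
# init.ora fragment: manual parallel server pool sizing
parallel_min_servers = 4     # slaves started at instance startup
parallel_max_servers = 16    # upper bound; also influences large pool sizing
large_pool_size      = 8M    # room for PX message buffers
```

With PARALLEL_AUTOMATIC_TUNING set to TRUE, these would instead be derived for you.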
Viewing Parallel Query Information
Several dynamic performance views (V$ views) can help you evaluate the runtime
performance of parallel queries. Armed with the knowledge you gain from these
monitoring tools, you should be able to identify parallel performance problem areas.
Table 18.2 lists the V$ views of interest for monitoring parallel processing.
TABLE 18.2: ORACLE V$ VIEWS FOR PARALLEL PERFORMANCE MONITORING
View Name Description
V$PQ_SYSSTAT Shows system-level statistics for parallel queries
V$PQ_SESSTAT Shows session-level statistics for parallel queries
V$PQ_SLAVE Displays statistics for each active parallel query slave process in the instance
V$PQ_TQSTAT Displays statistics for all parallel queries and query operations
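For example, a hedged starting point (the 'Servers%' filter is just one convenient way to narrow the output) for checking the slave pool through V$PQ_SYSSTAT:

```sql
-- How many PX slaves are busy, idle, started, and shut down?
SELECT statistic, value
FROM v$pq_sysstat
WHERE statistic LIKE 'Servers%';
```

If "Servers Busy" is routinely at PARALLEL_MAX_SERVERS, queries are probably queuing for slaves.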
Tuning Parallel Query Processes
Although many parallel queries perform well, you may encounter some problem areas.
When this happens, you must be ready to tune the processes. You will need to enhance
or rewrite the SQL involved in the parallel operations. To accomplish this, you need to
understand how Oracle implements parallelism and the various parameters that affect
the parallel configuration and performance. With this foundation, you can learn how to
decode parallel execution plans and to identify areas for improvement.
For example, Listing 18.2 shows an execution plan that reveals SQL that could be
improved.
Listing 18.2: A Sample Execution Plan
EXPLAIN PLAN
SET STATEMENT_ID = 'markb' FOR
SELECT
a.task_number
, b.expenditure_number
, sum(b.amount)
FROM
tasks a
, expenditures b
WHERE
a.task_id = b.task_id
AND to_char(b.creation_date,'dd-mon-yyyy')
BETWEEN '01-JAN-2000' AND '31-DEC-2000'
GROUP BY
a.task_number
, b.expenditure_number;
SELECT STATEMENT COST = 19326822
2.1 SORT GROUP
3.1 SORT GROUP BY
4.1 HASH JOIN
5.1 TABLE ACCESS FULL ‘EXPENDITURES’
5.2 PARTITION RANGE ITERATOR
6.1 TABLE ACCESS FULL ‘TASKS’

You can see that we are doing a full table scan on both tables. This would not be
necessary if we were using an index on the driving table. All we would need to do is
get rid of the to_char on b.creation_date, and then the index would take effect,
letting the parallel query access the data in parallel without a full table scan. This
would allow us to access both the data and the associated index in parallel, speeding
up the execution of the SQL.
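Here is a hedged sketch of that rewrite (assuming an index exists on EXPENDITURES.CREATION_DATE); converting the literals instead of the column leaves the index usable:

```sql
-- Convert the literals, not the column, so the index on
-- creation_date remains usable by the (parallel) access path
SELECT
  a.task_number
, b.expenditure_number
, SUM(b.amount)
FROM tasks a, expenditures b
WHERE a.task_id = b.task_id
AND b.creation_date BETWEEN TO_DATE('01-JAN-2000','DD-MON-YYYY')
                        AND TO_DATE('31-DEC-2000','DD-MON-YYYY')
GROUP BY a.task_number, b.expenditure_number;
```

Re-running EXPLAIN PLAN after a change like this should show the full scan on the driving table replaced by an index-based access.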
