Table 12.1  Determination of Method Accuracy/Recovery and Precision

Sample Concentration (mg)      Accuracy/Recovery
                               Replicate 1     Replicate 2     Replicate 3
1.000                          98.91%          98.79%          98.44%
2.000                          99.08%          98.54%          98.39%
3.000                          98.78%          98.68%          98.01%
Mean                           98.62%
Standard deviation             0.32%
Relative standard deviation    0.32%
Acceptance criteria            Accuracy (mean): 98–102%        Precision (RSD): ≤2.0%
Assessment                     Pass                            Pass
product) spiked with known amounts of impurities (if impurities are not available,
see specificity, Section 12.2.3).
Table 12.1 illustrates a representative accuracy study. To document accuracy,
the guidelines recommend that data be collected from a minimum of nine determi-
nations over a minimum of three concentration levels covering the specified range
(i.e., three concentrations, three replicates each). The data should be reported as
the percent recovery of the known or added amount, or as the difference between
the mean and true value with confidence intervals (±1SD). In Table 12.1, data
are shown relative to 100%, and the mean recovery for n = 9 samples is 98.62%
with %RSD = 0.32%. In this example both the accuracy and precision pass the
pre-defined acceptance criteria of 98–102% and ≤2%, respectively.
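To make the arithmetic explicit, the summary statistics of Table 12.1 can be reproduced with a few lines of code. The sketch below (Python, standard library only) pools the nine recovery values from the table and checks them against the example acceptance criteria; the variable names and pass/fail logic are illustrative only.

    import statistics

    # Recovery values (%) from Table 12.1: three concentration levels x three replicates
    recoveries = [98.91, 98.79, 98.44,   # 1.000 mg
                  99.08, 98.54, 98.39,   # 2.000 mg
                  98.78, 98.68, 98.01]   # 3.000 mg

    mean = statistics.mean(recoveries)
    sd = statistics.stdev(recoveries)      # sample standard deviation (n - 1)
    rsd = 100 * sd / mean                  # relative standard deviation, %

    accuracy_ok = 98.0 <= mean <= 102.0    # example criterion: mean recovery 98-102%
    precision_ok = rsd <= 2.0              # example criterion: RSD <= 2.0%

    print(f"Mean recovery = {mean:.2f}%, SD = {sd:.2f}%, RSD = {rsd:.2f}%")
    print("Accuracy:", "Pass" if accuracy_ok else "Fail")
    print("Precision:", "Pass" if precision_ok else "Fail")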
12.2.2 Precision
The precision of an analytical method is defined as the closeness of agreement among
individual test results from repeated analyses of a homogeneous sample. Precision
is commonly assessed at three levels: repeatability, intermediate precision, and reproducibility.
12.2.2.1 Repeatability


Repeatability (intra-assay precision), the ability of the test method to generate the same results over a short time interval under identical conditions, should be determined from a minimum of nine determinations covering the specified range of the procedure (i.e., three concentrations, three replicates each), or from a minimum of six determinations at 100% of the test or target concentration.
Representative repeatability results are summarized in Table 12.2, where results are
summarized for six replicate injections of the same sample. The 0.12% RSD easily
passes the ≤2% acceptance criterion.
Table 12.2  Determination of Repeatability by Replicate Injections of the Same Sample

Injection                      Response
1                              488,450
2                              488,155
3                              487,986
4                              489,247
5                              487,557
6                              487,923
Mean                           488,220
Standard deviation             582.15
RSD                            0.12%
Acceptance criteria (RSD)      ≤2%
Assessment                     Pass
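The same %RSD calculation applies to repeatability. A minimal sketch using the six injection responses of Table 12.2:

    import statistics

    responses = [488450, 488155, 487986, 489247, 487557, 487923]  # peak areas, Table 12.2
    mean = statistics.mean(responses)
    rsd = 100 * statistics.stdev(responses) / mean
    print(f"Mean = {mean:.0f}, RSD = {rsd:.2f}%  ->", "Pass" if rsd <= 2.0 else "Fail")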
12.2.2.2 Intermediate Precision
Intermediate precision refers to the agreement between the results from within-
laboratory variations due to random events that might normally occur during the
use of a test method, such as different days, analysts, or equipment. To determine
intermediate precision, an experimental design should be employed so that the effects (if any) of the individual variables can be monitored. Typical intermediate
precision results are shown in Table 12.3. In this study, two different analysts prepared and analyzed six sample preparations from one batch of
samples and two preparations each from two additional batches (all samples are
assumed to be the same concentration); all data from each analyst were pooled
for the summary in Table 12.3. The analysts prepared their own standards and
solutions, used a column from a different lot, and used a different HPLC system
to evaluate the sample solutions. Each analyst successfully attained the precision requirement of ≤2% RSD, and the %-difference in the mean values between the two analysts was 0.7%, indicating no statistically significant difference between the mean values obtained (Student's t-test, P = 0.01).
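A comparison of two analysts' means is typically made with a two-sample t-test. The sketch below is a generic illustration rather than a reconstruction of Table 12.3: the result arrays stand in for each analyst's ten individual assay results, which are not reported in the table (only rounded summary statistics are).

    import statistics
    from scipy import stats

    # Hypothetical individual assay results (mg) for each analyst; the real study
    # pooled 10 preparations per analyst (6 from one batch, 2 each from two more).
    analyst_1 = [13.92, 13.88, 13.95, 13.90, 13.97, 13.91, 13.86, 13.94, 13.89, 13.93]
    analyst_2 = [13.94, 13.90, 13.96, 13.89, 13.93, 13.98, 13.87, 13.95, 13.91, 13.92]

    mean_1, mean_2 = statistics.mean(analyst_1), statistics.mean(analyst_2)
    pct_diff = 100 * abs(mean_1 - mean_2) / mean_1

    t_stat, p_value = stats.ttest_ind(analyst_1, analyst_2)   # two-sample t-test

    print(f"% difference of means = {pct_diff:.2f}%")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    print("Means differ" if p_value < 0.01 else "No significant difference at the 1% level")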
12.2.2.3 Reproducibility
Documentation in support of collaborative studies among different laboratories
should include the standard deviation, relative standard deviation (or coefficient of
variation), and the confidence interval. Table 12.4 lists some results typical of a
reproducibility study. To generate the data shown here, analysts from two different
laboratories (but not the same analysts involved in the intermediate precision study)
prepared and analyzed six sample preparations from one product batch and two
preparations each from two additional batches (all samples are assumed to be
the same concentration). They prepared their own standards and solutions, used
a column from a different lot, and used a different HPLC system to evaluate the sample solutions.
Table 12.3  Measurement of Intermediate Precision

                               Amount
                               Analyst One     Analyst Two
Mean                           13.9 mg         14.0 mg
Standard deviation             0.05 mg         0.03 mg
% RSD                          0.36            0.21
% Difference (means)           0.70%
Acceptance criteria (RSD)      ≤2%
Assessment                     Pass
Table 12.4  Measurement of Reproducibility

                               Amount
                               Lab One         Lab Two
Mean                           14.0 mg         13.8 mg
Standard deviation             0.07 mg         0.14 mg
% RSD                          0.50            1.01
% Difference (means)           1.43%
Acceptance criteria (RSD)      ≤2%
Assessment                     Pass
Each analyst successfully attained the precision requirement of ≤2% RSD, and the %-difference in the mean values between the two laboratories was 1.4%, indicating no statistically significant difference between the mean values obtained (Student's t-test, P = 0.01).
12.2.2.4 Ruggedness
Ruggedness is defined in past USP guidelines as the degree of reproducibility of test
results obtained by the analysis of the same samples under a variety of conditions,
such as different laboratories, analysts, instruments, reagent lots, elapsed assay
times, assay temperature, or days. Ruggedness is a measure of the reproducibility
of test results under the variation in conditions normally expected from labora-
tory to laboratory and from analyst to analyst. The use of the term ruggedness,
however, is falling out of favor; the term is not used by the ICH but is instead
addressed in guideline Q2 (R1) [4] under the discussion of intermediate precision
(Section 12.2.2.2, within-laboratory variations: different days, analysts, equipment,
etc.) and reproducibility (Section 12.2.2.3, between laboratory variations from
collaborative studies).
12.2.3 Specificity

Specificity is the ability to measure accurately and specifically the analyte of interest
in the presence of other components that may be expected to be present in the
sample. Specificity takes into account the degree of interference from other active
ingredients, excipients, impurities, degradation products, and so forth. Specificity in
a test method ensures that a peak’s response is due to a single component (no peak
overlaps). Specificity for a given analyte is commonly measured and documented by
resolution, plate number (efficiency), and tailing factor.
For identification purposes, specificity is demonstrated by (1) separation from
other compounds in the sample, and/or (2) by comparison to known reference
materials.
Separation from Other Compounds. For assay and impurity tests, specificity
can be shown by the resolution of the two most closely eluted compounds that
might be in the sample. These compounds usually are the major component or
active ingredient and the closest impurity. If impurities are available, it must be
demonstrated that the assay is unaffected by the presence of spiked materials
(impurities and/or excipients). If the impurities are not available, the test results are
compared to a second, well-characterized procedure. For assay, the two results are
compared. For impurity tests, the impurity profiles are compared. Comparison of test
results will vary with the particular test method but may include visual comparison
as well as retention times, peak areas (or heights), peak shape, and so forth.
Comparison to Known Reference Materials. Starting with the publication of USP 24, and as a direct result of the ICH process, it is now recommended that a peak-purity test based on diode-array detection (DAD) or mass spectrometry (MS) be used to demonstrate specificity in chromatographic analyses. Modern
DAD technology (Section 4.4.3) is a powerful tool used to evaluate specificity.
DAD detectors can collect spectra across a range of wavelengths for each data
point collected across a peak, and through software processes, each spectrum can
be compared (to the other spectra collected) to determine peak purity. Used in
this manner, DAD detectors can distinguish minute spectral and chromatographic
differences not readily observed by simple overlay comparisons.

However, DAD detectors can be limited, on occasion, in the evaluation of
peak purity by a lack of UV response, as well as by the noise of the system and the
relative concentrations of interfering substances. Also the more similar the spectra
are, and the larger the concentration ratio is, the more difficult it is to distinguish
co-eluted compounds. Mass spectrometry (MS) detection (Section 4.14) overcomes
many of these limitations of the DAD, and in many laboratories it has become the
detection method of choice for method validation. MS can provide unequivocal peak-purity data, including exact mass as well as structural and quantitative information.
The combination of both DAD and MS on a single HPLC instrument can provide
complementary information and ensure that interferences are not overlooked during
method validation.
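The spectral comparison that DAD peak-purity algorithms perform can be thought of as a similarity (correlation) score between the spectrum at the peak apex and spectra taken on the peak's up-slope and down-slope. The sketch below is a deliberately simplified illustration of that idea, not any vendor's algorithm; the spectra and the 0.999 purity threshold are hypothetical.

    import numpy as np

    def spectral_similarity(spec_a, spec_b):
        """Correlation between two UV spectra (1.0 = identical shape)."""
        a = np.asarray(spec_a, dtype=float)
        b = np.asarray(spec_b, dtype=float)
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    # Hypothetical spectra (absorbance at a series of wavelengths) sampled across a peak
    apex       = [0.12, 0.45, 0.88, 1.00, 0.74, 0.31, 0.10]
    up_slope   = [0.12, 0.46, 0.87, 1.00, 0.75, 0.30, 0.10]
    down_slope = [0.15, 0.40, 0.80, 1.00, 0.70, 0.38, 0.14]   # slightly different shape

    for name, spec in [("up-slope", up_slope), ("down-slope", down_slope)]:
        score = spectral_similarity(apex, spec)
        flag = "pure" if score > 0.999 else "possible co-elution"
        print(f"apex vs {name}: similarity = {score:.4f} -> {flag}")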
12.2.4 Limit of Detection and Limit of Quantification
The limit of detection (LOD) is defined as the lowest concentration of an analyte
in a sample that can be detected but not necessarily quantified. It is a limit test
that specifies whether an analyte is above or below a certain value. The limit of
quantification (LOQ, also called limit of quantitation) is defined as the lowest
concentration of an analyte in a sample that can be quantified with acceptable
precision and accuracy by the test method.
Determination of the LOQ is a two-step process. Regardless of the method used, the limits should first be estimated from experimental data, such as the signal-to-noise ratio or the slope of a calibration curve (Sections 4.2.4, 11.2.5). Second, this estimate must be confirmed by results for samples formulated at the LOQ. For further details, see Section 11.2.5.
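One common estimate, listed in ICH Q2(R1), takes LOD ≈ 3.3σ/S and LOQ ≈ 10σ/S, where S is the slope of a low-level calibration curve and σ is the residual standard deviation of the regression. The sketch below applies these formulas to hypothetical calibration data; the resulting estimates would still need to be confirmed with samples prepared at the LOQ.

    import numpy as np

    # Hypothetical low-level calibration data: concentration (ng/mL) vs peak area
    conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
    area = np.array([105.0, 212.0, 498.0, 1015.0, 1980.0, 5020.0])

    slope, intercept = np.polyfit(conc, area, 1)                   # least-squares line
    residuals = area - (slope * conc + intercept)
    sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))        # residual standard deviation

    lod = 3.3 * sigma / slope     # ICH Q2(R1) estimates
    loq = 10.0 * sigma / slope

    print(f"slope = {slope:.1f}, sigma = {sigma:.1f}")
    print(f"LOD ≈ {lod:.2f} ng/mL, LOQ ≈ {loq:.2f} ng/mL")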
12.2.5 Linearity and Range
Linearity is the ability of the test method to provide results that are directly
proportional to analyte concentration within a given range. Linearity generally is
reported as the variance of the slope of the regression line (e.g., standard error from
an Excel regression analysis, as in Fig. 11.7). Range is the interval between the upper
and lower concentrations of analyte (inclusive) that have been demonstrated to be determined with acceptable precision, accuracy, and linearity using the test method
as written. The range is normally expressed in the same units as the results obtained
by the test method (e.g., ng/mL). Guidelines specify a minimum of five concentration
levels for determining range and linearity, along with certain minimum specified
ranges that depend on the type of test method [2, 4]. Table 12.5 summarizes typical
minimum ranges specified by the guidelines [4]. Data to be reported generally
include the equation for the calibration line, the coefficient of determination (r²), and the curve itself, as illustrated in Figures 11.8 and 11.9 based on the data of Table 11.1.
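The linearity statistics normally reported (slope, intercept, r², and the standard error of the slope) come directly from a least-squares fit. A minimal sketch with hypothetical five-level data (not the data of Table 11.1):

    from scipy import stats

    # Hypothetical five-level linearity data: % of target concentration vs response
    x = [80, 90, 100, 110, 120]
    y = [40210, 45180, 50090, 55260, 60130]

    fit = stats.linregress(x, y)
    print(f"response = {fit.slope:.1f} * conc + {fit.intercept:.1f}")
    print(f"r^2 = {fit.rvalue**2:.5f}")
    print(f"standard error of slope = {fit.stderr:.1f}")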
12.2.6 Robustness
The robustness of an analytical procedure is defined as a measure of its capacity to
obtain comparable and acceptable results when perturbed by small but deliberate
variations in specified experimental conditions. Robustness provides an indication of
the test method’s suitability and reliability during normal use. During a robustness
study, conditions are intentionally varied to see if the method results are affected.
The key word in the definition is deliberate.
Table 12.5  Example Minimum Recommended Ranges

Type of Method          Recommended Minimum Range
Assay                   80–120% of the target concentration
Impurities (a)          From the reporting level of each impurity to 120% of the specification
Content uniformity      70–130% of the test or target concentration
Dissolution             ±20% over the specified range of the dissolution test

(a) For toxic or more potent impurities, the range should reflect the concentrations at which these must be controlled.

Table 12.6  Typical Variations to Test Robustness in Isocratic Separations

Factor                            Limit Range
Organic solvent concentration     ±2–3%
Buffer concentration              ±1–2%
Buffer pH (if applicable)         ±0.1–0.2 pH units
Temperature                       ±3°C
Flow rate                         ±0.1–0.2 mL/min
Detector wavelength               ±2–3 nm for 5-nm bandwidth
Injection volume                  Depends on injection type and size
Column lots                       2–3 different lots
Table 12.7  Typical Variations to Test Robustness in Gradient Separations

Factor                            Limit Range
Initial gradient hold time        ±10–20% of hold time
Slope and length                  Slope determined by the gradient range and time; adjust gradient time by ±10–20% and allow the slope to vary
Final hold time                   Adjust to allow last-eluted compound to appear in chromatogram
Other variables (as appropriate)  As listed in Table 12.6
Example HPLC variations are illustrated in Tables 12.6 and 12.7 for isocratic and gradient methods, respectively. Variations should be chosen symmetrically about the value specified in the test method (e.g., for ±2%, variations of +2% and −2%), to form an interval that slightly exceeds the variations that can be expected when the test method is implemented or transferred.
For example, if the buffer pH is adjusted by titration using a pH meter, the typical laboratory error is ±0.1 pH units. To test robustness of a test method
to variations in a specified pH-2.5 buffer, additional buffer might be prepared and
tested at pH-2.3 and pH-2.7 to ensure that acceptable analytical results are obtained.
For instrument settings, manufacturers’ specifications can be used to determine
variability. The range evaluated during the robustness study should not be selected
to be so wide that the robustness test will purposely fail, but rather to represent
the type of variability routinely encountered in the laboratory. Challenging the test
method to the point of failure is not necessary. One practical advantage of robustness
tests is that once robustness is demonstrated over a given range of an experimental
condition, the value of that condition can be adjusted within that range to meet
system suitability without a requirement to revalidate the test method (Section 12.8).
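In practice, a robustness study amounts to re-running the method at symmetric low/nominal/high settings of each factor. The sketch below builds such a one-factor-at-a-time test plan; the nominal method conditions are invented, and the deltas are chosen to mirror the ranges in Table 12.6.

    # Nominal conditions of a hypothetical isocratic method and the deliberate
    # variations to test (one factor at a time, symmetric about the nominal value).
    factors = {
        "acetonitrile (%)":    (30.0, 2.0),
        "buffer pH":           (2.5, 0.2),
        "temperature (deg C)": (35.0, 3.0),
        "flow rate (mL/min)":  (1.0, 0.1),
        "wavelength (nm)":     (254.0, 3.0),
    }

    for name, (nominal, delta) in factors.items():
        low, high = nominal - delta, nominal + delta
        print(f"{name:22s} nominal {nominal:g}, test at {low:g} and {high:g}")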
Robustness ideally is tested late in the development of a test method; if it is not, it is typically one of the first method characteristics investigated during method
validation. However, throughout the method development process attention should
be paid to the identification of which chromatographic conditions are most sensitive
to small changes so that, when robustness tests are undertaken, the appropriate
variables can be tested. Robustness studies also are used to establish the system
suitability test to make sure that the validity of the entire system (including both
the instrument and the test method) is maintained throughout method implemen-
tation and use. In addition, if the results of a test method or other measurements
are susceptible to variations in experimental conditions, these conditions should
be adequately controlled and a precautionary statement included in the method
documentation.
To measure and document robustness, the following characteristics should be
monitored:
• critical peak-pair resolution, Rs
• column plate number, N (or peak width in gradient elution)
• retention time, tR
• tailing factor, TF
• peak area (and/or height) and concentration
Replicate injections should be made during the robustness study to improve the
estimates (e.g., %RSD) of the effect of an experimental-variable change. In many
cases multiple peaks are monitored, particularly when some combination of acidic,
neutral, or basic compounds is present in the sample. It may be useful to include
in the method document a series of compromised chromatograms illustrating the
extremes of robustness (see discussion of Fig. 1.5 of [11]). Such examples, plus
corrective instructions, can be useful in troubleshooting method problems.
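Each of these characteristics can be calculated from the retention times and peak widths reported by the data system. The sketch below uses the common half-height formulas for plate number and resolution and the USP tailing-factor definition (width at 5% height divided by twice the front half-width); the peak measurements themselves are hypothetical.

    def plate_number(t_r, w_half):
        """Plate number N from retention time and peak width at half height."""
        return 5.54 * (t_r / w_half) ** 2

    def resolution(t_r1, w_half1, t_r2, w_half2):
        """Resolution Rs of two adjacent peaks from half-height widths."""
        return 1.18 * (t_r2 - t_r1) / (w_half1 + w_half2)

    def tailing_factor(w_005, f_005):
        """USP tailing factor: width at 5% height / (2 x front half-width at 5% height)."""
        return w_005 / (2.0 * f_005)

    # Hypothetical measurements (minutes) for the critical peak pair
    print(f"N  = {plate_number(6.30, 0.12):.0f}")
    print(f"Rs = {resolution(5.85, 0.11, 6.30, 0.12):.2f}")
    print(f"TF = {tailing_factor(0.20, 0.09):.2f}")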
12.3 SYSTEM SUITABILITY
Although not formally a part of method validation according to the USP, system
suitability tests are an integral part of chromatographic methods [6]. System suit-
ability tests are used to verify that the resolution and precision of the system are
adequate for the analysis to be performed. System suitability tests are based on the
concept that the equipment, electronics, analytical operations, and samples comprise
an integral system that can be evaluated as a whole.
System-suitability tests check for adequate system performance before or dur-
ing sample analysis. Characteristics such as plate number, tailing factor, resolution,
and precision (repeatability) are measured and compared to the method specifica-
tions. System-suitability parameters are measured during the analysis of a ‘‘system
suitability sample’’ (a mixture of the main components and expected degradants
or impurities, formulated to simulate a representative sample). However, samples
consisting of only a single peak (e.g., a drug substance assay where only the API is
present) can be used, provided that a column plate number and tailing factor are
specified in the test method. Replicate injections of the system suitability sample
are compared to determine if requirements for precision are met.
Table 12.8  FDA System Suitability Recommendations

Parameter                  Recommendation         Comments
Retention factor, k        k > 2                  Peak should be well resolved from other peaks and the t0 peak
Injection repeatability    RSD ≤ 1% for n ≥ 5     Measured at time samples are analyzed
Resolution, Rs             Rs > 2                 Measured between peak of interest and closest potential interfering peak
Tailing factor, TF         TF ≤ 2
Column plate number, N     N > 2000               Column characteristics not specified

Source: Data from [10].
Unless otherwise specified by the test method, data from five replicate injections of the analyte are used to calculate the relative standard deviation when the test method requires RSD ≤ 2%; data from six replicate injections are used if the specification is RSD > 2%.
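The repeatability portion of system suitability is simply the %RSD of the replicate standard injections checked against the method's criterion. A minimal sketch with hypothetical peak areas for five injections, using the RSD ≤ 1% recommendation of Table 12.8 as the limit:

    import statistics

    areas = [1523410, 1521870, 1525030, 1519960, 1524220]   # hypothetical peak areas, n = 5
    rsd = 100 * statistics.stdev(areas) / statistics.mean(areas)
    print(f"RSD = {rsd:.2f}% ->",
          "system suitable" if rsd <= 1.0 else "investigate before analysis")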
In a regulated environment, system suitability tests must be carried out prior
to the analysis of any samples. Following blank injections of mobile phase, water,
and/or sample diluent, replicate system suitability injections are made, and the results
compared to method specifications. If specifications are met, subsequent analyses can continue. If the method's system suitability requirements are not met, any problems
with the system or method must be identified and corrected (possibly as part of
a formal out-of-specification [OOS] investigation), and passing system suitability
results must be obtained before sample analysis is resumed. To provide confidence
that the test method runs properly, it is also recommended that additional system
suitability samples (quality control samples or check standards) are run at regular
intervals (interspersed throughout the sample batch); %-difference specifications
should be included for these interspersed samples to make sure the system still
performs adequately over the course of the entire sample run. Alternatively, a
second set of system suitability samples can be included at the end of the run.
In 1994 the FDA published a reviewer guidance document regarding the valida-
tion of chromatographic methods that includes the last-published recommendations
for system suitability [10]. These recommendations are summarized in Table 12.8.
Many practitioners believe that test methods that satisfy the criteria of Table 12.8
will reduce the risk of criticism during a regulatory audit. These guidelines serve
as useful examples; however, actual specifications are set by the user and can vary
significantly according to the method.
12.4 DOCUMENTATION
Validation documentation includes the protocol used to carry out the validation,
the test method, and the validation report. These documents should be written
as controlled documents as part of a quality system (Section 12.9) that ensures
compliance with appropriate regulations.
12.4.1 Validation Protocol
The validation protocol specifies the requirements (validation procedures and accep-
tance criteria) to be satisfied. Where possible, the protocol should reference standard
operating procedures (SOPs) for specific work instructions and analytical methods.
The protocol must be prepared and approved before the official validation process
begins. In addition the validation protocol typically contains the following:
• protocol title
• purpose of the test method to be validated
• description of the test and reference substances
• summary of the test method to be validated, including the equipment, speci-
fied range, and description of the test and reference substances; alternatively,
the detailed method description may be referenced or appended to the
protocol
• validation characteristics to be demonstrated
• establishment and justification of the acceptance criteria for the selected
validation characteristics
• dated signature of approval of a designated person and the quality unit
The protocol title is a brief description of the work or study to be performed,
for example, ‘‘Validation of the Test Method for the HPLC Assay of API X in
Drug Product Y.’’ The purpose should specify the scope and applicability of the
test method. The summary must adequately describe the actual written test method
(which contains enough detail to be easily reproduced by a qualified individual).
To reduce repetition, however, the test method often is included by reference or
as an appendix to the protocol. The specific validation characteristics (accuracy,
precision, etc.) to be evaluated are also included in the protocol, because these are
dependent upon the type of analytical method (Section 12.5). Acceptance criteria for
method validation (e.g., allowable error or imprecision) often are established during
the final phase of method development (sometimes referred to as ‘‘pre-validation’’
experiments; see Section 12.4.2). The designated quality unit representative reviews
and approves the protocol to ensure that the applicable regulatory requirements will be met and the proposed work will satisfy its intended purpose (Section 12.9).
Experimental work outlined in the validation protocol can be designed such
that several appropriate validation characteristics are measured simultaneously.
For example, experiments that measure accuracy and precision can be used as part of linearity studies; LOD and LLOQ can be determined from the range and linearity data; and the solution stability of the sample and standard can be assessed using the same preparations that test accuracy and precision. Executed in this manner, the experimental design makes the most efficient use of time and materials.
12.4.2 Test Method
The test method is the formal document that contains all of the necessary detail to
implement the analytical procedure on a routine basis. The test method is a controlled
document with revision control (the requirement that any document changes are
authorized, and all revisions are available for later comparison), approvals at the
appropriate levels (including the quality unit, Section 12.9), and written with enough
detail to warrant only one possible interpretation for any and all instructions. A
typical test method will include the following:
• descriptive method title
• brief method description or summary
• description of the applicability and specificity, along with any special
precautions (e.g., safety, storage, and handling)
• list of reagents, including source and purity/grade
• equipment, including the HPLC and any other equipment necessary
(balances, centrifuges, pH meters, etc.)
• detailed instrument operating conditions, including integration settings
• detailed description of the preparation of all solutions (mobile phases,
diluents), standards, and samples
• system suitability test description and acceptance criteria
• example chromatograms, spectra, or representative data
• detailed procedures, including an example sample queue (the order in which
standards and samples are run)
• representative calculations
• revision history
• approvals
Once drafted, test methods often are subjected to a pre-validation stage, to
demonstrate that acceptance criteria will be met when the formal validation takes
place. The pre-validation stage typically consists of an evaluation of linearity and accuracy. Sometimes a test of robustness, if it has not already been evaluated
during method development, is then carried out. The validation process usually will
proceed more smoothly, and with lower risk of failure, if the ability to pass all the
key validation criteria is confirmed during the pre-validation stage. A draft method
will become an official test method after a full validation of its intended purpose.
12.4.3 Validation Report
The validation report is a summary of the results obtained when the proposed test
method is used to conduct the validation protocol. The report includes representative
calculations, chromatograms, calibration curves, and other results obtained from
the validation process. Tables of data for each step in the protocol, and a pass/fail
statement for each of the acceptance criteria are also included. A validation report
generally consists of the following sections:
• cover page with the title, author, and affiliations
• signature page dated and signed by appropriate personnel, which may include
the analyst, the group leader, a senior manager, and a quality control and/or
a quality assurance representative
• an itemized list of the validation characteristics evaluated, often in the form
of a table of contents
