Total Analytical Error (TAE) from Concept to Calculations
Do you struggle with understanding statistical concepts applicable to your role and responsibilities in the development of biologics?
Are you aware of the trend toward Bayesian statistical inference and the shift away from the frequentist approach?
Because I often hear “yes” to the first question and “no” to the second, I want to explore analytical error as a timely topic. It is highly relevant given the issuance earlier this year of ICH Q14 Analytical procedure development | European Medicines Agency and its introduction of the term “Total Analytical Error” as an “alternative approach to individual assessment of accuracy and precision.”
Because existing assay validation guidance does not include a specific calculation, interpreting it (and understanding regulatory reviewers’ expectations when reporting against it) has been confusing. This fall issue of the Quality Quarterly is intended to: (1) raise awareness and (2) offer some clarity to offset (and validate!) any confusion you might be experiencing.
Bench scientists intuitively understand that the accuracy and precision of a test method define its “resolving power”: the ability to discriminate between samples with different amounts of the analyte being measured.
Most are familiar with existing guidance such as:
- ICH Q2(R2) Validation of analytical procedures | European Medicines Agency
- 〈1033〉 Biological Assay Validation
These documents are useful tools for those tasked with designing studies to measure accuracy and precision, but they do not currently use the term “total analytical error.” However, the glossary in the current Bioanalytical Method Validation Guidance for Industry | FDA (BMV) does define total error for a reportable measurement and provides a general calculation:
“Total error is the sum of the absolute value of the errors in accuracy (%) and precision (%). Total error is reported as percent (%) error.”
Total Error (%) = (Standard Deviation / Measured Mean) + (|Measured Mean − Expected Value| / Expected Value)
Other published formulations of total error include:
- Total analytical error = bias + (1.65 × imprecision)
- Total error = %bias + (1.96 × %CV)
- Total error = bias² + variance + irreducible error
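To make the differences concrete, here is a small sketch (with made-up bias and CV values, not drawn from any guidance) that evaluates the additive, multiplier-based, and root-sum-of-squares formulations on the same inputs:

```python
# Illustrative comparison of published total-error formulations.
# The bias and CV values below are hypothetical, for demonstration only.
import math

pct_bias = 5.0   # |measured mean - expected| / expected, as a percent
pct_cv = 4.0     # standard deviation / measured mean, as a percent

# BMV glossary: sum of the absolute accuracy and precision errors
te_bmv = abs(pct_bias) + pct_cv

# One-sided 95% multiplier on imprecision
te_165 = abs(pct_bias) + 1.65 * pct_cv

# Two-sided 95% multiplier on imprecision
te_196 = abs(pct_bias) + 1.96 * pct_cv

# Root-sum-of-squares ("Pythagorean") combination of bias and SD
te_rss = math.sqrt(pct_bias**2 + pct_cv**2)

print(f"additive: {te_bmv:.2f}%, 1.65x: {te_165:.2f}%, "
      f"1.96x: {te_196:.2f}%, RSS: {te_rss:.2f}%")
```

For the same inputs, the root-sum-of-squares value is always smaller than the simple additive value, which is one way to see why the formulations cannot all be “the” total error at once.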
The last of these appears to use Pythagoras’s theorem to “prove” the addition of bias² and variance (standard deviation squared; SD²) to arrive at total error. Note that the equation above the hypotenuse in the diagram squares the bias and standard deviation (SD) terms before adding them, then returns them to their original units of measure by taking the square root. If this is indeed true, then the BMV guidance equation cannot also be true.
However, you might recognize the addition of bias² and variance (SD²) as part of the denominator in the Cpm equation referenced in the current USP 〈1033〉 chapter. In the Cpm equation, the total measurement uncertainty of a reported value is combined with process variance in the denominator and related to the width of the product specification (USL − LSL) in the numerator. From this value, the probability of an out-of-specification (OOS) result can be derived.
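As a rough sketch of that chain of reasoning (all numbers hypothetical, and the denominator assembled per my reading of the Cpm idea above), the following combines process variance, assay variance, and bias, then derives an OOS probability assuming normally distributed reportable values:

```python
# Sketch: combine process variance with assay variance and bias^2 in a
# Cpm-style denominator, then derive the probability of an OOS result
# for a normal distribution of reportable values. Hypothetical numbers.
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

USL, LSL = 110.0, 90.0   # product specification (e.g., % of label claim)
mu = 100.0               # process mean, centered in the specification
sd_process = 2.0         # process standard deviation
sd_assay = 1.5           # assay standard deviation (imprecision)
bias = 1.0               # assay bias, in the same units

# Total variability of a reported value: process + assay + bias^2
sd_total = math.sqrt(sd_process**2 + sd_assay**2 + bias**2)
cpm = (USL - LSL) / (6.0 * sd_total)

# Probability mass falling outside either specification limit
p_oos = norm_cdf((LSL - mu) / sd_total) + (1.0 - norm_cdf((USL - mu) / sd_total))
print(f"Cpm = {cpm:.2f}, P(OOS) = {p_oos:.4%}")
```

With these illustrative inputs the assay terms noticeably widen the total distribution, which is exactly why limits on assay bias and precision can be budgeted against an OOS target.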
By working backward from a maximum acceptable OOS rate (i.e., the manufacturer’s risk tolerance) and an estimated process variance term, the limits on assay precision and bias can be established and validation results reported per ICH Q2(R2) Validation of analytical procedures | European Medicines Agency using a frequentist approach (as evidenced by an expectation of confidence intervals):
- Precision: The standard deviation, relative standard deviation (coefficient of variation) and confidence interval should be reported for each type of precision investigated and be compatible with the specification limits.
- Accuracy: An appropriate confidence interval (e.g., 95%) for the mean percent recovery or the difference between the mean and accepted true value (as appropriate) should be compared to the acceptance criterion to evaluate analytical procedure bias. The appropriateness of the confidence interval should be justified.
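The backward calculation described above can be sketched as follows (hypothetical specification, process SD, and risk tolerance; a centered process mean is assumed for simplicity):

```python
# Sketch: start from a maximum acceptable OOS rate and an estimated
# process SD, and solve for the allowable combined assay error
# (bias^2 + SD^2). All numbers are hypothetical.
import math
from statistics import NormalDist

USL, LSL = 110.0, 90.0
mu = 100.0           # assume the process is centered in the specification
sd_process = 2.0     # estimated process standard deviation
max_oos = 0.001      # risk tolerance: at most 0.1% OOS

# For a centered normal, P(OOS) = 2 * (1 - Phi(z)), z = (USL - mu)/sd_total,
# so the z value needed to hit the OOS target is:
z_needed = NormalDist().inv_cdf(1.0 - max_oos / 2.0)
sd_total_max = (USL - mu) / z_needed

# Whatever variance remains after the process term is the assay's budget
assay_budget = sd_total_max**2 - sd_process**2   # bounds bias^2 + SD^2
print(f"sd_total must be <= {sd_total_max:.3f}; "
      f"bias^2 + SD^2 must be <= {assay_budget:.3f}")
```

If the budget came out negative, the process variance alone would already exceed the risk tolerance, and no assay could satisfy it; that edge case is the practical reason the process term must be estimated first.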
Complicating matters slightly has been a shift toward implementing a tolerance interval approach to setting lot-release specification limits. This is where the Bayesian inference approach comes in: it establishes limits based on the probability of future results falling within specification limits. It requires another set of concepts and equations, which are described in USP 〈1210〉 “Statistical Tools for Procedure Validation”.
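USP 〈1210〉 covers the machinery in detail; purely as a sketch of why tolerance-interval multipliers exceed the familiar 1.96, the following uses Howe’s well-known approximation for the two-sided k factor (with the Wilson-Hilferty approximation for the chi-square quantile to stay in the standard library). The study summary numbers are hypothetical:

```python
# Sketch: a two-sided (coverage P, confidence gamma) tolerance interval
# is mean +/- k * SD, where k exceeds the z-interval multiplier because
# it must cover a proportion P of FUTURE results with confidence gamma.
# Howe's approximation for k; Wilson-Hilferty for the chi-square quantile.
import math
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile."""
    z = NormalDist().inv_cdf(p)
    a = 2.0 / (9.0 * df)
    return df * (1.0 - a + z * math.sqrt(a)) ** 3

def howe_k(n, coverage=0.95, confidence=0.95):
    """Howe's approximate two-sided tolerance-interval factor."""
    df = n - 1
    z = NormalDist().inv_cdf((1.0 + coverage) / 2.0)
    chi2 = chi2_quantile(1.0 - confidence, df)
    return z * math.sqrt(df * (1.0 + 1.0 / n) / chi2)

n, mean, sd = 30, 98.5, 2.1   # hypothetical validation-study summary
k = howe_k(n)                 # larger than 1.96 for any finite n
print(f"k = {k:.3f}; tolerance interval = "
      f"({mean - k*sd:.2f}, {mean + k*sd:.2f})")
```

Note the contrast with a plain 95% interval: for n = 30 the multiplier is roughly 2.55 rather than 1.96, and it shrinks toward 1.96 only as n grows.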
Statisticians from the FDA have recently co-authored an open-access paper published in AAPS which spells out the advantage of a Bayesian approach over a frequentist approach: “Comparing a Bayesian Approach (BEST) with the Two One-Sided t-Tests (TOSTs) for Bioequivalence Studies.” AAPS J 24, 97 (2022).
I learned of this paper from a post on LinkedIn and invited clarification on what appears to me to be a hybrid approach (a Bayesian interval with frequentist bounds?). There are different camps among advocates of Bayesian approaches, so I am waiting for more clarity. In the meantime, this open-access paper, sent to me by consultant statistician Dr. Janice Callahan, is a good primer on the subject: “Bayes or not Bayes, is this the question?” (nih.gov).
Although I am not a biostatistician, I have had countless conversations with statistical colleagues over the past 30 years to reach my current level of understanding of TAE. I am still learning about Bayesian ideas and applications, and I am hoping for clear consensus and communication if regulatory expectations shift toward them.