If this issue goes out according to schedule, it will be precisely 0.8876712328767120 years since I launched the Quality Quarterly. I want to mark this ~1 year anniversary by thanking those who have taken time to reply to the three prior emails – it has been wonderful to get updates, feedback, and new opportunities to collaborate!
I appreciate all who have stayed subscribed since the start and extend a warm welcome to those who have joined more recently! Past Quality Quarterly issues get posted to the website about three months after they are sent as emails.
The topics covered are catalyzed by client work and corresponding recurring issues and common questions. Feel free to email me with requests to cover an area of your interest!
As the subject line of this email and the above calculated ratio of elapsed time suggest (324 days ÷ 365 days/year), this issue addresses questions and confusion around significant figures. Although this topic often leads to discussions about how to appropriately round measurements, there are convenient conventions codified elsewhere (e.g., US Pharmacopeia General Notices, 7.20 Rounding Rules, or ASTM E29 from the American Society for Testing and Materials).
I would like to provide clarity on:
- What significant figures (sigfigs) are
- How measurement uncertainty (MU) can be used to identify the sigfigs in a mean
- What to do if more sigfigs are required to make decisions than is achievable by the MU
Gaps in collective understanding can make it difficult to choose the number of digits required for listing the target value and acceptance limits for a critical quality attribute on a product specification. This is important because it is the number of digits that drives the rounding necessary to report results on the corresponding certificate of analysis.
Significant Figures Defined
The Nevada Department of Environmental Protection succinctly documents the term “significant figures” in a quality control setting as follows:
“Significant figures include all of the digits in a measurement that are known with certainty plus one more digit, which indicates the uncertainty of the measurement.”
Noted also in this publication is the obvious, “Regardless of the measuring device, there is always some uncertainty in a measurement.”
Intuitively, this is why good scientists make multiple independent measurements and report an average of the replicate results. They know that the bigger the sample size ("n" independent replicates), the greater the confidence in the data and the lower the uncertainty associated with the reported value. However, it is less clear how many of the numbers that a software program spits out as the mean are meaningful (i.e., can be considered sigfigs).
I will use the reporting of relative potency (RP) measurements as an example because the USP information chapter covering bioassay validation (<1033>) provides a target MU value to support a claim of two significant figures in a reported mean. Because relative potency data are assumed to be lognormal, log transformation is required for statistical analysis. This introduces another layer of complexity, so I recommend reading the relevant sections in these links if you are not experienced working in logarithms.
- Significant Figures (US Navy) US Navy
- Significant Figure Rules for logs (laney.edu)
Distinguishing Certain Digits from the Uncertain
Many organizations report RP results in percentage units with 100% being the target listed in the specification. RP may also be expressed as a ratio with a value of 1 being the target. It would not be uncommon to see the latter expressed as 1.0 (indicating two sigfigs) or 1.00 (indicating three sigfigs).
Practices for establishing the acceptance range include listing a +/- value such as +/-20% or +/-0.2 or, more correctly, establishing a multiplicative range (1 × 0.7 and 1 ÷ 0.7, yielding specification limits of 0.7 to 1.4). In the latter case, the specification is an approximation of the additive range in log units (e.g., 0.00 +/- 0.15 log10 RP), which yields 0.71 to 1.41 after back transformation (10^-0.15 to 10^0.15).
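For readers who want to check these numbers, here is a minimal Python sketch (variable names are mine) of the multiplicative range and its log-unit counterpart:

```python
import math

target = 1.0   # relative potency target as a ratio
factor = 0.7   # multiplicative factor from the text

# Multiplicative range around the target: 1 x 0.7 and 1 / 0.7
lower_mult = target * factor   # 0.7
upper_mult = target / factor   # ~1.43 (reported as 1.4 after rounding)

# Additive range in log10 units: 0.00 +/- 0.15
half_width = 0.15
lower_log, upper_log = -half_width, half_width

# Back transformation of the log limits to the ratio scale
lower_bt = 10 ** lower_log     # ~0.708 -> 0.71
upper_bt = 10 ** upper_log     # ~1.413 -> 1.41

print(round(lower_mult, 2), round(upper_mult, 2))   # 0.7 1.43
print(round(lower_bt, 2), round(upper_bt, 2))       # 0.71 1.41
```

Note that the two ranges agree only approximately (0.7 vs. 0.71, 1.43 vs. 1.41), which is exactly the "approximation" point made above.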
In this example, the reported results are typically calculated in log units and rounded after back transformation to the first decimal place. For the lower limit this is one sigfig; for the upper, two sigfigs. However, in the log units, the limits are expressed to the same number of decimal places and the value to the left of the decimal place for both limits is zero.
This leads to the first important point:
The target and the acceptance limits on a specification and corresponding certificate of analysis should be listed in the units of analysis and rounding should also occur in those units.
Correspondingly, the MU must be expressed in those same units of measure, i.e., log10 RP. Furthermore, there must be only one MU value. Statistics require homogeneity of variance across the range of measurements. It is from this single MU metric that the correct number of significant figures is derived.
Per USP <1033>, the standard error of the mean (SEM; precision ÷ √n) based on log-transformed data is the MU metric. It, in turn, is used to calculate what is termed the percent geometric coefficient of variation (%GCV).
According to the current USP chapter, a %GCV between 2% and 20% is said to be indicative of two significant figures in the mean. Note, however, that no explanation or justification is provided for this statement.
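The %GCV/SEM relationship in USP <1033> can be written as %GCV = 100 × (10^SEM − 1). A short Python sketch (the function names are my own) shows that the 20% ceiling corresponds to an SEM of about 0.08 log10 RP:

```python
import math

def pct_gcv(sem_log10: float) -> float:
    """%GCV from an SEM expressed in log10 units: 100 * (10**SEM - 1)."""
    return 100 * (10 ** sem_log10 - 1)

def sem_from_pct_gcv(gcv: float) -> float:
    """Invert the relationship to recover the SEM corresponding to a %GCV."""
    return math.log10(1 + gcv / 100)

# The 20% GCV ceiling corresponds to an SEM of ~0.08 log10 RP
print(round(sem_from_pct_gcv(20), 2))   # 0.08
```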
While this value can be accepted by simply citing guidance issued by a standard setting organization, I always want to have a clear scientific or statistical justification in case there is a regulatory challenge or the standard changes through a revision cycle process. In seeking an alternative approach, I discovered other proposed rules in a 2014 open access publication.
With considerable help from my friend and statistical consultant Dr. Janice Callahan, I gleaned the following about what the author proposes and how he arrived at his rules. Using simulated data distributions for a given mean and two different SEM values, he computed the frequency of the digits at each place (he calls these "decades") in the long strings of numbers generated for the sample set of 8000 individual means. Digit or "decade" certainty is indicated by a skewed distribution of values (high-frequency positions), whereas digits of uncertainty are marked by an essentially even distribution across all possible values (indicative of random frequencies). See Table 1 in the paper and compare the effect of SEM in Table 1A vs. 1B.
From further analysis and his Figures 1 and 2, he derives several rules for identifying significant figures. I am still not totally clear on the math and reasoning behind it, but the second rule is by far the easiest to implement. That rule identifies the number of significant digits supported based solely on the sample size.
We now return to the log10 RP example specification of 0.00 +/- 0.15 and provide additional information: an assay precision of 0.031 with a sample size of 4 gives an SEM of 0.0155. If a test sample returned a reported mean value in log10 RP units of 0.0623, how should the value be rounded? Using rule 1A in the paper, we arrive at the 1/100 decade, the location of the first non-zero digit in the SEM (the 1/100 place, where the 1 is located), and would report the value as 0.06 [see the footnote below for the meaning of the zeros in log10 RP].
Using rule 1B, for a mean reported value of 0.0623, mean/SEM = 4.019, so this C value, as he calls it, has a first digit of 4; rule 1A is therefore used and the result is the same: report a mean of 0.06. However, using rule 2, the number of significant figures in the SEM is only 1 (for n = 1-6; to get 2 sigfigs requires n = 7-100). This would result in the SEM being reported to the 1/10 decade (*see footnote again) and rounding of 0.0623 to the 1/10 decade, which would be 0.1 using conventional rules.
By increasing the sample size to eight replicates in the same example, the SEM becomes 0.01096. Applying rule 1A (directly or via 1B; C = 5.684), we arrive again at the 1/100 decade, and the corresponding last significant digit of the mean is in the 1/100 decade, resulting in a rounded value of 0.06.
Whichever way we calculate the reported value, it falls within the acceptance limits of -0.15 to 0.15 and is thus a passing result but only rules 1A and 1B allow for the specification to be listed to the 1/100 decade. Rule 2 would most often return one sigfig for bioassay tests (n<7), so the original specification +/-0.15 would need to be widened to +/-0.2 or narrowed to +/-0.1.
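To make the worked example concrete, here is a rough Python sketch of rules 1A and 2 as I have paraphrased them above; the function names and the exact rule logic are my reading of the text, not the paper's notation:

```python
import math

def first_nonzero_decade(x: float) -> int:
    """Decimal place (1 = tenths, 2 = hundredths, ...) of the first
    non-zero digit of x, for 0 < x < 1."""
    return -math.floor(math.log10(abs(x)))

def round_mean_rule_1a(mean: float, sem: float) -> float:
    """Rule 1A (as described above): round the mean at the decade of the
    first non-zero digit in the SEM."""
    return round(mean, first_nonzero_decade(sem))

def sem_sigfigs_rule_2(n: int) -> int:
    """Rule 2 (as described above): the SEM carries 1 sigfig for n = 1-6
    and 2 sigfigs for n = 7-100."""
    return 1 if n <= 6 else 2

def round_mean_rule_2(mean: float, n: int) -> float:
    """Rule 2 plus the footnote's log-unit convention: k sigfigs in the
    SEM means rounding the mean at the 1/10**k decade."""
    return round(mean, sem_sigfigs_rule_2(n))

# Worked example: mean 0.0623 log10 RP, precision 0.031, n = 4
precision, n = 0.031, 4
sem = precision / math.sqrt(n)       # 0.0155
c = 0.0623 / sem                     # rule 1B's C value, ~4.02 -> fall back to 1A
print(round_mean_rule_1a(0.0623, sem))   # 0.06
print(round_mean_rule_2(0.0623, 4))      # 0.1  (n < 7: one sigfig)
print(round_mean_rule_2(0.0623, 8))      # 0.06 (n >= 7: two sigfigs)
```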
I now want to return to the USP’s %GCV rule.
By solving the GCV equation using a 20% GCV value, the corresponding SEM value is 0.08. However, this 0.08 value can be arrived at by differing combinations of precision and sample size. An assay with 0.08 precision and a sample size of one gives the same SEM value as an assay precision of 0.8 and a sample size of 100 (0.8/√100). By rule 2 in the paper, the number of significant digits is dependent on “n” and these two examples would yield different answers.
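A two-line check makes the point that the SEM hides the (precision, n) combination that produced it:

```python
import math

def sem(precision: float, n: int) -> float:
    """Standard error of the mean from assay precision and sample size."""
    return precision / math.sqrt(n)

# Two very different assays arrive at the same SEM of 0.08...
print(round(sem(0.08, 1), 2), round(sem(0.8, 100), 2))   # 0.08 0.08
# ...yet rule 2 keys the sigfig count to n, so n = 1 and n = 100
# would yield different numbers of significant digits.
```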
Also, using %GCV alone does not relate the SEM to the size of the mean as rule 1B does. A reported value of 0.0649 with an SEM of 0.08 yields a C value of 0.81125, so the first sigfig in the SEM is the 8 in the 1/100 decade per rule 1A. But for a reported value of 0.16, the C value using the same 0.08 SEM is 2, and the number of significant digits is now considered to be 3 using rule 1B.
Given that RP assays generally have n<7, using one sigfig is most conservative but using rule 1A aligns with the USP outcome. Rule 1B allows for an increase in reporting significant figures if the first digit in the C value is sufficiently small but I am not entirely sure it works the same for log transformed values, so I propose sticking to rule 1A as the justification.
Now, working backwards again, we can find the maximum SEM that will still support two significant figures by rule 1A and arrive at SEM = 0.09. From that value, the %GCV calculation returns 23%. This SEM can also be used to compute a percent relative standard deviation (RSD) per a paper by Charles Tan (Tan CY. RSD and other variability measures of the lognormal distribution. Pharmacopeial Forum. 2005;31(2):653–5), yielding a value of 21% RSD, and to calculate a critical fold difference (CFD) per USP <1033>, returning a value of 1.8 (using base 10 rather than natural logs).
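The arithmetic behind these three numbers can be reproduced in a few lines of Python. The %RSD conversion follows the standard lognormal CV formula (my reading of Tan 2005), and the CFD line assumes the base-10 form 10^(1.96·√2·SEM), which is my reading of the USP <1033> calculation:

```python
import math

sem = 0.09  # maximum SEM supporting two sigfigs by rule 1A

# %GCV per USP <1033>: 100 * (10**SEM - 1)
gcv = 100 * (10 ** sem - 1)                          # ~23.0

# %RSD for a lognormal distribution: convert the log10 SD to a
# natural-log SD, then CV = sqrt(exp(sigma^2) - 1)
sigma_ln = sem * math.log(10)
rsd = 100 * math.sqrt(math.exp(sigma_ln ** 2) - 1)   # ~20.9

# Critical fold difference, assuming CFD = 10**(1.96 * sqrt(2) * SEM)
cfd = 10 ** (1.96 * math.sqrt(2) * sem)              # ~1.78

print(round(gcv), round(rsd), round(cfd, 1))         # 23 21 1.8
```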
These latter two values are, respectively, slightly higher than and slightly lower than the oft-quoted but not citable FDA expectations for %RSD (i.e., %CV < 15%) and CFD (< 2). The %GCV limit to support two significant figures also goes up slightly (from 20% to 23%). To get to an SEM that supports three significant figures, the SEM needs to drop to at least 0.009, which is not likely achievable for RP in my experience.
Thus, my recommendation is to use rule 1A as the justification for using up to 23% GCV, 21% RSD or 1.8 CFD as the MU metric limit to support two significant figures in a mean. If the sample size is greater than six, then rule 2 provides extra support for the choice of two significant figures.
I can’t see justifying rule 1B. I’d rather accept fewer sigfigs given that the SEM estimate usually comes from a limited number of experiments and thus is an underestimate of the true value.
What if the number of significant digits identified for an assay result is not enough?
To determine whether an assay and its reportable values are fit for purpose, the number of significant figures required must be established a priori. Therein typically lies the challenge. How certain one needs to be about a value depends on the risk of being wrong or at least being off by a bit.
This comes down to matching the capability of the tool to the need. You can always use a more precise tool than you need but you don’t want to use a less precise tool.
Back to relative potency values: the central question is the risk of being wrong. If potency is too high, it may pose a safety risk; if too low, the product may lack efficacy.
To gauge the thresholds between acceptable and unacceptable product lots, the relationship between potency and outcome needs to be established clinically by relating potency input to patient responses. Implicit in those studies are the fact that the input values are varied (i.e., product with a range of potencies is used) and the assumption that the assigned RP values are exact and thus error free.
All measurements are uncertain so being exact is not possible and it becomes important to make the error as small as is reasonably possible. The better the measuring tool, the fewer the number of replicates needed to arrive at the same SEM.
Again, what precision and replication combination is good enough? The constraints of cost and time make two significant figures a reasonable target during development but if only one sigfig can be supported, the specification should reflect that.
On a slightly tangential point, another motivating factor in choosing the topic of sigfigs is the use of an equivalence model for RP lot release testing. In adopting this approach, the equivalence bounds, the mean, and the associated confidence interval (CI) must all be rounded to the same location in log units once all calculations are complete.
Returning to our worked example above, with a reported RP value of 0.0623, a sample size of n = 4, and an SEM of 0.0155, the 90% CI (0.0623 +/- 0.036477) is 0.025823 to 0.098777. Using 0 +/- 0.15 to arrive at equivalence limits in the 1/100 decade, the CI rounded to two sigfigs (0.03 to 0.10) falls within bounds, and thus the product can be considered to have a relative potency equivalent to 100%.
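The CI arithmetic above can be reproduced as follows; the t critical value for a 90% two-sided interval with 3 degrees of freedom is hardcoded from standard tables to keep the sketch stdlib-only:

```python
import math

mean, sem, n = 0.0623, 0.0155, 4
t_crit = 2.3534                   # t(0.95, df = n - 1 = 3), from standard tables
half_width = t_crit * sem         # ~0.03648

lo, hi = mean - half_width, mean + half_width   # ~0.0258 to ~0.0988

# Round everything to the 1/100 decade before comparing to the bounds
bounds = (-0.15, 0.15)
lo_r, hi_r = round(lo, 2), round(hi, 2)          # 0.03, 0.10
equivalent = bounds[0] <= lo_r and hi_r <= bounds[1]
print(lo_r, hi_r, equivalent)                    # 0.03 0.1 True
```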
Now my final statement: I am trying to envision the topic I might choose if I make it to my first log10 year anniversary for the Quality Quarterly. Luckily, I have nine years to think about that….
The RP ratio values of 0.7, 1.0 and 1.42 can also be expressed in scientific notation as 0.7 × 10^0, 1.0 × 10^0 and 1.42 × 10^0. Taking the log10 of each term in the example limits returns -0.1549 for log10(0.7) and 0 for log10(10^0) for the lower limit; 0 and 0 for the target value; and 0.1523 and 0 for the upper limit (1.42). Using the rules described here (CHM 112 Sig Figs for logs (uri.edu)), adding the terms together (e.g., 0 + (-0.1549) and 0 + 0.1523) and then rounding each to the 1/100 decade, the acceptance limits on the target 0.00 become -0.15 and 0.15. Note that the zero to the left of the decimal is an exact number. It comes from the exponent in the second term of the scientific notation. The other two values to the right of the decimal come from the log of the value.
Now consider possible SEM values 0.1, 0.01 and 0.001 and rule 1A. We use the first non-zero digit to locate the last digit in the mean value. The zeroes after the decimal are "captive" zeroes; that is, they have meaning. The more of them, the lower the MU and the more sigfigs associated with the mean. The term sigfig in log units includes all the zeroes after the decimal: one sigfig is in the 1/10 decade, two in the 1/100 decade, three in the 1/1000 decade, etc. Locating the first non-zero decade in the SEM means finding the first place of uncertainty in a mean in log units, which we call the last significant figure. Restated: in log units, a zero in an SEM indicates exactness (if to the left of the decimal) or certainty (if to the right of the decimal). Thus, the first non-zero digit marks the first place of uncertainty and the place to which a mean value is rounded following conventional rules.