New Technologies in QC


Quality control plays a key role in helping laboratories ensure the quality of patient results and minimize patient risk. While many think of QC as a static area within the field of clinical laboratory sciences, it is in fact dynamic, with advances in QC strategy continually evolving. New technologies in QC are presented at local and national conferences around the globe, but only a fraction of the lab community is fortunate enough to attend these meetings. This article covers a few of the advances being presented, whether in Australia, the Americas, Asia or other regions.

Setting QC Targets

A good estimate of the target mean for a new QC lot can be obtained from a crossover study of about 10 results, with the standard deviation (SD) back-calculated from the previous lot. These targets should be re-evaluated once more than 60 data points have accumulated at each level. For a new lot with no historical data, establishing the initial SD likewise requires more than 60 data points. A quarterly review of the targets is also suggested.
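
As a minimal sketch of this approach in Python, the provisional targets might be computed along these lines; the function names and the handling of the 60-point threshold are illustrative assumptions, not a prescribed implementation:

import statistics

def new_lot_targets(crossover_results, previous_lot_sd):
    """Provisional targets for a new QC lot: the mean comes from a
    small crossover study (~10 results on the new lot), while the SD
    is carried over (back-calculated) from the previous lot."""
    target_mean = statistics.mean(crossover_results)
    target_sd = previous_lot_sd
    return target_mean, target_sd

def reevaluate_targets(accumulated_results):
    """Once more than 60 points exist at a level, recompute both
    targets from the new lot's own data; otherwise keep the
    provisional targets (returns None)."""
    if len(accumulated_results) > 60:
        return (statistics.mean(accumulated_results),
                statistics.stdev(accumulated_results))
    return None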

Repeating a QC result that falls outside 2SD of the mean is very effective. Under the traditional 1:2s QC rule, QC is considered to have failed when a single point falls outside the 2SD limits. Unfortunately, the 1:2s rule has a high false rejection rate. A repeat 1:2s rule has a much lower false rejection rate, comparable to that of a 1:3s rule, and its error detection is better than that of the traditional multi-rules. In fact, for moderate to large out-of-control error conditions, its error detection approaches that of the 1:2s rule itself.
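
The difference in false rejection rates is easy to check by simulation. The following Python sketch (illustrative only) estimates the false rejection rate of both rules on in-control data:

import random

def qc_event(shift_sd=0.0):
    """One QC result in SD units, with an optional out-of-control shift."""
    return random.gauss(shift_sd, 1.0)

def rejects_1_2s(z):
    return abs(z) > 2.0

def rejects_repeat_1_2s(shift_sd=0.0):
    """Flag only if an initial 2SD violation repeats on a second sample."""
    if not rejects_1_2s(qc_event(shift_sd)):
        return False
    return rejects_1_2s(qc_event(shift_sd))

n = 100_000
fr_single = sum(rejects_1_2s(qc_event()) for _ in range(n)) / n
fr_repeat = sum(rejects_repeat_1_2s() for _ in range(n)) / n
print(f"1:2s false rejection rate:        {fr_single:.3%}")  # ~4.6%
print(f"repeat 1:2s false rejection rate: {fr_repeat:.3%}")  # ~0.2%, near 1:3s

The repeat rule's false rejection rate is roughly the square of the single rule's, which is why it lands close to a 1:3s rule.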

When multiple instruments measure the same analyte, applying the same QC rule with a fixed mean and SD across all of them (provided the mean and SD are designed appropriately) lowers the overall risk of reporting unreliable results compared with the traditional approach of basing QC rules on each instrument's running mean and SD. A fixed mean balances the risk of reporting incorrect results across the instruments, while a fixed SD allocates more error detection capability to the poorer-performing instruments. Together, a QC rule with fixed means and SDs for multiple instruments performing the same assays can provide good QC performance from the perspective of the reliability of patient results.
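
A small illustration of the effect, with hypothetical targets and instruments: under running statistics, a wide-SD instrument under-flags the same absolute error, while fixed targets hold it to the same limits.

def z_score(result, mean, sd):
    return (result - mean) / sd

# Hypothetical common peer-group targets, fixed across instruments
FIXED_MEAN, FIXED_SD = 100.0, 2.0

# Each instrument's own running statistics (traditional approach)
instruments = {
    "analyzer_A": {"run_mean": 99.5, "run_sd": 1.5},   # tight performer
    "analyzer_B": {"run_mean": 101.0, "run_sd": 3.0},  # poorer performer
}

result = 104.0
for name, s in instruments.items():
    z_fixed = z_score(result, FIXED_MEAN, FIXED_SD)
    z_running = z_score(result, s["run_mean"], s["run_sd"])
    # Running SD gives analyzer_B z=+1.0 for the same error that the
    # fixed targets score at z=+2.0, so fixed targets catch it sooner.
    print(f"{name}: z(fixed)={z_fixed:+.2f}  z(running)={z_running:+.2f}")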

To recover from a small out-of-control condition, spot-check patient specimens near medical decision limits. Retest all patients at the medical decision limits back to the last successful QC event, and assess the magnitude of the measurement error as the difference between the new result and the old result. If the measurement error is greater than the allowable total error specification, the result should be corrected.
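
A minimal sketch of the check, assuming retested results have been paired with their originals and that the allowable total error (TEa) is expressed in analyte units; the value and names below are hypothetical:

TEA = 6.0  # hypothetical allowable total error, in analyte units

def recheck_decision_limit_patients(paired_results, tea=TEA):
    """Spot-check specimens near medical decision limits after a
    small out-of-control condition. paired_results holds
    (old_result, new_result) tuples for specimens retested back to
    the last successful QC event; returns those needing correction."""
    corrections = []
    for old, new in paired_results:
        measurement_error = new - old
        if abs(measurement_error) > tea:
            corrections.append((old, new, measurement_error))
    return corrections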

To recover from a large out-of-control condition, retest patient samples in batches of 10, working back to the last successful QC event, and assess the magnitude of the measurement errors as the difference between the new results and the old results. If the measurement error is greater than the allowable total error specification, continue retesting in batches of 10 until the measurement error is less than the allowable total error specification or is insignificant. Results with measurement error greater than the total error specification should be corrected.
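
A sketch of the batch retesting loop, assuming a retest function is available; all names here are hypothetical:

def batch_retest(prior_results, retest, tea, batch_size=10):
    """prior_results: (specimen_id, old_result) pairs ordered from
    the failed QC event back to the last successful one.
    retest(specimen_id) returns the repeat result. Returns the
    specimens whose results should be corrected."""
    corrections = []
    for start in range(0, len(prior_results), batch_size):
        batch = prior_results[start:start + batch_size]
        flagged = []
        for specimen_id, old in batch:
            error = retest(specimen_id) - old
            if abs(error) > tea:
                flagged.append((specimen_id, error))
        corrections.extend(flagged)
        if not flagged:  # whole batch within TEa; errors insignificant
            break
    return corrections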

Review Limits

Set review limits for automated chemistry tests that don't have critical values or delta checks. Historical patient data is used to generate histograms for each analyte and determine 95 percent non-parametric intervals. The histograms and intervals are visually inspected to establish review codes such that results outside these historical limits occur no more than 5 percent of the time. The limits are widened for high-volume tests to minimize false rejections. Patient results outside these limits are subject to review. This approach can be readily implemented in a typical autoverification workflow.
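
A minimal sketch of deriving such limits from historical results; the widening factor is an illustrative knob, not a published constant:

import numpy as np

def review_limits(historical_results, coverage=0.95, widen=1.0):
    """Non-parametric review limits from historical patient results.
    Results outside the central `coverage` fraction of the historical
    distribution are flagged for review; widen > 1 relaxes the limits
    for high-volume tests to reduce false rejections."""
    tail = (1.0 - coverage) / 2.0
    lo, hi = np.percentile(historical_results,
                           [100 * tail, 100 * (1 - tail)])
    center = (lo + hi) / 2.0
    half_width = (hi - lo) / 2.0 * widen
    return center - half_width, center + half_width

def needs_review(result, limits):
    lo, hi = limits
    return not (lo <= result <= hi)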

A high sigma process is easy to QC and can accommodate QC rules with lower false rejection rates. A QC rejection limit defined as a fraction (f) of the allowable total error (TEa) is a simple yet effective rule (±f*TEa) for high sigma-metric processes. A rule with rejection limits of ±0.6*TEa has at least 90 percent error detection with a false rejection rate below 1 percent for any process with a sigma-metric above 5.3. The higher the sigma metric, the greater the error detection rate and the lower the false rejection rate for these rules.
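
A sketch with hypothetical assay values, using the usual sigma-metric definition of (TEa − |bias|)/SD:

def sigma_metric(tea, bias, sd):
    """Sigma metric: (TEa - |bias|) / SD, all in the same units."""
    return (tea - abs(bias)) / sd

def tea_fraction_rule(result, target, tea, f=0.6):
    """Reject if the QC result deviates from target by more than f*TEa."""
    return abs(result - target) > f * tea

# Hypothetical assay: TEa = 10 units, bias = 1, SD = 1.5
sigma = sigma_metric(tea=10.0, bias=1.0, sd=1.5)
print(f"sigma metric = {sigma:.1f}")  # 6.0, comfortably above 5.3
print(tea_fraction_rule(result=104.0, target=100.0, tea=10.0))  # False
print(tea_fraction_rule(result=107.0, target=100.0, tea=10.0))  # True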

A low sigma process is difficult to QC and needs a QC rule with high error detection capability. New Z-squared (Z2) QC rules and repeat Z-squared rules have been shown to be more powerful than traditional multi-rules. See Table.

The rejection limits are set to achieve an acceptable false rejection rate and the required error detection. The family of Z2 QC rules provides superior error detection with fewer QC samples for low sigma processes.
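
One plausible formulation of a Z-squared statistic is sketched below: sum the squared z-scores of the QC samples in an event and compare the total against a chi-squared critical value chosen for the desired false rejection rate. This is an assumption for illustration; the exact thresholds used in the presented rules may differ.

from scipy import stats

def z_squared_rejects(qc_results, mean, sd, false_rejection=0.01):
    """Reject if the sum of squared z-scores exceeds the chi-squared
    critical value; in control, the statistic is chi-squared with
    len(qc_results) degrees of freedom."""
    z2 = sum(((x - mean) / sd) ** 2 for x in qc_results)
    critical = stats.chi2.ppf(1.0 - false_rejection, df=len(qc_results))
    return z2 > critical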

Performance Criteria

A recent study showed that analytical imprecision and bias have a considerable effect on misclassifying a patient result. Historical data from patients of known status, diseased and normal, were taken, and decision limits were used to classify each patient as diseased or normal. The bias and imprecision of the assay were varied and the misclassifications recorded: the lower the imprecision and bias, the lower the misclassification rate. The misclassifications were then used to identify acceptable values of bias and imprecision for an assay. The study also showed that false classifications increased when allowable total error limits were widened. Consequently, it was recommended that analytical quality specifications be set at the maximum tolerable bias and imprecision.
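
The study design can be mimicked with a small simulation; all values below are hypothetical, not the study's data:

import random

def misclassification_rate(true_values, decision_limit, bias, sd,
                           n_rep=100):
    """Estimate how often added bias and imprecision flip a patient's
    classification at a decision limit."""
    flips, total = 0, 0
    for true in true_values:
        true_class = true >= decision_limit
        for _ in range(n_rep):
            measured = random.gauss(true + bias, sd)
            flips += (measured >= decision_limit) != true_class
            total += 1
    return flips / total

# Hypothetical glucose-like population near a 126-unit decision limit
patients = [random.gauss(120, 15) for _ in range(1000)]
for bias, sd in [(0, 2), (0, 5), (3, 5)]:
    rate = misclassification_rate(patients, 126.0, bias, sd)
    print(f"bias={bias}, SD={sd}: misclassified {rate:.1%}")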

These new technologies, in conjunction with current recommended best practices, may help laboratories design better quality control strategies that identify failures which could otherwise compromise the integrity of patient results and lead to patient harm. Some of these technologies may be readily implemented by the lab today, while others may require additional work, such as software changes or internal studies, to determine the approach that works best for the laboratory.

About the Author

Lakshmi Kuchipudi is a senior scientist at Bio-Rad Labs and is currently working on her PhD in statistics at Texas A&M.