Vol. 22 • Issue 9 • Page 24
Lot-to-lot variation in immunoassay testing is one of the most common quality control (QC) issues faced by laboratory professionals. If not managed effectively, shifts in QC following a reagent lot change can be a significant source of analytical errors for laboratories.
This is especially true when very low concentrations of an analyte are reported as clinically significant or when a reference range is very narrow. A good example is the measurement of troponin in the diagnosis of myocardial infarction. Demands for more reliable and faster tests to detect heart attacks have driven the diagnostic industry to produce more sensitive assays.
Ultra-sensitive tests may offer a means of earlier diagnosis, but that brings with it issues of a loss in assay specificity and the potential for increased false positive results. In these circumstances the running of controls with values set at the diagnostic decision levels is critical. Unfortunately, many commercial controls do not meet these requirements; hence, the performance of these highly sensitive assays is often not being fully confirmed.
This is further complicated by shifting control values when using different methods, instrumentation and even when different batches of reagent from the same supplier are used. As we will see, this is often down to manufacturers choosing to increase their margins by not using fully human components in the manufacture of their controls.
Normally, laboratories perform a number of checks to ensure the accuracy of patient test results before they are released. In immunoassay testing, laboratories should revalidate each new batch of reagent as it is introduced: QC tests should be run before the current lot is exhausted and again when the new lot is brought into use, with control values reassigned accordingly.
During this revalidation process it is not unusual for the laboratory to notice shifts in its QC results compared with the previous batch of reagent, even though patient sample results remain the same. This discordance can be a significant source of confusion and frustration for laboratories.
So what causes this discordance between QC samples and patient samples when validating new batches of reagent, and what should a laboratory with shifting QC do? Adjust the control range, check for shifts in patient data or simply ignore the QC shift? The solution may lie in none of these actions.
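The lot-change check described above can be sketched in code. This is a minimal illustration, not a prescribed protocol: the 2 SD acceptance limit and the replicate counts are assumptions chosen for the example, and real acceptance criteria should follow the laboratory's own QC policy.

```python
from statistics import mean, stdev

def lot_change_shift(old_results, new_results, limit_sd=2.0):
    """Compare control results run on the incoming reagent lot against
    those from the outgoing lot, expressing the shift in SDs of the
    old-lot results. The 2 SD limit is illustrative, not a rule."""
    old_mean = mean(old_results)
    old_sd = stdev(old_results)
    shift_sd = abs(mean(new_results) - old_mean) / old_sd
    return shift_sd, shift_sd > limit_sd

# Hypothetical replicate control values (same control material, mg/L)
old_lot = [19.2, 19.6, 18.8, 19.4, 19.0]
new_lot = [17.1, 16.8, 17.4]

shift, flagged = lot_change_shift(old_lot, new_lot)
```

A flagged shift alone does not say whether the reagent, the control material, or both are at fault, which is exactly the ambiguity the rest of this article addresses.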
Animal vs. Human
One possible cause of shifting QC results between reagent lots arises when the immunoassay control in use contains constituents of animal origin. Animal derivatives within the control may cause it to behave differently from patient samples. Many immunoassay controls on the market contain animal constituents, which are often added to bulk out control products in an attempt to keep manufacturing costs down. Despite the animal content, these types of controls are often described as "human based."
Fully human QC materials are essential for immunology and immunoassay-based methods where the reagents contain human antibody. Using 100% human controls not only creates a matrix similar to the patient sample, but also reduces antibody cross reactivity and the risk of control values shifting when reagent batch is changed.
The use of controls that contain constituents of animal origin can cause problems with antibody-based tests because of differences in antibody specificity toward non-human components. Different batches of reagent are likely to have been manufactured with different batches of antibody, and the non-human components in the control will react differently with each lot of antibody, resulting in significant shifts in QC results when the reagent batch is changed. Such shifts lead to a frequent need to reassign control values, increasing laboratory workload and costs.
Case in Point
The importance of using 100% human controls to reduce lot-to-lot variation is demonstrated in the following example: A U.S. laboratory had recently changed batches of microalbumin reagent and, after running QC samples, found that the results had shifted and were no longer recovering as expected.
In addition to QC, the laboratory also ran some EQA samples and found that with the first batch of reagent the results for the two samples were 17 mg/L and 19 mg/L, respectively. However, when the new batch was introduced, these values shifted to 5 mg/L and 6 mg/L.
On closer investigation it was identified that the QC materials in use were not entirely human in origin. The laboratory was advised of the anomalies regularly reported between 100% human controls and controls containing animal derivatives. As a result, they agreed to run a series of patient samples alongside fully human controls. The laboratory subsequently confirmed that results for both batches agreed well and that QC results were much closer to the target of 20 mg/L, with the fully human controls recovering around 19 mg/L.
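The scale of the discordance in this case can be made concrete with a quick between-lot bias calculation on the EQA figures quoted above. This is a simple arithmetic sketch of the article's own numbers; the bias formula used (relative to the original lot) is one common convention, not the laboratory's documented method.

```python
def percent_bias(new, old):
    """Between-lot bias of a result, relative to the original lot."""
    return 100.0 * (new - old) / old

# EQA results from the case described above (mg/L)
first_batch  = [17.0, 19.0]  # results on the original reagent batch
second_batch = [5.0, 6.0]    # the same samples on the new batch

biases = [percent_bias(n, o) for n, o in zip(second_batch, first_batch)]
# roughly -71% and -68%: far beyond any plausible acceptance limit,
# yet patient results and fully human controls agreed between batches
```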
With budgetary constraints affecting most laboratories, many are seeking cheaper control products. However, the need to re-run QC tests when anomalies occur shows that opting for cheaper, lower-quality QC products can be a false economy.
Control manufacturers must keep pace with the demands of the diagnostic industry. Demands by the marketplace grow daily for more sensitive tests that are accurate, stable, cost-effective and have an acceptable shelf life. New tests with ever-increasing sensitivities may offer the laboratory clinician more tools in the complex process of disease diagnosis, but laboratory professionals need to have confidence in these tests.
David Martin is senior scientist – Manufacturing, Randox Laboratories.