Closing Quality Gaps

We laboratory professionals believe automated analyzers make fewer mistakes than flesh-and-blood techs. Instruments don’t get distracted, develop workarounds or deviate from a procedure. In short, they don’t make human errors. Fortunately, similar engineering concepts can be applied to manual methods to anticipate and reduce human error, closing quality gaps.

From Small to Titanic Errors
Psychologist James Reason, PhD, describes error as “circumstances in which planned actions fail to achieve the desired outcome.”1 Human error is frequently attributed to an action that deviates from expected behavior. According to one website, we make three to seven errors per hour, and as many as eleven per hour when stressed.2 Another site cites a rate of 10 to 30 errors per hundred opportunities, falling to 5 to 10 per hundred with good quality practices in place.3

In a normal workday, a technologist can make many errors, some of which may not adversely affect patient care, such as dialing the wrong telephone extension, misreading an order, confusing similar patient names, skipping a procedure step, etc. Good technologists reflect upon a manual process and apply a workaround to catch these errors in real time or simply start over. In many cases, we rely on procedural controls to ensure reagents were properly added (control line on a rapid immunoassay device, etc.) or procedure steps followed (adding check cells to anti-IgG reagent, etc.).

But small errors can be deadly. Misidentifying a patient or mislabeling a blood bank specimen, for example, can ultimately cause a fatal hemolytic reaction if the patient is transfused with ABO-incompatible units. A root cause analysis might identify poor lighting at bedside, untrained personnel, an improperly applied wristband, a lost history card and other system factors.

Such errors can cause an event cascade, such as the rapid sinking of the Titanic in 1912 after striking an iceberg. The ship was sailing too fast, iceberg warnings were ignored, the ship’s construction caused it to fill too rapidly and “unzip” while sinking, there were not enough lifeboats, etc.4 These might be attributed to human error and, in retrospect, considered avoidable. But in the laboratory or on the Titanic, by the time a fatal outcome occurs, it’s too late.

Process Controls
Human error can be mitigated by process control. Process controls are used in engineering to ensure outputs are predictable, stable and operating within known limits of variation. Process controls monitor an environment and make changes based on set points given by the user. For example, an automated chemistry analyzer monitors the temperature of onboard reagents and turns on or off a compressor as needed. Engineered controls ensure consistent sampling, pipetting, dispensing, timing and reading of complex reactions.

Process control systems may consist of open (manual) or closed (automated) loops. A closed loop control contains a feedback mechanism to make sure an output is reached, allowing the system to self-correct. Closed loops are largely unaffected by external interference. Open loop controls are generally less complex, involving no feedback. Output is not controlled and is dependent on steps being performed correctly. Open loop systems are, therefore, more subject to external interference and less reliable than closed loop systems.5
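The feedback idea behind a closed loop can be sketched in a few lines of code. This is a hypothetical illustration modeled on the reagent-compartment example above; the function and action names are invented for this sketch and do not come from any real analyzer software.

```python
def closed_loop_step(temperature_c: float, set_point_c: float,
                     tolerance_c: float = 0.5) -> str:
    """Decide the compressor action needed to hold a set point.

    This is the feedback step of a closed loop: the system measures
    its own output (the compartment temperature) and self-corrects.
    """
    if temperature_c > set_point_c + tolerance_c:
        return "compressor_on"    # too warm: start cooling
    if temperature_c < set_point_c - tolerance_c:
        return "compressor_off"   # too cold: stop cooling
    return "hold"                 # within limits: no change needed

# Because each cycle re-measures the temperature, drift is corrected
# automatically. An open loop, by contrast, would run the compressor on
# a fixed schedule and never check the actual temperature, so any
# disturbance (a door left open, a failing seal) would go uncorrected.
```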

For example, ABO tube testing, like all manual testing, is open loop. The reliability of the output depends on the steps being performed correctly and in the correct order. If a tech is asked, “How do you know that ABO is correct?” responses will vary: “The front and back type matched,” “I followed the procedure” or “I rechecked the type with a quick front type on the CBC tube.” Individual habits to ensure steps are followed can vary.

By spelling out process controls in manual testing, subtle errors can be avoided and the process improved. Let’s consider a practical example.

A Practical Example: ABO Testing
Manual process controls break a system down into individual steps whose outcomes affect the reliability of the endpoint if they don’t perform as expected, as illustrated in Table 1. For example, if a reagent expiration date is before today’s date, the reagent has expired and testing stops. While much of this may be implied in a procedure (e.g., “Do not use beyond the expiration date on the vial”) or intuitive as part of training, tech habits of when to add reagents and what to look for can vary, causing subtle errors.
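The expiration check described above is a simple stop/go rule and can be expressed as one: if the expiration date is before today, testing stops. The sketch below is a hypothetical illustration of that single process control; the function name and return convention are invented for this example.

```python
from datetime import date

def reagent_is_usable(expiration: date, today: date) -> bool:
    """One manual process control: return True if the reagent may be
    used, False if it has expired and testing must stop."""
    # A reagent expiring today is still within its labeled dating;
    # only a date before today fails the check.
    return expiration >= today
```

In practice, each row of a table like Table 1 becomes a check of this kind, with a defined expected outcome and a defined action (stop, repeat, document) when the outcome is not met.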

Commercial reagent cell suspensions have a similar appearance, for example, and A1 or B cells can be switched with each other or with antibody screening cells. Once cells are dropped into tubes containing serum, there are no visual cues apart from the reactions themselves to suggest they were added correctly. Human error teaches us that “mistakes happen” to the best and brightest of technologists.
