Vol. 16 • Issue 11 • Page 66
As immunohistochemical analysis has evolved from an ancillary diagnostic methodology into one that helps determine treatment approaches, the need for standardization and quality control has become critical.
The field of immunohistochemistry (IHC) has undergone a remarkable evolution over the past 30 years. In the early days of IHC, pathologists looked mostly for reagents that would help establish a specific diagnosis based on the expression of a particular marker or set of markers. Now, in addition to an ever-expanding menu of diagnostic markers, they are looking at antibodies that can provide prognostic and therapeutic information. An increasing array of therapy-related and prognostic markers, such as estrogen receptor, progesterone receptor and HER2/neu, continues to emerge.
“The entire field is moving in the direction of prognostic and potential therapeutic markers,” says Ronald A. DeLellis, MD, pathologist-in-chief, Lifespan Academic Medical Center, Rhode Island Hospital.
The greatest impact on IHC has come from automation, which has helped eliminate some of the vagaries of performing assays manually.
“Clearly with automation we have become more efficient and, in a way, have addressed the issues of standardization at least within individual institutions from run to run,” says Victor E. Reuter, MD, vice-chair, Department of Pathology, Memorial Sloan-Kettering Cancer Center, New York.
Detection systems have changed as well. Newer detection and amplification systems are more robust, allowing identification of lower concentrations of antigen with less background staining.
The evolution of IHC also can be seen in the numbers and types of antibodies being used.
“Numerous antibodies are now looking at proteins that might be relevant at the molecular level in any given tumor,” says Dr. Reuter. “The types of antibodies also have changed; they are not just mouse monoclonals.”
Though Dr. Reuter’s point is made from a research standpoint, it will ultimately transfer to the clinical arena.
“Some of those antibodies are more specific and can be used in lower concentrations,” he explains. “Although many were developed as research reagents, if you are able to look at a molecule that is relevant to any given tumor, transporting that antibody to the human and being able to use it clinically would be very important.”
Advancing technologies in molecular diagnostics, informatics and digital imaging are allowing scientists to compare which strategies are most helpful, with the ultimate goal of delivering the best information at the best price, with the most rapid turnaround time, to patients across the entire spectrum of hospitals and clinics, says Dr. Reuter.
“If, for example, we say in a very controlled fashion that FISH is better than IHC for HER2, we can determine if that is necessarily true at Memorial Sloan-Kettering, MD Anderson, Mayo Clinic, Johns Hopkins or other hospitals,” he continues. “And does it put a cost constraint that is formidable for some institutions or patients? Yes, it affects IHC, but it gives us different ways of addressing the same issue and we have to make decisions based on efficiency, cost, impact on the patient and quality of information.”
Molecular technologies in particular have had a significant impact on IHC with the development of gene expression profiling using microarray chips. This has allowed investigators to measure messenger RNA levels across thousands of genes simultaneously.
“It has been possible, on the basis of looking for those patterns of gene expression, to begin to look at the proteins that are the products of the genes,” says Dr. DeLellis. “Many investigators have begun to make antibodies to those proteins that are overexpressed in different kinds of tumors. There has been a great deal of synergy between the molecular biologist and pathologist, who is now able to localize the overexpressed material in samples of tumor or other processes.”
The antibodies are developed not only by commercial vendors, but also by individual researchers. The goal is not only to define the basic nature of a particular neoplasm—which has been done for the past 25 years using antibodies against markers such as keratins and lymphoid antigens—but also to provide a more sophisticated approach to subtyping neoplasms into prognostically relevant categories, says Dr. DeLellis.
Need for Standardization
There is a critical need for standardization of both pre-analytical and analytical phases of testing to ensure that results derived from different laboratories are directly comparable.
At the pre-analytical stage, factors including fixation time and types of fixatives are becoming increasingly important since pathologists are not only looking at whether a reaction is positive or negative, but also trying to introduce some quantitation into their analyses.
According to Dr. DeLellis, approaches to reduce intra- and interlaboratory variation in the performance of immunohistochemical assays are relatively limited at this point. In some instances, guidelines do exist.
“For example, there are published guidelines about the length of time of fixation for estrogen, progesterone and HER2/neu analysis—but pathologists have not really begun to establish guidelines for other kinds of products that might be significant for treatment or prognosis,” he asserts.
“It has taken quite a long time for even the kinds of guidelines we have for steroid receptors and HER2/neu to develop,” Dr. DeLellis continues. “But as reagents become available that can answer some of our questions, then particular pre-analytical guidelines will be developed for optimizing the preservation of what we are looking for in tissue sections.”
The need for increasing standardization becomes even more critical when taking into account the trend toward personalized medicine.
“As we enter an era where selection of therapy is being based on the results of these particular assays, standardization becomes much more relevant,” says Dr. Reuter. “Just as the FDA approves a new drug or a new assay in the clinical laboratory, we should do everything possible to achieve standardization.”
Efforts are underway to develop systems for ensuring high-quality samples, but the task is challenging with so many different prognostic markers available, because pre-analytical factors would most likely differ for each one. One possibility would be to include with each clinical sample a cell line in which the absolute level of expression of a particular marker is known. The pathologist would then embed that cell line together with the unknown tissue sample, providing a built-in control that is fixed and processed under identical conditions. Though the end results make this an attractive approach to solving issues of pre-analytical variability, the method is a bit cumbersome.
“Every time someone performed a biopsy looking for a particular marker or set of products, he would also need to have a cell line that would be producing the same marker to be compared with the unknown sample,” explains Dr. DeLellis. “The challenge of this approach is that it would increase turnaround times and add a great deal of expense.”
Another approach is to process known concentrations of particular antigens together with the clinical samples. This methodology may be cumbersome as well, but it is something that several investigators are looking into, says Dr. DeLellis.
As time goes by, data will be available to determine precisely how pre-analytical variables are impacting results, says Dr. Reuter.
“For example, if 30 percent of the volume performed in our IHC laboratory involves blocks coming from other institutions where we can’t control those variables, we have to know if, in fact, that is going to create a bias in our results,” he explains. “Is there a difference whether tissue is in formalin for 12 hours versus 24 hours versus 36 hours for any given antibody? Different antigens are going to respond differently to all of these variables; it is not that you can have across-the-board generalizations, but you are going to have to tailor these for the individual antigen and antibody.”
Many experts contend that the staining technologies are well established, though some methods may deliver a greater degree of sensitivity.
“The heat-induced epitope retrieval methodology certainly is very solid, but what really needs to be done is standardization in the pre-analytical testing phase whereby you can have an unknown sample and be able to predict with a great deal of assurance the levels of expression of prognostic markers or potential therapeutic markers,” Dr. DeLellis tells ADVANCE. “But the only way you can do that is if you compare it quite carefully with some known sample.”
A Regulatory Hand
With the growing need for standardization, many anticipate that the regulatory agencies are going to have a greater influence in approving pre-analytical and analytical systems used for IHC.
“There will be regulatory oversight; not only at the laboratory in making sure that we adhere to pre-analytical and analytical guidelines, but also in approving the use of different antibodies, FISH techniques and new machinery and hardware to be able to perform these kinds of assays,” predicts Dr. Reuter. “It will be mostly through regulatory control—or at least the fear of regulatory control—that we are going to see great improvement in these areas.”
But, of course, it will come at a cost.
Todd J. Smith is associate editor.