
No ifs, ands, or buts on IHC assay validation


Karen Titus

March 2014—Like Gypsy Rose Lee, tests and their true nature reveal themselves bit by bit. For immunohistochemistry, this unhurried disclosure has meant evolving ideas of whether these tests must indeed be validated and, if so, then how, exactly. The discussion recently culminated in a new CAP guideline for laboratories.

“Principles of Analytic Validation of Immunohistochemical Assays” was scheduled to be published March 19 online ahead of print in Archives of Pathology & Laboratory Medicine (http://tinyurl.com/ihcguideline). It’s a pioneering effort to address an area overlooked in anatomic pathology.

Dr. Patrick Fitzgibbons, chair of the group that wrote the new guideline, says the feedback helped the group reconsider the discretion lab directors have. “We’re basically stressing, more than we had initially, that the lab director has to be responsible for making some of the decisions,” he says.

While laboratories have known for years that assays need to be validated before being put into clinical service—it’s part of CLIA, after all—not everyone has appreciated that tests that essentially resemble special stains need to be scrutinized, too.

“Pathologists have learned that validation of immunohistochemical assays is a little bit more important than they might have thought five or six years ago,” says Paul E. Swanson, MD, a member of the workgroup that produced the guideline and professor of pathology, University of Washington School of Medicine, Seattle. Some laboratories may have figured it out on their own by being attentive to the model proposed in the HER2 guidelines and following through with ER and PR testing, says Dr. Swanson, who was formerly the director of anatomic pathology at UW. “But they weren’t entirely sure whether it applied, for example, to a structural protein that defined a pattern of differentiation rather than a possible target for therapy.”

Patti Loykasek, HTL(ASCP), QIHC(ASCP), another member of the workgroup, says that she does at least one CAP inspection a year and that she too sees a gap. “I think labs have gotten a little better about knowing they need to validate tests, but I think it’s done a little haphazardly. The results aren’t always well-documented, and final data collation and sign-off by the medical director are often missing,” says Loykasek, test development technologist at RML (Regional Medical Laboratory), Tulsa, Okla.


While the CAP checklists ask if antibodies are validated, says Loykasek, they give no specific parameters for how to validate, leaving much open to interpretation. “Most people are going to do the least amount of work possible, because they’re busy,” she says. “We’re always asking them to do more work with fewer people.”

“There was definitely a need for a set of guidelines,” she adds.

Three IHC tests have already run the validation gauntlet and are the subject of their own guidelines: HER2, ER, and PgR. (These three markers are thus not covered in this most recent document.) But some pathologists had long suspected that apart from this trio, IHC validation was a hazy concept for many labs.

Hunches gave way to proof with a recent study, says Patrick Fitzgibbons, MD, who chaired the workgroup. It’s the fourth reference in the guideline (Hardy LB, et al. Arch Pathol Lab Med. 2013; 137[1]:19–25), which detailed a CAP survey looking at IHC validation procedures and practices in 727 laboratories. (Dr. Fitzgibbons and another workgroup member, Jeffrey D. Goldsmith, MD, were coauthors.)

“What we learned,” says Dr. Fitzgibbons, who also chairs the CAP Cancer Biomarker Reporting Committee, “is that there really is not a consistent mechanism for validating immunohistochemistry assays.”

As the workgroup searched the literature, further inconsistencies became apparent. Some papers recommended validation sets of 20 positive cases and 20 negative cases. Others suggested more cases, and still others, fewer.

Beyond that basic question of how many lay others. Should all assays be validated the same way? Or were there differences?

“Let’s say you use a different fixative, or let’s say you decalcify a specimen, because it’s bone tissue,” Dr. Fitzgibbons says. “Does that affect the validation?”

What about antigens that are extremely difficult to find, so-called rare antigens? If a validation set requires 40 cases, “There may not be a lab in the country that can get 40 of these, if they’re that rare. What do you tell labs in that setting? How do you validate assays for rare infectious organisms?” asks Dr. Fitzgibbons, a pathologist at St. Jude Medical Center, Fullerton, Calif.

The survey also showed that many laboratories were unsure when assay revalidation is needed, says Dr. Fitzgibbons. What requires full revalidation (equivalent to initial assay validation), and what requires only confirmation that the assay is working as intended?

These were among the issues facing the workgroup as they put together the guideline.

The guideline’s 14 recommendations should give laboratories a solid push out of the starting blocks.

The first recommendation sets matters straight: Laboratories must validate all immunohistochemical tests before placing them into clinical service. Per the guideline, means include (but aren’t limited to):

  1. Correlating the new test’s results with the morphology and expected results;
  2. Comparing the new test’s results with the results of prior testing of the same tissues with a validated assay in the same laboratory;
  3. Comparing the new test’s results with the results of testing the same tissue validation set in another laboratory using a validated assay;
  4. Comparing the new test’s results with previously validated nonimmunohistochemical tests; or
  5. Testing previously graded tissue challenges from a formal proficiency testing program (if available) and comparing the results with the graded response.

Beyond that declaration, the guideline’s authors highlight some other critical areas:

  • For initial validation of assays used clinically (apart from HER2, ER, and PgR), labs should achieve at least 90 percent overall concordance between the new test and the comparator test or expected results (a worked sketch of this concordance check follows the list). “It could be another IHC test done at a different laboratory, or another marker or another methodology, like in situ hybridization,” says Dr. Fitzgibbons. The most common scenario would be a lab using a new antibody for a marker it has offered in the past, he says. “Because antibodies change all the time. If you have a completely new antibody clone, you should revalidate it.”
    “We also allow labs to use just expected results,” he continues. “Because sometimes you don’t have another test, but from the literature you know what the results ought to be.”
  • For predictive marker assays (again, with the exception of HER2, ER, and PgR), labs should test a minimum of 20 positive cases and 20 negative cases. If the lab’s medical director decides that a validation set of fewer than 40 cases is sufficient, he or she will need to document the rationale.
  • For nonpredictive factor assays, the guideline recommends a smaller validation set: a minimum of 10 positives and 10 negatives. Again, lab directors who decide that a smaller validation set is appropriate need to document their reasons.
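The arithmetic behind the 90 percent figure is simple enough to sketch. The example below is purely illustrative and is not taken from the guideline or its supplemental material; the case counts and variable names are invented. It tallies agreement between a new assay and a comparator across a hypothetical 20-positive/20-negative validation set and checks the result against the 90 percent threshold.

```python
# Hypothetical illustration of the 90 percent overall-concordance check.
# Each case pairs the new assay's result with the comparator's result
# (True = positive staining). These counts are invented for illustration.
validation_set = ([(True, True)] * 19       # concordant positives
                  + [(True, False)] * 1     # new test positive, comparator negative
                  + [(False, False)] * 18   # concordant negatives
                  + [(False, True)] * 2)    # new test negative, comparator positive

concordant = sum(1 for new, comparator in validation_set if new == comparator)
overall_concordance = concordant / len(validation_set)

print(f"{concordant}/{len(validation_set)} concordant "
      f"({100 * overall_concordance:.1f}%)")           # 37/40 concordant (92.5%)
if overall_concordance >= 0.90:
    print("Meets the 90% overall concordance recommendation")
else:
    print("Below 90% -- investigate before placing the assay in clinical service")
```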

In essence, there are two levels of validation. Why would one test require less stringent validation than another?

Dr. Swanson traces the answer back to the early practice of immunohistochemistry. Before the advent of predictive and prognostic markers, IHC focused on giving information that helped resolve a reasoned, histologic diagnosis, a role it largely retains today. When tests are an ancillary element of analysis, he says, “they are, I think, quite reasonably seen as less risk to a patient.” He points to a similar line of reasoning at the FDA, which considers risk to patients when determining approvals and clearances of IHC reagents and other medical devices. “With that difference in mind, we felt a less stringent approach to a diagnostic validation was appropriate,” he says.

While the workgroup was willing to recommend a smaller validation set for nonpredictive markers, Dr. Swanson makes clear that these assays nonetheless require a higher standard than labs might have previously thought necessary. “You might say, ‘Well, let’s do three cases, because I know from my experience these cases should be positive, and maybe a couple negatives—and everything will be fine.’ But that’s not true,” Dr. Swanson says. “Anybody who does laboratory medicine knows that you can’t establish a reference range or an expected outcome for a given test unless you’ve looked at enough samples to achieve a credible level of reproducibility.”

The committee thus wanted to provide a guideline that had, as Dr. Swanson puts it, “statistical meat to it” but could still be attained by the typical laboratory.

Where did those numbers come from? “Sometimes people think these numbers are pulled out of the air. I know I did when I read previous guidelines,” says Dr. Goldsmith, director of the surgical pathology laboratory at Beth Israel Deaconess Medical Center, Boston, and assistant professor of pathology, Harvard Medical School. “So it’s worth mentioning that we deliberated for a long time, walking the fine line between doing the right thing and not making it overly onerous on the labs. At the end of the day, the number that we came up with for a typical validation set was supported by statistics,” which are provided in the guideline’s supplemental material. “It would be better to have 50 cases,” he continues, “but everyone knows if we had 50 cases in the validation set, no one would ever do it.”
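One way to get a feel for the statistics Dr. Goldsmith alludes to, without reproducing the workgroup's actual analysis, is to look at an exact binomial lower confidence bound on observed concordance: a perfectly concordant 40-case set supports a much stronger claim than a perfectly concordant handful. The sketch below assumes a one-sided Clopper-Pearson bound, which is an illustrative choice and not necessarily the method behind the guideline's supplemental material.

```python
# Illustrative only: exact (Clopper-Pearson) one-sided lower confidence bound
# on the true concordance rate, to show why very small validation sets say
# little. This is not the workgroup's calculation.
from scipy.stats import beta

def concordance_lower_bound(concordant: int, total: int, confidence: float = 0.95) -> float:
    """One-sided exact lower confidence bound on the true concordance rate."""
    if concordant == 0:
        return 0.0
    return beta.ppf(1 - confidence, concordant, total - concordant + 1)

# 40/40 concordant -> lower 95% bound ~0.93; 20/20 -> ~0.86; 3/3 -> ~0.37.
# Three perfect cases are still compatible with true concordance well below 90%.
for x, n in [(40, 40), (20, 20), (3, 3)]:
    print(f"{x}/{n} concordant: lower 95% bound = {concordance_lower_bound(x, n):.2f}")
```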


For those who find it difficult to obtain the required number of cases for validation sets, it’s possible, says Dr. Swanson, that three or four smaller labs could join efforts, sharing information and material for, say, a rare antigen. “It’s a little extra work; in my mind, it’s not a lot of extra work,” he says. And while the committee members discussed the importance of having validation tissues handled, processed, fixed, and stained in the same way clinical materials are, “We also know that that’s not always practical even for large reference labs, because they are often working with materials that were not processed in their lab.”

The guideline also makes clear, says Dr. Swanson, that labs will sometimes use smaller validation sets. “It’s not that labs can throw the recommendations out the window or neglect them,” says Dr. Swanson, “but using the 20-case validation set could be altered to fit certain clinical circumstances at the laboratory director’s discretion.”

With this approach, however, 10 different medical directors might tackle the problem of a rare antigen, for example, in 10 different ways. “Can you be sure that the quality of that stain in those 10 laboratories is comparable?” Dr. Swanson asks. “The answer is no.” That’s why the guideline requires directors to document their alternative validation and objectively demonstrate its validity. While not stated in the guideline, the implication is clear, says Dr. Swanson: “If you can’t establish validity of a test, you shouldn’t do the test.”

  • The recommendations on revalidation address three possible changes: