
Rx for optimizing rapid flu test performance


Anne Paxton

January 2013—With the arrival of another flu season—this one early and intense—rapid influenza diagnostic tests (RIDTs) are once again occupying many laboratory directors’ minds. But although laboratories have found RIDTs useful for the last decade, evaluations of the test kits’ performance have been limited to manufacturers’ product inserts and a few small-scale studies. Like swing-shift and day-shift workers in the same hospital, RIDTs have not been brought together for a side-by-side assessment.

A new study sponsored by the Centers for Disease Control and Prevention and the Biomedical Advanced Research and Development Authority, an agency of the Department of Health and Human Services, fills that gap. Titled “Evaluation of 11 commercially available rapid influenza diagnostic tests—United States, 2011–2012” (MMWR, Nov. 2, 2012), the study is the first to measure the performance of the commonly used RIDTs against a standardized set of representative influenza viruses. “Clinical laboratories now have their first comprehensive evaluation of the majority of the commercially available tests,” says study co-author Daniel B. Jernigan, MD, MPH, deputy director of the CDC’s Influenza Division.

In the study, researchers at the Medical College of Wisconsin tested the performance of test kits made by Thermo Fisher Scientific, Becton Dickinson, Meridian Bioscience, Inverness Medical, Response Biomedical, SA Scientific, Quidel, Princeton BioMeditech, and Sekisui Diagnostics. For each of the 11 FDA-cleared RIDTs commercially available for the 2010–11 influenza season, the researchers measured the number of positive samples in progressively higher dilutions of 23 influenza viruses—16 influenza A and seven influenza B. The study used identical viral concentrations for each kit tested and a large collection of recent influenza viruses to allow for a more finely detailed characterization of test performance.

The evaluation of RIDTs was not intended to brand any particular test as good or bad, Dr. Jernigan emphasizes. Rather, the study is part of a three-pronged CDC strategy to improve rapid tests by: 1) working with the FDA and manufacturers to make the tests better, 2) working with organizations to improve testing practices, and 3) getting better information about rapid flu testing to clinicians, including partnering with the Joint Commission to develop a Web-based continuing medical education series. “This study gives us baselines that show how the tests are performing using a standard set of conditions,” Dr. Jernigan says. “And we can use the study design to continue to evaluate RIDTs available in the U.S.”

Similar studies have evaluated rapid tests in the past; however, those studies would usually compare just two or three tests at a time, not everything that was out there, says lead study author Eric Beck, PhD, now a senior technologist in molecular diagnostics at Dynacare Laboratories. Dr. Beck was with the Midwest Respiratory Virus Program of the Department of Pediatrics at Medical College of Wisconsin when he helped lead the study. “In all honesty,” Dr. Beck says, “a lot of the results that you see on sensitivity come straight out of the manufacturer’s product insert, so there are not a lot of studies out there that really compare everything kind of equally.” Moreover, many sites use different flu strains, so results can’t always be correlated from one study to the next.

The swine flu (H1N1) pandemic in 2009 sparked new attention to the quality of rapid antigen testing, Dr. Beck says. At the time, clinicians, researchers, and regulators were concerned about whether RIDTs could detect the newly emerging virus. “When the H1N1 strain first hit, the assumption was that the rapid tests were not picking it up as readily as they did the previous seasonal flu strains.” That was one of the central purposes of the study, he adds: to see if in fact the rapid tests work better or worse for the strains currently out there, especially with the 2009 H1N1 strains supplanting the seasonal strains that had been prevalent.

To answer that question, the researchers used the same samples for all the tests. “A lot of the strains we’ve used in the past are similar, but depending on where you grow them or who propagated that virus, you can get different results. In one lab, your sensitivity can look very good, whereas if a different lab propagates the virus based on how they prepare the virus stock, the sensitivity may appear lower. The point with this study was to use the same virus stock for all of the tests to make them comparable,” Dr. Beck says.

The researchers tested roughly six viruses of each subtype, then broke down the results by how many of the subtypes tested were positive at a certain concentration. “We were encouraged that at higher concentrations, the tests still picked up the currently circulating viruses for the most part. But some didn’t do that well no matter what the concentration was or what the virus was,” Dr. Jernigan says.
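The tallying approach described above—counting, for each kit, how many viruses remain detectable at each dilution of a shared stock—can be sketched roughly as follows. The kit names, dilution levels, and detection results here are invented for illustration; they are not data from the study.

```python
# Hypothetical sketch of the study's tallying approach: for each test kit,
# count how many viruses were still detected at each dilution of the shared
# virus stock. All names and values below are illustrative, not study data.

# results[kit][dilution] = one True/False detection per virus tested
results = {
    "Kit A": {"1:10": [True, True, True], "1:100": [True, True, False]},
    "Kit B": {"1:10": [True, True, False], "1:100": [True, False, False]},
}

def positives_per_dilution(kit_results):
    """Return {dilution: number of viruses detected} for one kit."""
    return {dilution: sum(detections)
            for dilution, detections in kit_results.items()}

for kit, kit_results in results.items():
    print(kit, positives_per_dilution(kit_results))

# A kit that stays positive at higher dilutions (i.e. lower virus
# concentrations) is the more analytically sensitive one.
```

Because every kit sees the same characterized stock at the same dilutions, the per-dilution counts are directly comparable across kits—which is the point Dr. Beck makes about using a single virus stock for all tests.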

In general, the more positives found for any particular test, the more sensitive the test proved to be, and that’s the most critical criterion the researchers were studying in this evaluation, says Dr. Beck. “The other thing was we wanted to see across different viruses or different subtypes that the test would have similar reactivity. For any given test, you want to see that it is capable of detecting seasonal influenza, the 2009 flu, and others. Some are definitely more sensitive for influenza A than for influenza B. So if you have a season where influenza B is one of the predominant strains that is circulating, then that particular test is less effective.”

A laboratory considering which assay to use might want to look for consistency between results with different subtypes in a given test throughout multiple seasons, Dr. Beck says. “Sensitivity does have some bearing on how well the tests perform. However, the purpose of the study was not to make any claims as to which tests are best or to label them that way. It was more intended to show which tests are consistent and to come up with a way to evaluate tests in the future as to their consistency across different subtypes of influenza, to make sure any new tests are at least as good as what’s currently being offered.” It’s important to keep in mind, he adds, that this was all done analytically. “It doesn’t necessarily reflect what the performance of the tests would be in the clinic.”

Doctors accustomed to performing rapid HIV or strep tests often assume that all such tests perform similarly, but with rapid antigen testing for influenza viruses, that’s not true. There is great variability, Dr. Jernigan points out. “There’s variability from one test to another; if you’re using a rapid test you may actually get different sensitivity than with another brand. In addition, some tests are better at detecting influenza A, while some are better at detecting influenza B.”

Most of the differences in manufacturing methods are differences in antibodies, Dr. Jernigan adds. “Each has a slightly different way of preparing specimens, and what we wanted to show in the evaluation is how these current tests are going to work with currently circulating flu viruses. A lot of the tests were designed in the late 1980s and 1990s using influenza antibodies from very old viruses, some from the 1930s and some of them from the 1960s.” These viruses remain in a lot of the research reserves and manufacturers’ reserves as kind of the “workhorse” viruses, he explains. “They have been extremely well characterized, they’re very well known, and they grow well. So they are not necessarily a bad choice.”

But it’s been hard for researchers to do a comprehensive evaluation under standardized conditions, Dr. Jernigan says, because a person and a swab cannot be replicated multiple times. “The problem is that you can’t take a person and swab them 70 times in order to evaluate all the tests equally. If you swab a person one time you’ll get a certain amount of virus, and the second time you’ll get a different amount of virus. For that reason you have to have a virus stock fully characterized in terms of the concentrations and dilutions of virus.” In this study, one mL of virus stock for one of the tests was exactly the same as another mL of virus stock for another of the tests, thus making comparisons possible. The downside of that approach, however, is that the comparison may not reflect actual performance in clinical settings. In part, that’s because there are other things in respiratory secretions in addition to what’s in the virus stock, such as proteins and cells, Dr. Jernigan points out.

During the H1N1 pandemic, laboratories’ use of the rapid antigen tests increased considerably, Dr. Jernigan says. “That was not a bad thing. It just meant that a lot more doctors became aware of the tests and started using them and that use is continuing to grow.” He believes manufacturers are continuing to make the tests better, and notes that several recent improvements such as the automatic readers were not available when the CDC began the study.

Demand for the tests could become high, as the CDC has already indicated it thinks the 2012–13 flu season could be severe. “I’m sure it will be worse than last year,” says Dr. Beck. “We didn’t really have a major flu season last year. We all got off pretty light, and Milwaukee wasn’t alone in that.” He suspects that the 2009 pandemic may also have prompted a lot of people who hadn’t had a flu shot for 10 years to get immunized, and that may have helped tame later outbreaks. “But generally you assume viruses are going to mutate, and there’s always something new coming around the corner. Once it gets to the point where a virus has mutated enough that people aren’t immune to it anymore, we’ll see what we saw in 2009. You don’t necessarily think every three years there will be a pandemic, but the thought is always there.”

However, the need for the rapid tests has to be balanced with caution in using their results, the CDC has emphasized. “One conclusion was these rapid flu tests should be used cautiously. The specificity of the tests is pretty high, so if you get a positive it should be a true positive and you can probably take it to heart—especially if you are using the information for infection control purposes or trying to figure out whether to increase treatment for a child or do prophylaxis for a grandmother with multiple underlying conditions—things like that,” Dr. Jernigan says. But the sensitivity varies, which means some of the tests may return false-negatives, especially for specimens collected late in the illness or collected poorly. “If a pregnant woman comes in with a bad respiratory illness and the rapid test is negative, you do not want the doctor to treat the patient based on that result. We saw clearly with H1N1 that pregnant women were having illness and even death from flu. So for certain patients, you need to use caution.”

Expert opinions vary, but the gold standard for influenza testing is technically still culture, Dr. Beck says. “They have rapid cultures that they can do in a day or two rather than five days, so culture is much faster than it used to be. But still, it is much slower than PCR, or rapid antigen testing for that matter. And most clinicians will say they would order a PCR assay before they’d order a culture.” PCR testing has made great progress in speed and practicality, Dr. Beck notes. “It’s improved quite a bit and a lot of that has to do with greater automation. The new platforms that manufacturers are producing make it possible to run a small number of tests at a time in a cost-effective manner. With some of the more conventional PCR tests, it isn’t that they take all that long necessarily; it is just that to do them in a cost-effective manner, you have to batch the tests.”

Though he strongly favors the use of PCR testing for detecting influenza viruses, Dr. Beck isn’t sure that PCR will inevitably take the place of rapid antigen testing. “As manufacturers continue working on RIDTs, they are getting more and more sensitive, and they will still always be faster than PCR. However, I suspect the manufacturers are going to have to work fairly hard to make sure their sensitivity is closer to PCR if they want to persist, because with a lot of newer PCR tests, you can get results out in maybe two hours now.” He says that might still be too long to expect patients to stay on site. “I guess I would not sit there for two hours.”

Laboratories in the field report that those kinds of turnaround time issues with ambulatory patients continue to be pivotal. In Richmond Heights, Ohio, for example, the flu season was already underway by early December and was running well ahead of last year’s numbers, says Sherri A. Gulich, MT(AMT), laboratory supervisor for University Hospitals Richmond Medical Center, a campus of UH regional hospitals. In the first 12 days of December, the laboratory reported three results positive for influenza A and six results positive for influenza B out of 43 tests, compared with only one positive test over the same 12 days in 2011. But Gulich feels the laboratory is well prepared for the flu season with the BD Veritor system it acquired last June to handle rapid flu test orders, the majority of which come from the hospital’s emergency department.

The Veritor system’s ease of use and a turnaround time five minutes faster than the previous system’s are two of the features that attracted Gulich. And an automated reader makes the Veritor more standardized and objective than the laboratory’s previous system, she says. “With the manual test, if something is on the paler side, you might be asking five people ‘Does this look positive or not?’” So while the automated reader costs a little more—about $350 for a reader that is good for 3,000 tests, or roughly 12 cents per test—Gulich feels that the increased sensitivity and standardization make it well worth the additional cost. If the test result is negative, the laboratory does not reflex for verification by PCR or culture unless the doctor requests it, she says.

However, the rapid flu test is not routinely used on patient floors. “They will request a PCR rather than the rapid test because of the specificity of the PCR. It’s what our infection control doctors request, whereas we have a lot of walk-ins in the ED and the rapid test is used more as it would be in a doctor’s office.” To stay prepared, the laboratory tries to keep an extra 100 tests on hand. “It’s not unusual to go through that many tests in less than a week. If there’s a heavy flu season, our ED would get inundated.”

CAP TODAY