Trials for errors: how one lab fixed reporting flaws

Ann Griswold, PhD

February 2015—Cincinnati Children’s Hospital Medical Center has all but eliminated errors in laboratory test reporting thanks to a project performed through the Intermediate Improvement Science Series, a nationally accredited course offered by the medical center’s James M. Anderson Center for Health Systems Excellence to leaders from Cincinnati Children’s and other health care systems.

“We call it ‘I2S2’ and it covers just about everything you can imagine related to quality improvement,” says Kenette Pace, MT(ASCP), BS, senior director for clinical laboratory operations at CCHMC.

Pace enrolled in the six-month course in May 2013, amid the hospital’s often-frustrating quest to lower laboratory test reporting error rates. At the time she enrolled, CCHMC had been trying for three years to mitigate the problem of corrected results, beginning in early 2010 with a rate of 14.1 errors per 10,000 reported results.

“We’re doing 30,000 tests in a week. So that’s a lot of errors,” says Kathy Good, CLS(ASCP), clinical laboratory director at CCHMC, part of a two-hospital system with several satellite stat laboratory locations. Good became the laboratory’s first-ever senior quality assurance specialist in December 2009, and it was at her insistence that CCHMC enrolled in the CAP’s Q-Tracks program in early 2010 to see how the hospital compared with other institutions.

“The Q-Track told us that compared with the median for other institutions in our group, the pediatric median was somewhere between four and five errors per 10,000, and that the best match for our facility demographics was 2.3. We were a long way from that, so we had a lot of room for improvement,” Good says.
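To put those rates in absolute terms, a quick back-of-the-envelope calculation, using only the figures quoted in this article, converts errors per 10,000 results into weekly error counts at the test volume Good describes:

```python
# Back-of-the-envelope arithmetic from the figures quoted in this article:
# roughly 30,000 tests per week, with rates expressed per 10,000 results.

def weekly_errors(rate_per_10k: float, tests_per_week: int = 30_000) -> float:
    """Expected number of erroneous reports per week at a given error rate."""
    return rate_per_10k / 10_000 * tests_per_week

print(weekly_errors(14.1))   # 2010 baseline: ~42 errors every week
print(weekly_errors(2.3))    # Q-Tracks best-match benchmark: ~7 per week
```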

Though the goal of 2.3 errors per 10,000 results proved elusive in the three years before Pace enrolled in the course, a handful of important changes made during that time reduced the hospital’s error rate dramatically, from 14.1 to 5.4.

After receiving those initial discouraging findings from the CAP, Good recalls sitting down to discuss various quality indicators with Pace and Paul Steele, MD, medical director of the clinical laboratory at CCHMC. The three developed a plan to track down and systematically eliminate all sources of the errors, beginning with a campaign to raise staff awareness.

“The stories behind these errors drove our message home and helped the staff understand why we were going down this road,” Good says. One error was related to a CSF cell count; someone made a mathematical error and then misreported the result, which was entered manually into the LIS. “This error could either potentially delay treatment that’s desperately needed, or put a patient through unneeded procedures.”

Simply announcing that the laboratory’s error rates towered over those of other institutions seemed to galvanize CCHMC staff. By the end of May 2010, the institution reported just over nine errors per 10,000 reported results.

From there, the team tackled the low-hanging fruit: implementing a better system of checks and balances in the laboratory’s clinical interfaces to prevent the transmission of incomplete or flagged results.

In July 2010, CCHMC revised the chemistry test reporting interface so that a flagged result, for example, would not be sent through the interface but would instead require a member of the laboratory staff to enter a response manually. Though the change seemed a step in the right direction, the error rate remained the same.
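As a rough illustration of that gating logic, here is a minimal sketch assuming a generic instrument-to-LIS interface; the data model, flag names, and queues are hypothetical, not CCHMC’s actual system:

```python
# Illustrative sketch of the July 2010 change: results carrying analyzer
# flags are held for a technologist instead of crossing the interface
# automatically. All names and flags here are hypothetical.

from dataclasses import dataclass

@dataclass
class Result:
    test_code: str          # e.g. "GLU" (hypothetical identifier)
    value: float
    flags: tuple = ()       # analyzer flags such as ("DELTA_CHECK",)

def route_result(result: Result, manual_queue: list, lis_outbox: list) -> None:
    """Clean results transmit unattended; flagged results wait for a human."""
    if result.flags:
        manual_queue.append(result)   # a staff member must enter a response
    else:
        lis_outbox.append(result)     # sent through the interface to the LIS
```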

Six months later, the laboratory implemented a double-check process for manually entered tests in special chemistry, which improved reporting accuracy to about seven errors per 10,000 results. Despite additional interface changes made over the next 18 months, including an update to the hematology flagging process in May 2011 and changes to the chemistry and coagulation platforms in January and August 2012, progress stalled at about 5.4 errors per 10,000 results, well short of the hospital’s goal.

“We kind of hit a wall, and that’s where things got interesting. There were no other big things to do. There were no other interfaces to change, or instruments to replace,” Good recalls. “There was a lot of, ‘Now what?’”

Pace also remembers the uncertainty of that time. “I wasn’t sure it was going to work. We had already reduced our errors from 14 to about 5.4. I think there were a lot of questions on people’s minds about whether this was going to be successful. We knew that last bit of improvement was going to take creative interventions,” Pace says.

Pace enrolled in the Intermediate Improvement Science Series determined to meet or surpass the target error rate. She dubbed her course project “Improving Laboratory Results Reporting Accuracy.” Its goal: to decrease the number of result reporting errors in the laboratory from a baseline of 5.4 per 10,000 reported results in April 2013 to 3.8 by September 2013.

Through the course, Pace was assigned a quality improvement coach, James Brown, senior quality improvement consultant from the Anderson Center at CCHMC, and recruited a six-member team of laboratory personnel consisting of Good, technologists and technicians representing each area of the laboratory, and a lab information systems representative.

In the course, Pace and the other team members learned to perform a methodical trial of each prospective intervention in a small population before scaling it up. “Part of the theory behind our improvement science was to give each new process a try in a small group—figure out what people are going to object to, what doesn’t work, or what worked well—and focus on that going forward,” Good says. The trials drew on staff technologists from the relevant areas of the lab to supplement the team’s expertise as needed.

“If we were doing a test in microbiology, we would start out small with one test and one tech, observe, and discuss the effects of that change. Learnings from the small test of change allowed us to adapt, adopt, or abandon the test.” In most cases, they adapted the process, modified the test of change, and extended it to more tests and more technologists and technicians. In that way, the various changes were gradually scaled up to full-blown policies affecting microbiology, hematology, chemistry, and other laboratories.

“We learned early on that the interventions we employed needed to be of high reliability, such as error transparency, making systems visible, easy-to-use and follow processes, use of technology, and data systems to help support decision-making,” Good says.

Among the first strategies implemented was a double-check process for every manual result entry: the technologist entering results would have to wait until a second person was available to verify the result against the instrument printout. “The process was cumbersome and not well received by the technologists,” Pace says.
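One way to picture the rule is as a minimal sketch over a generic LIS record with simple status fields; none of the names or statuses below belong to the real system:

```python
# Hedged sketch of the double-check rule: a manually entered result stays
# unverified until a different technologist confirms it against the
# instrument printout. Statuses and fields are illustrative only.

def enter_result(record: dict, value: str, tech_id: str) -> None:
    """First tech keys in the result; it is saved but not released."""
    record.update(value=value, entered_by=tech_id, status="UNVERIFIED")

def verify_result(record: dict, printout_value: str, verifier_id: str) -> None:
    """Second tech compares the entry with the printout before release."""
    if verifier_id == record["entered_by"]:
        raise PermissionError("verifier must differ from the entering tech")
    if printout_value == record["value"]:
        record.update(verified_by=verifier_id, status="RELEASED")
    else:
        record["status"] = "MISMATCH"   # hold for correction, never release
```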

While the new process did not affect reporting turnaround times, objections started to roll in, especially from senior lab members who resented the idea of someone peering over their shoulder. “Some voiced concern directly to their manager, which then came back to the team. Some employees went directly to the team members, and then they would bring it back to our team meetings,” Pace recalls. “We used the concerns to help drive the next wave of communication from the team.”

Satellite laboratory staff were the most affected by the new policy. “Oftentimes, there was only one individual working there. We obviously didn’t want to delay results by making them wait until the next day when staff arrived, so we had to come up with an alternative,” Pace says. “We had them save the result, log out of that patient chart, log back in, and verify a match between the paper log and the lab information system. The LIS made it easy to determine if the process was not followed. There was some checking, especially if somebody did make an error, to see if they went straight from nothing to releasing a result.”
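Assuming the LIS keeps an ordered status trail for each result, an audit along the lines Pace describes might look like the following sketch; the event names are hypothetical, not the LIS’s actual log format:

```python
# Illustrative audit check for the satellite workflow. The expected trail is
# SAVE -> LOGOUT -> LOGIN -> VERIFY -> RELEASE; a result that jumps straight
# to RELEASE with no prior SAVE or VERIFY suggests the self-check was skipped.

def skipped_self_check(events: list[str]) -> bool:
    """events: ordered status events recorded for a single result."""
    if "RELEASE" not in events:
        return False                    # not yet released; nothing to audit
    before_release = events[:events.index("RELEASE")]
    return "SAVE" not in before_release or "VERIFY" not in before_release

# skipped_self_check(["RELEASE"]) -> True: the tech went "straight from
# nothing to releasing a result". The full five-step trail returns False.
```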

Good, Pace, or other team members spoke with staff who did not follow the new policy to find out what barriers were keeping them from doing so. In most cases, the policy was bypassed when laboratory staff felt rushed or simply didn’t agree with the change, Pace says. “It can get really emotional. It’s someone thinking, ‘I’ve been a tech for 20 years, and now somebody else has to review my results?’”

The importance of reassuring laboratory personnel quickly became apparent. “If I’m pulling over Kenette to take a look at my results, it doesn’t mean that nobody trusts me. It’s just a factor of where errors come from, and we’re trying to prevent them. Anybody can make a mistake,” Good explains.

The team made a number of other changes as well. They analyzed workflow volumes by hour of day and bench and adjusted staffing patterns accordingly. They drew up better guidelines to communicate changes in test results. Autoverification functions were implemented on high-volume, high-risk systems. “This technology allowed us to introduce more high-reliability steps to detect potential errors,” Good says.
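Autoverification of this sort typically releases a result automatically only when it clears every rule, routing everything else to a technologist’s worklist. A minimal sketch follows, with rules and thresholds that are illustrative rather than CCHMC’s actual configuration:

```python
# Sketch of autoverification in the spirit described: a result on a
# high-volume analyzer releases automatically only if it clears every rule.
# The specific rules and the 50% delta limit are illustrative assumptions.

from typing import Optional

def autoverify(value: float, low: float, high: float,
               prior: Optional[float] = None, flags: tuple = (),
               delta_limit: float = 0.5) -> bool:
    """Return True to release automatically, False to route for review."""
    if flags:                             # any analyzer flag -> human review
        return False
    if not low <= value <= high:          # outside the acceptable range
        return False
    if prior is not None and prior != 0:
        if abs(value - prior) / abs(prior) > delta_limit:
            return False                  # fails the delta check vs. last result
    return True
```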

Not all of the changes were successful. “We thought we had hit it big when we came up with a computer function that was going to tell lab techs about a result that required a double check. But we realized that was going to create more work for the techs. We discarded that and moved on to something else.”

But the team’s enthusiasm quickly spread and, with more and more potential errors ensnared in the initiative’s preventive web, protests began to subside.

“One of the errors we caught prior to reporting was a manual result of a syphilis test,” Pace says. “This was when we did a small test of change in microbiology, on the serology bench. The tech had entered a whole list of results in the computer, but left them in the unverified state. Another tech came along and compared what was in the computer against the instrument printout. That’s when we found where the first tech had been off by one line. They could have reported a positive syphilis on a baby, but because of the double-check process, we caught and corrected the error. It never reached the patient.”

This case was a turning point. “When they caught that mistake, that was huge. It opened a lot of people’s eyes,” Pace says.

The CCHMC administration provided support that underscored the importance of the team’s work. Dr. Steele, the medical director, was involved in the whole process. “And very supportive,” Pace says. “I think the others saw that as well.”

In mid-2013 the team began implementing daily huddles, which Pace calls “another high-reliability intervention to drive communication and awareness among the staff.”

“The departments, from shift to shift—first to second, second to third, third to first—get together and talk about what’s going on. They talk about instrument and staffing problems, interesting patients, errors that have been made, whether they reached the patient or didn’t reach the patient, so they could find better ways to do things,” Pace says. The identities of staff who made errors were never shared.
