October 8, 2012
A Powerful Combination
By Elizabeth S. Roop
For The Record
Vol. 24 No. 18 P. 18
Studies show that natural language processing tools and EMR data pack a wallop in terms of quality improvement.
When talk in healthcare circles turns to natural language processing (NLP), the discussion generally focuses on its potential in three areas: converting free text from dictated notes into structured data within an EMR, parsing clinical documentation to generate appropriate diagnosis and billing codes, and enhancing clinical decision support by analyzing clinical references in response to clinical queries.
However, a growing body of research is making the case for NLP’s use at the point of care to leverage the information in a patient’s EMR to improve quality performance and care outcomes. Specifically, hospitals and physician groups that have fully embraced EMRs can use NLP at the bedside to improve diagnoses, identify complications, close care gaps, and even improve clinical practice.
It’s not a new concept. For years, researchers have been applying NLP to patient information to study everything from biosurveillance and population health trends to genetic risk factors. However, several recent studies have brought the idea of NLP at the bedside into the realm of possibility for the average community hospital.
For example, in 2009 the National Institutes of Health awarded the University of Minnesota Duluth a three-year, $1 million grant to develop NLP software that would analyze EMR data to more quickly identify adverse drug reactions. And this January, the Annals of Internal Medicine published a study from the Mayo Clinic in which NLP was used to evaluate whether biosurveillance using data from an entire encounter note was superior to using data from just the chief complaint field.
NLP and Colonoscopy Quality
An even more recent study examined the use of NLP to assess colonoscopy quality, something that is rarely done because manual review of reports is so labor intensive. In this cross-sectional study, an NLP program analyzed more than 24,000 colonoscopy reports and associated pathology notes contained in the University of Pittsburgh Medical Center’s EMR system.
“For me, the application of colonoscopy quality is a test case. It’s an area where everyone agrees quality needs to be measured, but no one is doing so,” says lead author Ateev Mehrotra, MD, an associate professor in the division of general internal medicine at the University of Pittsburgh School of Medicine and a researcher at RAND.
Researchers looked at quality-of-care measures, including an indication of the patient’s anesthesia risk, documentation of informed consent, description of the quality of bowel preparation, notation of cecal landmarks, detection of adenoma, notation of the time the scope was withdrawn, and whether a biopsy sample was taken.
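The study’s actual NLP pipeline is not detailed here, but the basic idea — scanning report text for evidence that each measure was documented — can be illustrated with a simple rule-based sketch. The Python snippet below is a hypothetical, minimal example; the pattern lists and function names are assumptions for illustration, not the researchers’ code.

import re

# Hypothetical keyword rules for a few of the documentation measures above.
# A production NLP system would add negation handling, section detection,
# and linkage to the pathology note for adenoma detection.
MEASURE_PATTERNS = {
    "informed_consent": r"\binformed consent\b",
    "bowel_prep_quality": r"\bprep(?:aration)?\s+(?:was\s+)?(excellent|good|fair|poor|adequate|inadequate)\b",
    "cecal_landmarks": r"\b(cecum|ileocecal valve|appendiceal orifice)\b",
    "withdrawal_time": r"\bwithdrawal time\b",
    "biopsy_taken": r"\bbiops(?:y|ies)\b",
}

def assess_report(report_text):
    """Return a dict mapping each measure to True/False based on simple pattern matches."""
    text = report_text.lower()
    return {name: bool(re.search(pattern, text))
            for name, pattern in MEASURE_PATTERNS.items()}

if __name__ == "__main__":
    sample = ("Informed consent was obtained. Bowel preparation was good. "
              "The cecum was reached and the ileocecal valve was identified. "
              "Withdrawal time: 8 minutes. Cold forceps biopsies were taken.")
    print(assess_report(sample))  # every measure documented in this tidy sample

Real reports are far messier than this sample, which is precisely why automated analysis at this scale requires NLP rather than simple keyword checklists or manual review.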
The findings, published in the June issue of Gastrointestinal Endoscopy, noted that while performance on some colonoscopy quality measures was poor, performance on others was at benchmark levels. Across hospitals, the adequacy of preparation was noted in only 45.7% of procedures overall; cecal landmarks were documented in 62.7%; and the adenoma detection rate was 25.2%. More importantly, the study documented wide variation in performance across hospitals and physicians.
Mehrotra notes that one interesting observation was the difference in quality between reports that were dictated and those generated via a structured electronic record system. Dictated reports had lower quality than those generated from a structured template, with one important exception: adenoma detection.
“Finding an adenoma is a performance issue, not a documentation issue,” Mehrotra says. “Across metrics, those who used a template had better performance except on adenoma. That’s important because we assume that structure will improve documentation and performance. But for the one measure we care about the most because it can prevent cancer, it didn’t make a bit of difference.”
That is the type of finding that might never have surfaced without NLP. While gastroenterology specialty societies advocate routine performance assessments, they rarely happen due to the complexity, inconvenience, and expense involved with the manual review of reports.
It is a challenge shared across specialties. Greater emphasis is being placed on quality performance, yet the costs and logistics of analyzing data create barriers to measuring outcomes effectively and applying those findings to effect change. NLP has the potential to overcome those barriers and, in the process, drive quality improvements.
“There are two aspects here where NLP can be very powerful. The first level is that we can start generating quality scores on a regular basis and that feedback will start to improve care. Right now, we often have no idea how to compare our performance to our peers,” Mehrotra says. “Where NLP has an even more powerful potential impact is right at point of care. [Physicians] dictate notes and … [NLP] analyzes them in real time and prompts for treatment decisions [that may] improve care.”
NLP and Postoperative Complications
Another key study examined the implications of utilizing NLP to identify postoperative complications from data contained within a patient’s EMR. The cross-sectional study “Automated Identification of Postoperative Complications Within an Electronic Medical Record Using Natural Language Processing” appeared in the August 2011 issue of The Journal of the American Medical Association and involved 2,947 patients undergoing inpatient surgical procedures at six Veterans Health Administration hospitals from 1999 to 2006.
The study sought to identify postoperative occurrences of acute renal failure requiring dialysis, deep vein thrombosis, pulmonary embolism, sepsis, pneumonia, or myocardial infarction through medical record review as part of the VA Surgical Quality Improvement Program. The sensitivity and specificity of NLP in identifying these complications were compared with those of patient safety indicators based on discharge coding.
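For readers unfamiliar with the metrics, sensitivity is the share of true complications a method flags, TP / (TP + FN), while specificity is the share of complication-free cases it correctly leaves unflagged, TN / (TN + FP). The short Python sketch below shows the arithmetic with made-up counts; the numbers are illustrative only, not figures from the study.

def sensitivity(tp, fn):
    """True-positive rate: proportion of chart-review-confirmed complications the method flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: proportion of complication-free cases the method correctly leaves unflagged."""
    return tn / (tn + fp)

# Illustrative counts only -- not figures from the JAMA study.
print(sensitivity(tp=45, fn=5))     # 0.90: flags 45 of 50 real complications
print(specificity(tn=900, fp=100))  # 0.90: correctly clears 900 of 1,000 complication-free cases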
“We were interested in identifying safety concerns. Anyone in this area knows that, at present, you’re at a loss because you’re reliant on spontaneous reporting, which doesn’t work so well. So if you’re interested in quality improvement, these outcomes can be challenging to identify,” says lead author Harvey J. Murff, MD, MPH, an associate professor of medicine at Vanderbilt University Medical Center who also is affiliated with the VA’s Tennessee Valley Healthcare System.
The findings led researchers to conclude that NLP analysis had higher sensitivity and lower specificity compared with patient safety indicators based on discharge coding. And while the results were “proof of concept that these types of tools can work at [identifying] quality issues,” Murff says the more exciting aspect of the study is what it means for coupling electronic documentation with NLP tools to identify complications in real time for early intervention.
“This does make a strong case for going completely electronic because you have other tools, not just CPOE [computerized physician order entry], that can reduce medication errors and can be used for electronic chart reviews,” he says. “You can make the argument that an organization might want to seriously consider having more of its documentation entered [electronically at the point of care]. … There is a lot of utility in using NLP for identifying quality improvement issues.”
However, doing so is not without challenges. While more EMR vendors are embedding software with NLP capabilities, its usefulness is limited if clinicians are not documenting electronically in real time. Without that, NLP will be relegated to postencounter analysis.
Another issue is clinicians’ fondness for abbreviations. “That sometimes gets you,” says Murff, who notes that a research group in Michigan attempting to use NLP was tripped up by the frequent use of MI for the state rather than myocardial infarction.
“There are always issues like that that come up, but those are solvable,” he says. “The biggest bottleneck is the adoption of very comprehensive EMRs. You have to have the data in order to process and look for these things with NLP tools.”
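The MI ambiguity Murff mentions is a classic word-sense disambiguation problem. One common, simple approach — offered here purely as a hypothetical sketch, not how any of the groups cited actually solved it — is to weigh the words surrounding each occurrence of the abbreviation:

import re

# Toy context-based disambiguation of the abbreviation "MI".
# The cue-word lists and sample note are assumptions for illustration only.
CARDIAC_CONTEXT = {"troponin", "ekg", "ecg", "chest", "infarction", "cardiac", "stemi"}
GEOGRAPHIC_CONTEXT = {"michigan", "ann", "arbor", "detroit", "address", "resides"}

def disambiguate_mi(text, window=5):
    """Label each occurrence of 'MI' by counting nearby cardiac vs. geographic cue words."""
    tokens = re.findall(r"[A-Za-z]+", text)
    labels = []
    for i, tok in enumerate(tokens):
        if tok == "MI":
            context = {t.lower() for t in tokens[max(0, i - window):i + window + 1]}
            cardiac = len(context & CARDIAC_CONTEXT)
            geographic = len(context & GEOGRAPHIC_CONTEXT)
            if cardiac > geographic:
                labels.append("myocardial infarction")
            elif geographic > cardiac:
                labels.append("Michigan")
            else:
                labels.append("unknown")
    return labels

print(disambiguate_mi("Patient resides in Ann Arbor, MI. EKG consistent with acute MI; troponin elevated."))
# ['Michigan', 'myocardial infarction']

Production systems use richer clinical vocabularies and statistical models for this, but the underlying idea is the same: context, not the abbreviation alone, determines meaning.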
Unlimited Potential
Murff, who has since expanded his research to include more outcomes as well as outpatient applications, says NLP’s potential goes far beyond quality. For example, work under way at Vanderbilt uses NLP tools to identify clinical variables that could potentially correlate novel genotypes with clinical conditions. In the field of pharmacogenomics, NLP is being used to identify potential side effects of medications so that treatments can be tailored to increase effectiveness or reduce adverse events.
As these studies demonstrate, tapping into the power of NLP to influence quality outcomes and improve patient safety is no longer limited to major academic medical centers and health systems. Any facility, or even a medical practice, with access to NLP software and electronic documentation can find ways to use it to improve clinical performance.
“Any place you have clinical documentation where specific events may be represented is fair game to search as long as you have the resources and computing power to do it. … It’s just a matter of clinical experts thinking of how things might be represented in a chart and codifying that,” says Murff. “I would like to see more people going at these challenging outcomes, the more clinically complex outcomes that we’ve been hampered at assessing due to manpower issues. [We need] more people trying to push the envelope on what type of information they want to pull out.”
— Elizabeth S. Roop is a Tampa, Florida-based freelance writer specializing in healthcare and HIT.