May 2019
Emerging Voices
By Sarah Elkins
For The Record
Vol. 31 No. 5 P. 22
Natural language processing is the catalyst behind health care’s foray into technologies geared toward making data retrieval a snap.
Ten years from now, a man may find himself sitting at a coffee shop when the personal health monitor strapped to his wrist sends a notification to his phone. The biometric data collected by the device indicates a blip in his physiology.
“Perhaps you should schedule an appointment with your physician,” the text suggests.
From his phone, he navigates to his physician’s website, where a bot engages in a quick conversation. The bot helps him decide, through a series of questions, whether his symptoms warrant an in-person appointment or a telehealth video call. In response to his answers, an in-clinic appointment is automatically scheduled through calendar integrations.
Before he can say “macchiato,” he is on his way to the physician’s office. Even before he arrives, the physician has received a list of follow-up questions to ask, generated from the data collected during the bot chat.
As she approaches the exam room where the patient is waiting, the physician is talking, seemingly to no one. “What’s on the problem list?” she asks. She’s wearing a discreet earbud in her left ear while another bot is delivering patient health history in succinct natural language that only she can hear.
The physician’s hands are free; there is no laptop wheeled into the room on a cart. The exam may seem like any other except that—thanks to advances in natural language processing (NLP)—the EHR is populating itself. The boxes are checked, and the assessment and care plan are written from the words spoken during the encounter.
Many experts believe health care will look a lot like this before too long.
Talking to Technology
Siri and Alexa are going to medical school, and the impact on health care is already apparent. In fact, 10 years may be a conservative estimate when it comes to how soon health care will resemble the above scenario. Already, technologies employing humanlike inference and communication are being developed to improve the delivery of care.
Yaa Kumah-Crystal, MD, an assistant professor of biomedical informatics and pediatric endocrinology at Vanderbilt University Medical Center, who grew up watching Star Trek and Knight Rider, says, “The concept of being able to talk to technology and get it to respond makes so much sense to me intuitively.”
Kumah-Crystal is working on ways to reengineer the EHR to better deliver patient care. Her primary project is the Vanderbilt EHR Voice Assistant (EVA). Her objective? Devise “clever ways to use voice technology and natural language interfaces to get information out of the EHR,” she says.
The scenario in which a physician asks aloud, “What’s on the problem list?” and receives an answer isn’t all that futuristic—Kumah-Crystal can do that today. The technology is being tested in low-risk applications with pediatric endocrinology patients.
Kumah-Crystal is confident in EVA’s ability to accurately extract information from the EHR. Of course, trusting the technology to import data into the EHR is another thing altogether, but very soon the Vanderbilt team will run a pilot. This spring it will begin testing how well EVA can place physician orders for health maintenance.
In the pilot application, a physician will be able to ask aloud, “Is this patient due for any labs today?”
EVA might answer, “It looks like this patient is due for an A1c. Would you like to order that?”
The physician can just say, “Yep,” and a pending order will be generated and finalized with a one-click confirmation.
This exchange may seem no more complicated than telling Alexa to play Springsteen, but, according to Kumah-Crystal, the amount of time saved is enormous. In the current state, “you have to actually go through the chart, look at all the labs that have been done already, do some math in your head to figure out when they were last due, figure out if any are due, then go to the order section, type in the order, draw the lab up, sign that, and repeat that for every individual lab,” she says.
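To make that exchange concrete, here is a minimal sketch of how such a voice query might be handled: the utterance is matched to an intent, last lab dates are compared against recommended intervals, and any resulting order is only staged until the physician confirms it. Every name, interval, and data structure below is invented for illustration; this is not Vanderbilt's actual EVA code.

# Minimal sketch of the voice-order flow described above; all names, intervals,
# and data structures are hypothetical, not Vanderbilt's actual EVA code.
from datetime import date, timedelta

# Hypothetical recommended intervals for routine health-maintenance labs
LAB_INTERVALS = {"A1c": timedelta(days=90), "lipid_panel": timedelta(days=365)}

def labs_due(last_done: dict, today: date) -> list:
    """Return labs whose last result is older than the recommended interval."""
    due = []
    for lab, interval in LAB_INTERVALS.items():
        last = last_done.get(lab)
        if last is None or today - last > interval:
            due.append(lab)
    return due

def handle_utterance(utterance: str, last_done: dict, today: date):
    """Very rough intent handling: answer 'due for labs?' and stage pending orders."""
    if "due" in utterance.lower() and "lab" in utterance.lower():
        due = labs_due(last_done, today)
        if not due:
            return "No labs are due today.", []
        # Orders are only staged; the physician still confirms with one word or click.
        pending = [{"order": lab, "status": "pending_confirmation"} for lab in due]
        return f"It looks like this patient is due for {', '.join(due)}. Order now?", pending
    return "Sorry, I didn't catch that.", []

# Example: last A1c four months ago, so the assistant offers to stage an A1c order
reply, orders = handle_utterance(
    "Is this patient due for any labs today?",
    {"A1c": date.today() - timedelta(days=120)},
    date.today(),
)
print(reply, orders)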
Chatbots Learning Customer Service
Not all applications of NLP currently employed in health care involve speaking aloud and receiving verbal feedback. At Providence St. Joseph Health, a bot named Grace is helping patients make decisions about what level of care they should seek via online chat.
Grace makes recommendations based on a patient’s answers to her questions. Depending on the complaint, the bot will help a patient decide whether to go into the clinic, use telehealth, or request a home visit. Grace isn’t making any decisions for patients, but she is helping them have a clearer view of their options.
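A toy sketch of that kind of triage logic might look like the following; the questions, thresholds, and wording are invented for illustration and are not Providence St. Joseph Health's actual rules.

# Toy illustration of chat-based triage; the answers and options are invented.
def recommend_visit(answers: dict) -> str:
    """Suggest a level of care from simple chat answers; the patient still decides."""
    if answers.get("emergency_symptoms"):       # e.g., chest pain, trouble breathing
        return "Call 911 or go to the emergency department."
    if answers.get("needs_physical_exam"):      # e.g., a possible strep swab
        return "An in-clinic visit looks like the best fit."
    if answers.get("homebound"):
        return "A home visit may be an option."
    return "A telehealth video call should cover this."

print(recommend_visit({"emergency_symptoms": False, "needs_physical_exam": True}))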
According to Aaron Martin, chief digital officer at Providence St. Joseph Health in the greater Seattle area, this technology, while not particularly glamorous, is reducing costs and improving customer service.
“The amount of UTIs [urinary tract infections] and strep throat conditions being treated in the United States with $120 primary care visits is massive. You can take a lot of cost out of the system if you just tackle these simple use cases,” he says.
Martin emphasizes the importance of low-risk, high-volume use cases for testing early NLP technologies. Machines learn faster when there is an abundance of opportunities to apply algorithms. And when actual patients are involved, low-risk customer service interactions, like the ones Grace is handling, are the safest tests of the technology.
While a future in which machines diagnose cancer is not outside the realm of possibility, Grace isn’t ready to send anyone to surgery. She might, however, be getting savvy enough to recommend you go see your physician sooner rather than later.
Translating Images to Language
NLP is changing the relationship between the patient and the provider—even when the patient has no idea a bot has joined the conversation. But the back-end and inner workings of the health care ecosystem are also reaping the benefits of NLP.
At Ciox, a health information gateway that processes health record requests for payers, hospital legal teams, patients, and various other requesters, NLP is being used to “clean up the noise,” according to Florian Quarré, the company’s chief digital officer.
That’s a bit of an oversimplification of the technology Ciox is actively using, however. Quarré explains that approximately 60% of the records Ciox transmits are images: PDFs and complex-to-read documents. During the first pass, computer vision technology, also called intelligent optical character recognition, transforms the image into digestible text. Artificial intelligence is then used to apply rules to structured data. For example, if a field should contain a Social Security number, artificial intelligence will ensure each character is actually a number. If there is a “B” in a field that should be numeric, the notation will be corrected to an “8.”
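A rough sketch of that field-level cleanup step might look like the following: once OCR output is tied to a field whose type is known, common character confusions can be corrected. The confusion map and field schema below are assumptions made for illustration, not Ciox's actual pipeline.

# Sketch of correcting OCR character confusions in fields known to be numeric.
OCR_DIGIT_CONFUSIONS = {"B": "8", "O": "0", "I": "1", "S": "5", "l": "1"}

FIELD_TYPES = {"ssn": "numeric", "patient_name": "text"}  # hypothetical schema

def clean_field(field_name: str, raw_value: str) -> str:
    """Apply digit-confusion fixes only to fields the schema says must be numeric."""
    if FIELD_TYPES.get(field_name) != "numeric":
        return raw_value
    return "".join(OCR_DIGIT_CONFUSIONS.get(ch, ch) for ch in raw_value)

print(clean_field("ssn", "123-45-67B9"))   # prints 123-45-6789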
Next, NLP is used to translate notes into codes to remove the bias inherent in human language. “Two different nurses might type the exact same observation very differently,” depending on variables such as the nurse’s seniority, specialty, and even gender, Quarré says, adding, “When we go through those millions and millions and millions of lines of notes, each and every person capturing the notes is going to create a bias.”
Not only is NLP hard at work removing the bias in human language, it also cuts down on verbosity by interpreting long-form notes, deciding what’s important, and producing succinct summaries that take less time to read.
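As a simplified illustration of collapsing differently worded notes onto one concept, consider the sketch below. The phrases and codes are made up; real systems rely on trained NLP models mapping to standard vocabularies such as SNOMED CT rather than a lookup table.

# Minimal sketch: differently phrased observations map to the same concept code.
CONCEPT_PATTERNS = {
    "shortness of breath": "SOB_CONCEPT",
    "short of breath": "SOB_CONCEPT",
    "dyspnea": "SOB_CONCEPT",
    "difficulty breathing": "SOB_CONCEPT",
}

def code_note(note: str) -> set:
    """Return the set of concept codes whose trigger phrases appear in the note."""
    text = note.lower()
    return {code for phrase, code in CONCEPT_PATTERNS.items() if phrase in text}

# Two differently worded notes collapse to the same code
print(code_note("Pt reports being short of breath on exertion."))
print(code_note("Patient complains of dyspnea when climbing stairs."))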
Early Lessons, Early Caution
At least when it comes to health care applications, NLP is new, and just as the voice assistant on one’s phone sometimes gets verbal cues comically wrong, so too do these early bots.
Martin recalls an experiment he was conducting with a vendor in which a bot was being designed to follow up with patients. He laughs, “A year later, the bot asked me, ‘How’s your headache?’”
Even small errors like this help improve the technology, but they are also exactly the reason Martin stresses the importance of low-risk experiments. It’s better to chuckle about a bot waiting a year to ask about a headache than it is to repair the damage done by a bot whose mistimed response puts a life at risk.
“The biggest mistake that was made early on was there were some big tech companies looking at low-volume, high-edge cases like cancer that are just really hard to tackle,” Martin says.
Besides the inherently complex and variable nature of cancer, there are just not enough occurrences for a machine to learn very quickly. The learning curve was too steep and the successes too small. Conversely, there are enough common cold viruses floating around that machines have the opportunity to be much quicker on the uptake. And lives aren’t on the line.
“NLP and text dictation have been around for a while, but it’s always been a little janky,” Kumah-Crystal says, noting that since around 2016, the technology has been on par with a human’s ability to recognize language.
Still, researchers and early adopters take a cautious approach to expanding the technology’s responsibility for decision making. “It’s one thing to ask for information and for it to misunderstand and tell you something else. It’s another thing to write an order or prescription into the EHR and for that to be wrong,” Kumah-Crystal says.
Quarré urges users to consider context. “When is it correct enough? When is it as correct if not better than a human agent?” he asks. If a bot uncovers a disconcerting trend or a relationship in a patient’s data that doesn’t make sense, it may not be able to deduce a diagnosis, but its ability to parse the data, find an otherwise invisible problem, and communicate it to a decision-making human is good enough.
Because, as Martin puts it, it’s still “super important to have a primary care physician quarterbacking the care.”
“If we’re talking about full-blown accuracy whereby I am making a blind decision, like surgery, without questioning the system, that might be dangerous,” Quarré says.
The Big Three Driving Innovation
At present, Amazon, Google, and Microsoft are positioning for a piece of the NLP pie. Amazon Comprehend, Google’s AutoML Natural Language, and Microsoft Azure are three products driving positive innovation in the famously slow-evolving health care space.
Martin says, “The great thing is a lot of the platforms [the big cloud vendors] are building are reusable across industries because they solve generic problems. They’re not idiosyncratic to health care.”
He warns that it’s far too early for brand loyalty. “Each of these companies is making different investments. They’re making different headway and different rates of progress across the different algorithms they’re building. Any health system is going to have to build the capability to use all of them,” Martin says.
Quarré, who is enthusiastic about the advances of the large vendors, says their foray into the space is a positive for the health care industry, noting that he’s happy to be kept on his toes.
Kumah-Crystal echoes those sentiments. “This isn’t a zero-sum game,” she says. “There’s more than enough work to be done to make it better. The more people working on it, the better it can get.”
She adds, “Our role here at Vanderbilt with the EVA project is to establish best practices because we’re researchers. We’re not a Fortune 500 company.”
Those at the forefront of HIT seem to understand this. They welcome the big-budget advances coming from the tech giants but remain appropriately cautious about ensuring that generic solutions reach the level of compliance health care demands.
“Neither Google nor Amazon have HIPAA-compliant interfaces yet (Editor’s note: Amazon’s Alexa is now HIPAA compliant.), which is why we weren’t able to leverage their technology, but they’re doing work to get to that phase. Once those players move in, it will help validate the field. It’s going to push things forward,” Kumah-Crystal says.
While the application of NLP within health care is necessarily narrow and conservative, Martin recommends keeping a broad network for sharing ideas across industries. He received some of his best advice from an executive with Air New Zealand, who said, “Jump in, find a narrow use case, go get some open-source [artificial intelligence], and start experimenting.”
Martin’s advice for others? “Make sure you’re not just talking to other health care folks,” he says.
NLP’s Future
Machines have become adept at distilling structured data and drawing conclusions from it, but they aren’t yet good enough at making inferences from human-generated long-form text.
Kumah-Crystal explains: “Right now, if someone has a CT scan, the radiologist will write their impression. I would ask, ‘What did the radiologist say?’ What I would want [the machine] to say is, ‘It looks like the radiologist is concerned about a fracture.’ Right now, it can’t do that. It can read me back the entire radiology report, but it’s not smart enough to extrapolate the meaning of the paragraph.”
However, she’s confident that “in 10 years we will have this nailed down.”
Many worry the prevalence of new intelligent technologies will result in a sharp decline in jobs. Quarré allays those fears. “The use of NLP, in itself, is not necessarily going to replace decision-making nurses, note takers, and coders. But I am a firm believer in the augmentation of the workforce,” he says, adding that it’s imperative to spend less time trying to understand raw information and more time making decisions.
Martin points to the widening gap between the number of providers entering the workforce and the growing population of people seeking care. He views the move toward the advanced decision-making capabilities of NLP and similar technologies as a necessary evolution.
“We have a huge primary care shortage coming our way, and we can’t gap fill it with nurse practitioners. If we don’t algorithmically diagnose and treat the simpler use cases, we’re going to have a problem,” he says.
— Sarah Elkins is a West Virginia–based freelance writer.