January 30, 2012
Proceed With Caution
By David Yeager
For The Record
Vol. 24 No. 2 P. 10
Speech recognition can help reduce costs, but healthcare organizations should take steps to ensure they’re not sacrificing quality in the process.
Everyone loves to save money, but in today’s healthcare environment, cutting costs is more a matter of necessity than desire.
Rather than allow the quality of care to suffer, hospitals typically look for ways to work more efficiently and with fewer employees. For this reason, one process that has been of interest during the past five to 10 years is physician dictation. Because dictation typically requires four minutes of transcription for every minute of dictation, it’s often viewed as an area of potential cost savings—provided the process can be automated to a degree that allows for a significant reduction in worker hours.
As a result, an increasing number of hospitals have adopted or are considering adopting speech recognition technology. The specific reasons for adopting it may vary among institutions, but it’s a safe bet that cost savings will be atop nearly everyone’s list. Unfortunately, hospitals that expect speech recognition to produce a windfall may be disappointed.
“I think there’s no question that the expectation that speech recognition can help dramatically reduce the cost of document production is out there in the marketplace,” says Dale Kivi, MBA, director of business development for FutureNet Technologies Corporation. “The reality of it is, because it is not foolproof and there is an editing phase that’s needed to go along with the technology, the actual savings are considerably less than what people think it might be.”
That’s not to say there aren’t good reasons for installing the technology; it just means that hospitals need to look at the big picture when they’re figuring out their bottom line. Although there may be savings on up-front transcription costs, factors such as document quality and the way speech recognition is used within the facility play a significant role in the ultimate cost.
The Big Picture
Many speech recognition vendors tout the technology’s ability to increase productivity, and it’s generally agreed that speech recognition holds the promise of streamlining operations and reducing staffing needs. The challenge is to translate that promise into savings without sacrificing quality. Because the medical record has ramifications not only for patient care but for regulatory compliance and billing as well, the quality of physician reports is of the utmost importance.
“Keep your eye on what’s really important, which is the quality and usefulness of the documentation. There’s a big misconception with speech recognition that you should change the processes to only accommodate productivity. I think the purpose of any documentation process is to produce useful, quality documentation,” says Lynn Kosegi, director of health information services for M*Modal. “And as soon as you take away from that quality, just to [speed up] a process, you can pretty much guarantee that you’re going to make something else more expensive down the road.”
Speech recognition requires not only an up-front investment in the technology but also a comprehensive assessment of how it will fit into a hospital’s existing workflow. The choice between back-end and front-end solutions, or some combination of the two, typically begins here. Although many administrators allow transcription savings to strongly influence their choices, they may overestimate the technology’s value if they don’t account for their physicians’ level of interest or the cost of their physicians’ time. For example, some back-end solutions can be implemented with little or no disruption to a physician’s routine, but the process works best when physicians are willing and able to work with it.
“You can get a degree of accuracy from back-end speech recognition without any effort on the part of the doctors, but you’re certainly not going to optimize your efficiencies and your return on investment if you don’t get some cooperation on the part of the physicians to work within the restraints of whatever platform you happen to be using,” says Jay Vance, CMT, director of product development and deployment for Superior Global Solutions, Inc. “The reality is the technology is not perfect, and the more effort the user is willing to put into it, the better the return is going to be.”
Back-end speech recognition technology has the benefit of allowing physicians to dictate in a way that’s more like traditional dictation, but it still requires transcriptionists or medical editors to function effectively. Depending on the dictator’s speech patterns, the speech recognition engine may capture what was said quite well or rather poorly, and it is not uncommon for similar-sounding words such as “comma” and “coma” to be confused. Convincing physicians to modify their dictation habits can help, but some simply do not “translate” well to speech recognition, whether it’s because of an accent, a cadence, or some other factor. For this reason, human intelligence is required to determine whether what’s written in the report matches the doctor’s intent.
To avoid transcription on the back end, some facilities opt for front-end speech recognition. In this scenario, physicians edit their own reports before signing off on them. However, if poorly implemented, the technology can backfire. In many cases, physicians will end up either seeing fewer patients or working unsustainably long hours. Often, they will put less information in their reports and may not notice some of the subtle details that transcriptionists or medical editors would spot.
“In practice, what happens is that the number of errors that are pushed downstream, not necessarily from content but from picking the wrong patient visit or some of the other workflow issues, increases dramatically when left solely to the physician dictators,” says Kivi.
Unless physicians buy into the new process and receive support from the institution in adapting speech recognition to their workflow, it won’t achieve the desired results. To balance clinical needs with administrative needs, it’s helpful to provide physicians with choices. What works well for one department may not work for others, and many facilities use more than one speech recognition option. For example, many radiology departments have had success with front-end speech recognition because their reports are shorter, and they use a smaller range of clinical terms.
“One of the keys there, however, is to give the doctor the choice to be able to do their own editing or, if they decide to, just send it on to be edited by an MTSO [medical transcription service organization] or someone in-house,” says Linda Sullivan, CEO of New England Medical Transcription.
Other specialties, however, may do better with back-end speech recognition. A specialist’s consultation report or an operative report may be impractical for physician editing. The important thing to remember is that the people who use the technology—physicians, transcriptionists, and medical editors—determine how well it works.
You Can’t Just Plug It In
Nick van Terheyden, MD, chief medical informatics officer for Nuance Communications, believes it’s possible that some of the inflated expectations for speech recognition are a result of healthcare’s experience with other types of technology. In this plug-and-play generation, van Terheyden says, simply installing a solution doesn’t provide the level of efficiency that most hospitals are seeking; training and workflow reengineering also need to be done.
Good Samaritan Hospital in Vincennes, Indiana, followed that blueprint when it installed SpeechMotion’s documentation platform, which includes back-end speech recognition. Although the system was in place by October 2010, the transcription staff didn’t begin editing from it until February 2011. The idea was to give the staff time to learn the new software and give the system time to learn the doctors’ voices. Good Samaritan also wanted to make sure everyone on staff was transcribing in the same way and using the same formatting rules to take advantage of the system’s ability to remember what’s entered. The training was spread out to minimize disruptions, but by the end of March 2011, the transcriptionists were becoming proficient at medical editing.
Wendy Mangin, MS, RHIA, director of medical records at Good Samaritan, says the hospital was motivated to install speech recognition because more physicians were being hired, and many of them were requesting that their office notes be transcribed. The additional requests were creating a larger volume of work for the transcriptionists, prompting the hospital to replace aging equipment with more efficient technology to avoid having to add to its transcription pool.
“It is not perfect. It still takes the expertise of an experienced medical transcriptionist to edit that document to make it complete and correct,” says Mangin. “We knew going in that it wouldn’t handle every report that comes across, and that’s OK. If you can have 75% of the reports come across successfully for editing, you are definitely gaining productivity out of that.”
Mangin says it took a while for the transcriptionists to become comfortable with editing and learn all of the system’s keyboard shortcuts rather than using a mouse or a foot pedal. It was worth the wait, however, because those shortcuts have helped reduce turnaround time more than she expected. Prior to going live with editing, there were often 50 hours of work in the transcription queue. Now there are usually around five.
The increased productivity has afforded the transcription group extra time to help out in other areas, such as with progress notes for hospitalists, transcription notes for home care, and plan of care notes for the physical medicine department. Plans are also underway to begin editing for the radiology department, which currently uses front-end speech recognition.
Good Samaritan’s experience highlights an important aspect of speech recognition adoption: Hospitals can get more out of the technology by automating tasks that don’t require human intellect and concentrating manpower in areas that do. By relying on the system’s ability to apply headers and subheaders and its extensive medication dictionary—which significantly reduces the need to look up drug names—Good Samaritan’s transcriptionists are able to focus on more substantive tasks. This increased efficiency allows them to make medical information available sooner throughout the healthcare enterprise.
It’s Here to Stay
Quick access to medical information is becoming increasingly important. As speech recognition technology evolves, it will, by necessity, play a larger role in many hospitals. van Terheyden believes it will eventually help clinicians interact more efficiently with EMRs.
“As people implement EMRs, what they find in many cases is that they’re hard to navigate, they’re hard to learn, and speech can be more effective as a navigational tool,” he says. “So instead of having to go to the menus, select patients, do all these things through keyboarding, I can say, ‘Show me the current labs,’ and that carries out a whole activity behind the scenes and allows me to get the information to the screen without remembering a series of keystrokes or menu options.”
van Terheyden says advances in natural language processing combined with speech recognition will allow more clinical data to be mined from physicians’ dictated reports. That data may benefit patient care through functions such as extracting drug information and checking for patient allergies, or it may help hospitals collect meaningful use data.
Incorporating speech into mobile platforms will also improve efficiency, according to van Terheyden. However, while the technology offers greater efficiency, hospitals considering adoption should take steps to ensure the system will meet their needs.
Perhaps the most useful information comes from facilities already using the technology. Ask about their firsthand experiences and request a demonstration.
Cost and reliability are important, but it’s also vital to read the service contract. Kivi says some vendors require every piece of documentation to go through speech recognition, which may not be in a hospital’s best interest if it has numerous physicians who labor with the system. Other vendors will allow facilities to choose which jobs go through speech recognition, which can save time and money.
Kivi cautions there are vendors that include add-on costs, such as per-user licensing fees, service contracts, and up-front implementation costs. By comparing all costs and considering how the system fits into the existing workflow, a hospital can derive maximum benefit and save money.
“Absolutely we need to increase productivity and reduce costs, but we need a more holistic picture of what it means to reduce cost,” says Kosegi. “If you look only at transcriptionist productivity or the transcription piece as the means to reduce cost and you don’t look at using a technology like speech recognition to improve the usefulness of the documentation across the enterprise—for coding, for billing, for all of those other purposes—then, in the end, you’re actually creating a more expensive process and not a less expensive one.”
— David Yeager is a freelance writer and editor based in Royersford, Pennsylvania.