Vol. 16 • Issue 15 • Page 26
Back-End Speech Recognition Piece of Cake for Docs to Dictate
The favored trend in speech recognition might be the easiest way for physicians to maintain their workflow while utilizing digital dictation technology, but is it easiest for everyone?
(Editor’s note: This is part one of a two-part series on speech recognition. The July 31 issue will feature part two: “MTs as Editors.”)
Docs take care of patients, period. That’s the mindset most physicians have in the health care industry today.
Even with the increased benefits of information technology (IT) and digital documentation, many doctors would rather focus solely on patient care than be bothered with the latest IT. Some of the most forward-thinking physicians, who volunteer to test out the latest IT toys, will admit adjusting to new technology features can interfere with patient care.
So, rather than throwing their hands up in frustration and labeling docs as old-fashioned and stubborn, health information technology (HIT) engineers created a speech recognition approach to ease physicians into the digital age and improve turnaround time in HIM.
Front vs. Back
Doting on digital seems like our country’s latest health care trend, but realistically, electronic health records (EHRs) offer many immeasurable gains and are here to stay. Physician dictation is no exception.
According to Nick van Terheyden, MD, chief medical officer for Philips Speech Recognition Systems, health care is a dictation-intensive field. He noted that the Medical Transcription Industry Association estimates the U.S. spends between $6 billion and $12 billion per year transcribing medical dictation into text. “Because clinical medicine has moved away from individual clinician delivery to a team-based approach, it’s imperative individual clinicians are able to communicate effectively with each other in medical records,” van Terheyden said.
For clinicians to communicate fast enough, information must be available in digital form. “Speech recognition automates the capture of physician dictation, because it is the most natural form of communication,” van Terheyden explained.
Speech recognition technology (SRT) is an important tool in the EHR effort, especially because it “allows for near-instantaneous recording and distribution of information, essential for the delivery of high-quality care,” stated van Terheyden, who helped implement one of the first hospital-wide paperless medical records in the early ’90s.
But not all forms of speech recognition are created equal. Front-end speech recognition demands more interaction and training from the physician, which we all know docs could do without.
“Front-end requires physician interaction whereby the physician/radiologist views the structured text document and edits the document as it is dictated,” said Randy A. Baker, PHR, senior vice president and chief operating officer, Diskriter, Pittsburgh. Front-end usually requires the health care facility to win acceptance from the physician staff to ensure success, because physicians must edit their own dictated reports for ideal results, rather than simply dictating and leaving the transcription to others, Baker explained.
Front-end speech recognition also forces the doctor’s dictation to go directly into a PC, making portability an issue as well. So to improve on this HIT function, back-end speech recognition surfaced.
“Back-end speech recognition lets physicians dictate as they always have: into a telephone, a portable digital voice recorder, a PDA or even directly into a PC-based EHR,” explained Joe Webber, vice president of AssistMed. “The voice file is run through the recognition engine in the background, without the physicians even needing to know this is going on. The payoff is MTs can substantially improve their productivity by editing a draft report rather than having to transcribe it from scratch.”
The Users’ Preference
While back-end SRT offers seamless benefits for physicians, do other health care professionals suffer?
After the doc dictates, the back-end SRT software produces a draft of the dictation and sends it, along with the voice file, to the HIM department, where a “medical editor” listens, reads and corrects any mistakes, whether recognition errors or dictation errors.
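The draft-plus-audio handoff described above can be sketched as a tiny pipeline. This is a hypothetical illustration, not any vendor's actual API: `DictationJob`, `StubEngine`, `run_recognition` and `route_to_editor` are all invented names standing in for the workflow's shape — the voice file is transcribed in the background, then the draft and the original audio are queued together for a medical editor.

```python
# Hypothetical sketch of a back-end SRT routing step; all names are
# illustrative, not any vendor's real interface.
from dataclasses import dataclass
from typing import List

@dataclass
class DictationJob:
    physician: str
    voice_file: str          # path to the recorded audio
    draft_text: str = ""     # filled in by the recognition engine
    status: str = "recorded"

class StubEngine:
    """Stand-in for a real recognition engine (e.g. a vendor SDK)."""
    def transcribe(self, voice_file: str) -> str:
        return f"[draft transcribed from {voice_file}]"

def run_recognition(job: DictationJob, engine) -> DictationJob:
    """Run the voice file through the engine; the physician never sees this."""
    job.draft_text = engine.transcribe(job.voice_file)
    job.status = "draft_ready"
    return job

def route_to_editor(job: DictationJob, queue: List[DictationJob]) -> None:
    """Send the draft plus the original audio to an editor's work queue."""
    queue.append(job)
    job.status = "awaiting_edit"

# Example: one dictation flowing through the pipeline.
editor_queue: List[DictationJob] = []
job = DictationJob(physician="Dr. McCoy", voice_file="note_001.wav")
run_recognition(job, StubEngine())
route_to_editor(job, editor_queue)
```

The key design point the article makes is visible in the sketch: the physician's step ends at `voice_file`, and everything after it happens without their involvement.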
This process can be an adjustment for MTs (editing rather than transcribing), but it’s something most MTs are willing to learn (see sidebar). Elizabeth Roberts, RHIT, an MT and back-end speech recognition editor with MedQuist Inc., Yorktown, VA, said, “I personally love it! It is particularly helpful when I am working on a new account or have a new dictator.”
In the six months Roberts has been using back-end SRT, her production has increased approximately 50 percent. “It’s still far from perfect, though,” she divulged. “Some dictators’ reports need very little editing, while others still require heavy editing. And if it’s a dictator you’ve worked with for a long time, it could be easier and quicker to transcribe those reports ‘from scratch.’”
Martha Sheridan, transcription supervisor, Advanced Healthcare, Milwaukee, explained her facility’s experience using Dictaphone’s back-end SRT software. “Most physicians have no idea if their voice is recognized in speech recognition or not,” she said. “Not all physicians’ dictation is recognized by the program, but the majority is distinguishable; thus we have a 2-1 ratio between ‘editors’ and MTs.”
So if physicians aren’t even aware of the new software deciphering their words, back-end SRT must be the answer, especially because front-end SRT has proved unappealing.
Piedmont Hospital in Atlanta has used Eclipsys’ Knowledge-Based Transcription (eScription) for dictation and transcription since 2003. “After a year of implementation with Eclipsys’ back-end software, a pilot project using front-end speech recognition was started to judge the feasibility of using the front-end function as an additional input method,” Suzi Liang, HIM special projects manager, Piedmont Hospital, explained.
The physicians chosen for the project were progressive and computer-literate, but still their feedback concluded back-end speech recognition was preferred, Liang divulged. “Because of: 1) the time required using front-end SRT for first time training and template setup, as well as further tweaking of their voice profile on an ongoing basis; and 2) even with 98 percent accuracy, dictation still took less time than that required for editing their own voice draft,” she stated.
Even Greater Benefits
Vendors continue to improve their back-end speech recognition software, because even with all the positive feedback, HIT tools can always get better. Webber noted his company, AssistMed, enhances Nuance’s (previously ScanSoft) Dragon technology, providing a speech recognition module “that automatically formats the engine’s output and plugs into an existing dictation/transcription workflow, rather than forcing a switch to a whole different platform.” Not having to adopt a new program on top of new software decreases the learning curve and cost for all parties involved.
And van Terheyden believes back-end speech recognition is on its way to recognizing more than just words. “It will soon automatically filter information that’s irrelevant to the final report, for example, ‘ummmm,’ ‘errrrr’ and/or ‘this dictation was dictated by Dr. McCoy.’ Intelligent Speech Interpretation in SpeechMagic is already moving in that direction,” he assured.
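The kind of filtering van Terheyden describes can be crudely approximated with regular expressions. This sketch assumes nothing about how SpeechMagic or any real product actually does it (interpretation engines are far more sophisticated); it simply strips filler tokens like “ummmm”/“errrrr” and one boilerplate attribution phrase from a draft.

```python
# Toy approximation of disfluency filtering -- NOT how any real
# recognition engine works, just an illustration of the idea.
import re

# Filler tokens: runs of "um", "uh", "er" with stretched letters.
FILLERS = re.compile(r"\b(?:u+m+|u+h+|e+r+)\b[,.]?\s*", re.IGNORECASE)
# Boilerplate attribution phrase, as quoted in the article.
ATTRIBUTION = re.compile(
    r"this dictation was dictated by dr\.?\s+\w+[,.]?\s*", re.IGNORECASE
)

def clean_draft(text: str) -> str:
    """Remove fillers and boilerplate irrelevant to the final report."""
    text = ATTRIBUTION.sub("", text)
    text = FILLERS.sub("", text)
    return re.sub(r"\s{2,}", " ", text).strip()
```

For example, `clean_draft("Ummmm, the patient errrrr presents with chest pain. This dictation was dictated by Dr. McCoy.")` reduces the draft to just the clinically relevant sentence.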
But even with all the success stories, Webber admits there are still some policy and procedure areas that need tweaking. “We need to figure out how editors can be fairly compensated, how to best determine which dictators qualify for this approach and how speech recognition solutions can be optimally utilized,” he stated.
Van Terheyden feels even with the “minor” tweaks, back-end speech recognition is in a league of its own. “It has a checkered history with many failed implementations littering its historical path. As a result, significant skepticism exists in the clinical and HIM community. But the current adoption rate of speech recognition proves the technology is industrial grade.”
Tricia Cassidy is an associate editor at ADVANCE.
MTs as Editors
With the use of speech recognition software on the rise, the role of the MT is changing. But even though new technology is shifting MTs’ job descriptions, it’s not kicking them to the curb. In fact, many vendors and organizations cherish their skilled MTs and are determined to utilize them as editors of all medical record documentation and dictation.
Read more about this new role in Part II: MTs as Editors in our July 31, 2006 issue.