Vol. 17 • Issue 23 • Page 25
A Perfect Pairing: Speech Recognition and EHRs
The marriage of speech recognition technology and the EHR can make for an ideal couple.
You’ve just been through the whirlwind of EHR implementation. After months or even years of work, you feel pretty good about the system you helped choose, and everyone seems pretty pleased with the results. Why, then, are only a fraction of the physicians taking advantage of the new technology?
Speech recognition technology integrated with an EHR may prove the incentive those finicky physicians need to get on board with the new technology. In a recent Medical Records Institute survey of 891 respondents, speech recognition was used most often either with automatic uploads to the EHR or with direct dictation into the EHR, leaving stand-alone speech recognition systems, well, standing alone at the rear.
ADVANCE took a look at the options that exist for speech recognition integration in the EHR as well as how integration could affect the roles that HIM professionals play in the medical records process.
Dictation in the Driver’s Seat
Every patient encounter produces a story. When was the last time you stopped in the midst of a story, trotted over to your computer to point here and click there, and then continued on with the story? Chances are the story would lose some of its appeal and character. Therein lie some of the problems with direct data entry, according to Joel Fontaine, director of business development with M*Modal, Pittsburgh.
In direct data entry, physicians add data elements to the record through a series of point-and-click moves. Rather than holding a smooth conversation with a transcriptionist, physicians must interrupt the patient story to add information into data fields such as allergies or medications. This method disrupts physician workflow, according to Fontaine, and can prove tedious and time-consuming for physicians, who often need to slow down in a fast-paced environment. “Most importantly, we’ve been told by physicians that direct data entry doesn’t capture the expressiveness to fully document clinical findings or reasoning. This further impairs physician acceptance of EHR. Clinical decision making is based on narratives, which recount significant and diverse events that shape the patient’s history,” Fontaine explained.
He hopes to see dictation in the driver’s seat, with structured and encoded data elements from dictation driven directly into the medical record by the patient’s story. “That really is our vision for the future,” Fontaine said. “Narratives from dictation are an essential and common approach for physicians to document a patient’s clinical profile. By using technology to structure and encode the data elements within the narrative and export them into the EHR you achieve the best of both worlds: fast and easy dictation and computable records within an EHR.”
Important to that vision is the validation factor. The information driven into the medical record through dictation must be confirmed by the physician or by an editor to ensure that what goes into the record is correct, Fontaine noted. M*Modal validates documents through a simultaneous process. “Whether it’s a real-time review of a draft document by the physician or back-end editing by the MT, the validation of the discrete data elements occurs at the same time so you not only have the finished report, but you have validated the information transferred into the EHR, all done in one process,” Fontaine said.
Enter the MT. Formerly the creator of the text document, the MT experiences a role change when speech recognition is implemented. M*Modal refers to the new role as medical language specialist (MLS). While for years MTs have heard that their jobs would be replaced by speech recognition, the new role of MLS makes them even more important in the documentation process, according to Victoria MacLaren, director of training services with M*Modal. At first, MTs are a little leery about making the transition to MLSs. “No one wanted to work with it; everyone hated it,” MacLaren admitted. “They thought it was going to replace them. I think a lot of the attitude was why should I work to make this system better just so that it can take over my job in the long run and replace me.”
With some training, MLS newbies end up embracing the technology, from what MacLaren has witnessed. “They don’t ever want to go back to transcription. We hear that over and over again,” MacLaren said. “The technology is not meant to replace them. It’s meant to enrich their professional career and really focus more on patient care and making them even more important to the documentation process and more essential than they may have been in the past.”
The Direct Approach
Another approach to speech recognition integration with the EHR is a front-end approach, in which the physician dictates directly into the EHR. Dictaphone, a division of Nuance, Stratford, CT, uses this technology in its Dragon NaturallySpeaking (DNS) Medical product. Frederik Brabant, MD, senior product manager with Dictaphone, has also heard complaints about interruptions to physician workflow when physicians must point and click their way through an EHR system. “They say, well in the past we would just pick up the phone and start dictating, and now we need to look for a PC, log in and start typing with two fingers,” Dr. Brabant explained.
With DNS, the physicians can dictate and capture all the complexities of the patient-physician encounter directly into the medical record with no pointing and clicking required. “In places where the physician would typically type, the physician actually dictates and the words appear right on the screen, right into the medical record just as if they’d been typed,” said Keith Belton, senior marketing director of Dictaphone.
EHR implementation and speech recognition implementation can happen at different times, and DNS is compatible with all Windows- or Citrix-based EHR systems. According to Belton, many customers in the middle of EHR implementations will look to DNS to speed up the adoption of the EHR. “You have an increased amount of users who start using an EHR if they can use speech. It facilitates the use of an EHR,” Dr. Brabant said.
The EHR-DNS combo also eliminates or reduces transcription costs because physicians dictate directly into the record, and the document is available instantly for other physicians to view. Belton also mentioned that because physicians can dictate their own words and capture the complexity of a case, revenue can be maximized; at times, physicians can justify slightly higher charges per code. “The more detailed the documentation the stronger the case for a higher charge,” Belton explained. The documentation can back up the need for a higher-level code, which may turn a $75 visit into one reimbursed at $120, for example, because the complexity of the review of systems, patient plan and other information can be quickly captured in the record. “It’s perfectly legal, it’s just that Dragon gives them that flexibility,” Belton said.
The Speech Secret
In some cases, speech documents are edited by an MT and then uploaded directly into the EHR system. At Dean Health System in Madison, WI, HIM Supervisor for Transcription and Abstraction Laura Cantrall-Cordio looked to Dictaphone’s EXSpeech from Nuance and found a solution that keeps physicians across the 600-provider system dictating as usual. The physicians who are put onto the speech system don’t notice any difference, and their workflow remains exactly the same. The MTs are the ones who experience the differences. “The physicians pretty much dictate as usual into the telephone into our Dictaphone system, and then it runs through the speech engine. It’s then presented to the transcriptionist, and she’s presented with the voice as well,” Cantrall-Cordio explained.
The MT signs off on the document after editing it, and the document can then be uploaded and viewed in the Epic Systems EHR that is in place at Dean Health. “Basically, as soon as the transcriptionist or editor signs off on the document, it is available to view in Epic,” said Celia Fine, medical transcription quality analyst/trainer at Dean Health. “It happens almost instantly.”
The MTs at Dean seem to like their new roles as editors, although there was some hesitation in the beginning. “I love it. I didn’t want to take the training and I was very reluctant, but after I had it and have now worked in it for over a year, I really like it,” Marcia Johnson, general MT, said.
What the Future Holds
As more facilities implement speech recognition integrated with EHRs, keeping physician workflow the same or similar to what it was before the implementation is key to the success of the system. “Our objective is to not only produce efficiencies in generating a very user-friendly draft document from a physician who doesn’t even need to know the technology’s in place, but to be able to use the discrete clinical content from the dictation to drive many other outputs with it besides generating a text,” Fontaine said.
The one-two punch of the EHR and speech recognition can also help standardize content within a clinical document. Physicians often use different words with the same intent: for example, they may dictate “history,” “HPI” or “history of present illness,” or they may refer to “findings” or “results.” Technology allows the health care provider to implement best practices by coding all common terms alike, normalizing the sections or subsections of the medical report and enabling comprehensive retrieval of this information.
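As a purely illustrative sketch of that normalization idea, the snippet below maps the many headings a physician might dictate onto one canonical section label. The alias table and function name are hypothetical, not any vendor’s actual implementation:

```python
# Hypothetical alias table: many dictated headings, one canonical label.
SECTION_ALIASES = {
    "hpi": "history_of_present_illness",
    "history": "history_of_present_illness",
    "history of present illness": "history_of_present_illness",
    "findings": "results",
    "results": "results",
}

def normalize_section(heading: str) -> str:
    """Return the canonical section label for a dictated heading.

    Unknown headings pass through in lowercased form rather than
    being dropped, so no dictated content is lost.
    """
    key = heading.strip().lower().rstrip(":")
    return SECTION_ALIASES.get(key, key)
```

With every report section filed under one label, a query for the history of present illness retrieves it whether the physician said “history,” “HPI” or the full phrase.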
Fontaine went on to say that the Health Level 7 Clinical Document Architecture (HL7 CDA) provides that commonality of terms within a document, and serves as the basis to enlarge and enrich the flow of data into the EHR.
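To make that concrete, here is a minimal sketch of a CDA-style section that pairs a standard LOINC code with the dictated narrative. It is a simplified illustration, not a conformant CDA document (a real one adds a full header, namespaces and template identifiers), and the helper function is hypothetical:

```python
import xml.etree.ElementTree as ET

def build_section(loinc_code: str, title: str, narrative: str) -> ET.Element:
    """Build a simplified CDA-style <section>: a coded heading
    plus the free-text narrative dictated by the physician."""
    section = ET.Element("section")
    ET.SubElement(section, "code", code=loinc_code,
                  codeSystem="2.16.840.1.113883.6.1",  # OID for LOINC
                  displayName=title)
    ET.SubElement(section, "title").text = title
    ET.SubElement(section, "text").text = narrative
    return section

# LOINC 10164-2 identifies a History of Present Illness narrative.
hpi = build_section("10164-2", "History of Present Illness",
                    "Patient presents with three days of cough...")
print(ET.tostring(hpi, encoding="unicode"))
```

Because the section carries a code alongside the narrative, the EHR can file and retrieve it by code while the physician’s own words remain intact.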
One thing’s for sure: EHRs and speech recognition look to be together for the long haul. Physicians like that using speech with an EHR means little disruption in workflow and usually signals an increase in productivity, along with a list of other benefits. Dr. Brabant said there has been a steep increase in the number of DNS Medical licenses in the last year. The increase is about 10 to 15 percent, Belton said, and it’s almost always for use of DNS with an EHR.
Lynn Jusinski is an assistant editor with ADVANCE.