Recognizing the power of speech
A discussion examining trends relative to voice-driven documentation.
By Jason Free, Features Editor, May 2014
Speech recognition has come a long way from its infancy in the 1950s, when Bell Laboratories designed the “Audrey” system, which could only decipher numerical digits spoken by a human voice. Today, we encounter sophisticated voice technology when we call our bank, seek directions from our car navigation systems, ask Siri for a baseball score or have countless other everyday interactions with computer systems. Few industries place more demand on speech recognition than healthcare. To learn more about speech recognition as it is used in healthcare, I spoke with Keith Belton, Senior Director of Clinical Documentation Solutions Marketing at Nuance Communications.
What do we mean when we talk about clinical documentation?
Keith Belton: There is the saying, “In medicine, you don’t get paid for what you do. You get paid for what you document.” When we talk about documentation, we are talking about anything from a surgical note, to a progress note that is done in a hospital for a patient, to an encounter note that may be done by an ER physician who sees the patient and then sends them home, or a primary care physician in their office who needs to provide some background information, or a radiologist who reads an image, and on and on. So when we say documentation, we are talking about any sort of note about a patient encounter.
As opposed to handwriting, speech has long been considered the most natural way for clinicians to document these types of patient notes; however, many have not felt comfortable with the technology for one reason or another. Today, we encounter many physicians who have skewed expectations of speech recognition, and when they work with the most recent technologies, they are very surprised by how easy and reliable they are to use.
What are some of the pain points hospitals feel that lead them to speech recognition products and away from the traditional approach of writing notes by hand?
Belton: Specific pain points vary from hospital to hospital, but in general, I would have to say that just the sheer volume of documentation that is required today is reason enough to consider speech recognition as a substitute for handwriting.
On top of the issue of volume, hospitals now have strong incentives to create documentation processes that are as efficient and accurate as possible. The American Recovery and Reinvestment Act provided billions of dollars’ worth of incentives for physicians and hospitals to adopt an electronic health record (EHR). The American healthcare industry is probably about halfway through the process of moving from paper-based records to electronic records. A problem many are facing with this changeover is that so much information is, in effect, locked away and only accessible via a computer. The input and output of clinical notes in a computer can be very time-consuming and prone to human error. Speech recognition is one way for physicians to quickly, efficiently and completely document their notes without having to worry about the nuances, so to speak, of the computer system they are working on.
In healthcare, we also have a push toward ICD-10, the new coding schema that is going to triple the number of diagnosis codes. Congress just voted to push the compliance date for ICD-10 to, at the earliest, Oct. 1, 2015. A transition this year would have put a lot of pressure on physicians, as so many smaller practices were left flat-footed and unprepared to meet the demands of the new coding system. Speech recognition is going to be one tool these practices will be able to consider as they prepare for the next deadline.
So there are a lot of pressures on doctors to document their notes in an accurate and timely fashion. However, physicians did not go to medical school to become typists or data entry staff. They want to focus on their patients, not worry about becoming computer programmers, and speech recognition products can help them a great deal toward this goal.
What are some of the solutions Nuance provides to help with all the issues you described?
Belton: Within our Dragon Medical 360 solutions, we can offer a great deal of assistance. One solution we offer is called PowerScribe 360. It allows a radiologist to dictate the results of X-rays and MRIs, and it is used to produce upward of 55 percent of all radiology reports in the United States. It allows someone to basically pick up a microphone, while they are looking at the image, and dictate the results of their work. The dictation is automatically transcribed in real time into the radiology information system (RIS) or the picture archiving and communication system (PACS). It eliminates manual transcription. The report is available immediately to the referring physician. Whether it is a primary care physician, the ER physician, the patient or the surgeon who may have to make a rapid decision on whether or not to operate, the documentation is ready to use.
That is really the heart of what we do. We offer reliability and flexibility, two hard-to-come-by attributes when dealing with medical documentation. With our Dragon Medical 360 solutions, physicians can choose the way they want to document care. They can dictate in real time with Dragon Medical 360 Network Edition, which allows doctors to dictate and, as they speak out loud, the note created goes directly into an EHR, like Cerner, Epic or Allscripts. Or, if they are more old school and would rather dictate and not be responsible for the editing themselves, they can use our Dragon Medical 360 eScription product, which is traditional transcription on steroids. It is aimed at meeting the EHR initiatives outlined by the federal government. As I mentioned, the government has provided incentives for hospitals and practices to deploy EHRs as part of what it calls Meaningful Use. The challenge has been that many hospitals have eliminated transcription services, so physicians are forced into becoming typists. With eScription, when a physician dictates into an iPhone, a wall phone or a microphone plugged into a computer, the product captures the physician’s audio file and sends it off, along with the patient’s demographics, to a transcription pool. Before the transcriptionist gets the documentation, our speech recognition solution makes a first-pass draft and reformats the document to meet the standards of the hospital. Then the transcriptionist listens to the audio file to make any needed minor edits. After the transcriptionist makes those changes, the physician signs off on the document. That’s it.
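The dictation-to-signature workflow described above can be sketched as a simple pipeline. This is purely a hypothetical illustration of the stages (first-pass draft, transcriptionist edit, physician sign-off); none of these class or function names come from Nuance's actual products or APIs.

```python
from dataclasses import dataclass

# Hypothetical model of the workflow stages described in the interview;
# all names here are illustrative, not part of any real Nuance API.

@dataclass
class DictationJob:
    audio_file: str          # physician's captured audio
    demographics: dict       # patient demographics sent along with it
    draft: str = ""          # speech-recognition first-pass draft
    final_note: str = ""     # note after transcriptionist edits
    signed_off: bool = False # physician sign-off flag

def first_pass_draft(job: DictationJob) -> DictationJob:
    """Speech recognition produces a formatted first-pass draft."""
    # Stand-in for a real recognizer, which would decode the audio.
    job.draft = f"[first-pass draft transcribed from {job.audio_file}]"
    return job

def transcriptionist_edit(job: DictationJob, edits: str) -> DictationJob:
    """A transcriptionist listens to the audio and makes minor edits."""
    job.final_note = edits or job.draft
    return job

def physician_sign_off(job: DictationJob) -> DictationJob:
    """The physician reviews the finished note and signs off."""
    job.signed_off = True
    return job

# Walk one job through all three stages.
job = DictationJob("encounter_0421.wav", {"mrn": "12345"})
job = first_pass_draft(job)
job = transcriptionist_edit(job, "Patient seen for follow-up; no acute distress.")
job = physician_sign_off(job)
print(job.signed_off)  # True once the note is complete
```

The point of the pipeline shape is that each stage hands off a complete, self-describing job, which mirrors how the draft, audio and demographics travel together from the physician to the transcription pool and back.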
We are essentially giving our physicians freedom of choice based on whether they prefer documenting in real time with the Dragon Medical Network Edition or using a transcription pool with the Dragon Medical eScription product. The industry response to these solutions has been great: about 150,000 practitioners use the eScription product in the United States, and around 200,000 practitioners use the Dragon Medical Network Edition product.
Where do you see speech recognition going in the future?
Belton: The future is very exciting, and we are looking forward to the next steps on the horizon.
Nuance is in a unique position. We have the largest database of intellectual property in speech, with upward of 3,000 speech patents, and through a strategic partnership with IBM we essentially acquired all of IBM’s intellectual property in speech. We feel we have the most robust set of speech products. There are 500,000 physicians in the United States using some flavor of our speech recognition. This work is shaping a number of new dimensions for future speech recognition applications.
One step ahead is the move to stronger intelligence. Applying clinical language understanding to improve the physician experience during the documentation process is clearly a future direction for us.
Another element involves a combination of changes in the devices that physicians are using and in where their documentation intelligence is being stored. Physicians are mobile. A study by Manhattan Research found that 95 percent of all physicians have a PDA or a smartphone. Physicians are coming into hospitals and telling their Chief Information Officers, “Look, I have two iPhones and an Android. Why can’t I access patient information on those mobile devices? Why can’t I dictate onto those mobile devices?” Because of these trends, Nuance is aggressively moving our speech solutions toward the cloud. This move will open many exciting possibilities in terms of speech recognition.
Editor’s note: You can read the remainder of my conversation with Keith Belton on the HMT website within our “Online Only Features” section.