Five ways speech will transform medicine
Recent advancements will expand speech recognition opportunities.
By Dr. Nick van Terheyden, November 2012
A substantial amount of a clinician’s time is consumed by administrative tasks. While important to the care process, these tasks are invisible to patients and payers and add to the burden on a resource-challenged healthcare system. In a recent study published in the Archives of Internal Medicine, the author documented 70 orders placed; 30 prescriptions written; 19 clinical notes reviewed, edited and signed; and 15 dictations – totaling close to 60 minutes of dictation each day.
To maximize efficiency and derive the most value from our resources, we need to minimize the effort and time these tasks require. Speech recognition and its close cousins, natural language processing (NLP) and clinical language understanding (CLU), are amplifying existing technological innovations to let clinicians focus more on direct patient care and less on documentation.
Here are five innovations coming to healthcare that will take the meaning of efficiency and documentation to a whole new level with core speech and NLP/CLU technology:
1. Cloud-based speech recognition
A recent survey by Spyglass Consulting Group found that 98 percent of physicians use mobile devices in their personal and professional lives. Speech recognition has long been tied to the desktop, but healthcare is now moving to a more mobile platform, freed from the shackles of the desktop or computers on wheels (COWs).
This is not just about convenience for the clinical team, but also about the move to more direct bedside capture of information, reducing the opportunity for error or omission. Until now, speech recognition on mobile devices has been difficult, but the move to the cloud opens the door to an untethered clinical workflow. Any mobile-connected device with a microphone now offers both on-the-go access to current patient clinical data and the ability for the clinician to record his or her notes. The medical decision-making process is now directly connected through the device to the patient record.
2. Navigation and control using speech intelligence
Building on the availability of speech in the cloud, physicians can use voice to navigate clinical systems. Apple’s Siri showed the potential of speech by letting the technology use location, date, time, and calendar and contact data as inputs to voice commands. By accessing similar data in a clinical setting, we can make some of the more time-consuming administrative tasks more efficient. Speech intelligence allows a simple command, such as, “Prescribe enalapril 10 mg daily, metformin 500 mg daily and order a hemoglobin A1C test,” to enter both prescriptions and place the lab order for the patient.
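To make the idea concrete, here is a minimal sketch of what such a speech-intelligence layer must do behind the scenes: split a dictated sentence into clauses and map each clause to a structured order. The grammar, field names, and `parse_command` helper are invented for illustration, and the sample doses are placeholders – real systems use far richer trained models, not keyword patterns.

```python
import re

# Hypothetical clause grammar for a dictated order sentence.
# A prescription clause looks like "<drug> <dose> <unit> <frequency>"
# (optionally preceded by "prescribe"); a lab clause looks like
# "order a/an <test> test".
DRUG = re.compile(r"(?:prescribe\s+)?([a-z]+)\s+([\d.]+)\s*(mg|mcg|g)\s+(daily|bid|tid)", re.I)
TEST = re.compile(r"order\s+(?:a|an)\s+(.+?)\s+test", re.I)

def parse_command(text):
    """Split a dictation into clauses and map each to a structured order."""
    orders = []
    for clause in re.split(r",|\band\b", text):
        clause = clause.strip()
        m = TEST.search(clause)
        if m:
            orders.append({"type": "lab_order", "test": m.group(1)})
            continue
        m = DRUG.search(clause)
        if m:
            drug, dose, unit, freq = m.groups()
            orders.append({"type": "prescription", "drug": drug.lower(),
                           "dose": f"{dose} {unit}", "frequency": freq.lower()})
    return orders

cmd = ("Prescribe enalapril 10 mg daily, metformin 500 mg daily "
      "and order a hemoglobin A1C test")
for order in parse_command(cmd):
    print(order)
```

The point of the sketch is the output shape: once the utterance is reduced to typed records, each record can be routed to the e-prescribing or order-entry system without the clinician touching a keyboard.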
3. Cloud-based medical intelligence
We’ll start to see intelligent voice solutions equipped with the ability to access patient medical records, the context of the disease and knowledge base systems, including evidence-based medicine (EBM) and clinical decision-support systems (CDSS). As medicine becomes increasingly complex, it grows harder for clinicians to process all the available data and apply it at the point of care. For example:
One in eight older Americans suffers from Alzheimer’s disease, yet an April 2012 study in the Journal of Neuropathology and Experimental Neurology found that between 17 and 30 percent of those diagnosed with Alzheimer’s disease had been misdiagnosed and had other conditions.
Clinicians capture the data using speech, and the cloud-based medical intelligence analyzes the patient data in the context of extensive knowledge bases, offering relevant insights and supporting information to both clinician and patient.
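A minimal sketch of that decision-support step, under stated assumptions: captured orders are checked against a small rule base before being committed, and any rule that fires raises an alert back to the clinician. The record fields and the rule function are hypothetical stand-ins for a real CDSS/EBM service, though the underlying clinical point (avoiding metformin at severely reduced kidney function) reflects widely published guidance.

```python
def check_orders(patient, orders, rules):
    """Return alerts raised by any rule that fires for this patient (sketch)."""
    alerts = []
    for order in orders:
        for rule in rules:
            msg = rule(patient, order)
            if msg:
                alerts.append(msg)
    return alerts

def metformin_renal_rule(patient, order):
    # Widely published guidance: avoid metformin at severely reduced
    # kidney function (eGFR below roughly 30 mL/min/1.73 m^2).
    if order.get("drug") == "metformin" and patient.get("egfr", 100) < 30:
        return f"ALERT: metformin ordered with eGFR {patient['egfr']} - review renal dosing"
    return None

patient = {"id": "12345", "egfr": 24}      # illustrative record, not real data
orders = [{"drug": "metformin", "dose": "500 mg"}]
for alert in check_orders(patient, orders, [metformin_renal_rule]):
    print(alert)
```

The design choice worth noting is that the rules run against the patient record automatically at the moment of ordering – exactly the point-of-care application of knowledge that the paragraph above says clinicians struggle to do unaided.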
4. Medical intelligence from the narrative note
Today, the vast majority of clinical information is generated through clinical narrative dictation, processed either by background speech recognition or by front-end speech recognition at the point of input. CLU technology takes patient-centric clinical information capture to a new level by extracting data directly from the narrative dictation, turning regular text-based information into clinically actionable data that can drive clinical workflow. As clinicians use intelligent speech interactions, their medical notes and clinical decision making are captured and understood, offering the potential for real-time clinical support and automated workflow based on the patient’s clinical data.
5. Analytics, alerts and tracking from narrative dictation
Medical intelligence derived from the narrative dictation using CLU can then be sent to the clinical data repository and linked to multiple other sources of data (e.g., laboratory, pathology, imaging and other diagnostic services). When linked to appropriate sources, clinicians and healthcare facilities obtain a complete picture of individual patient data and aggregated population and disease trends, realizing the potential of “big data.” With the narrative decoded and fed into an analytics tool, clinicians gain a full view of each patient.
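Once CLU output lands in a repository as structured records, the population-level analytics described above become simple reductions over those records. The record layout, field names, and threshold below are invented for illustration, assuming each row is one patient’s extracted problem and lab result.

```python
from collections import Counter

# Illustrative repository of CLU-derived records linked to lab data.
repository = [
    {"patient": "a", "problem": "hypertension",    "a1c": 8.9},
    {"patient": "b", "problem": "type 2 diabetes", "a1c": 7.2},
    {"patient": "c", "problem": "type 2 diabetes", "a1c": 9.4},
]

# Disease-level trend: prevalence of each coded problem.
prevalence = Counter(rec["problem"] for rec in repository)

# Patient-level alerting: flag poorly controlled diabetes (A1C > 9.0,
# an illustrative cutoff, not a clinical recommendation).
flagged = [rec["patient"] for rec in repository if rec["a1c"] > 9.0]

print(prevalence.most_common())
print(flagged)
```

The same two-line pattern – aggregate for trends, filter for alerts – scales from a toy list to a population database, which is the “big data” promise of decoded narrative.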
Today, speech recognition offers efficiencies, but recent technological advancements will expand the horizon of medical opportunity. Speech recognition will change the human/computer interface by reducing the administrative burden, decreasing costs and, most importantly, increasing the efficiency and safety of healthcare delivery.
About the author
Nick van Terheyden, M.D., is CMIO at Nuance.
Tags: Speech Recognition