According to a recent report from HIMSS Analytics, speech recognition technology shows some of the highest growth potential in the hospital IT market. The technology ranks among the top two technologies studied, with a 21.5 percent compound annual growth rate and a comparatively low current market penetration of 47 percent. The study leveraged the HIMSS Analytics Database to profile the use of 22 support service applications in hospitals and their tethered ambulatory and home health agencies.
Some industry experts have noted that speech recognition products for healthcare professionals have been around for many years, and have asked why physicians and other care providers are now adopting these products so much faster than in the past. Has the technology improved to the point that speech recognition products are now widely understood to be essential productivity tools for efficient clinical documentation? Or are other factors at play as well?
To answer these questions, we need to look more broadly at recent trends in physician clinical documentation. Ever since the HITECH Act made billions of dollars available to hospitals and providers for adopting "meaningful use" electronic health record (EHR) technology, we have seen accelerated growth in EHR deployments. The Department of Health and Human Services (HHS) recently announced that it has exceeded its goal of having 50 percent of physician offices and 80 percent of eligible hospitals using EHRs by the end of 2013. Physician adoption of EHRs began a steady climb about two years ago.
Along with the accelerated adoption of EHR systems, however, we have also seen what some observers have called "rampant physician dissatisfaction" due to workflow disruptions and productivity losses. Many physicians find it challenging to maintain their historic levels of efficiency while struggling with the usability issues of EHR user interfaces. Clinical documentation modules of EHRs, in particular, have been found to cause major productivity losses if not implemented carefully. The increasing trend toward data-driven healthcare has led to a misguided focus on keyboard- and mouse-driven, templated structured data entry that overburdens the physician with a plethora of check boxes, radio buttons and drop-down menus. Heavy reliance on standard EHR templates and copy-and-paste tools also carries a significant risk of degraded documentation quality.
Many healthcare organizations have recognized these pitfalls of overly structured clinical documentation, which has led to a renaissance of the physician narrative in EHR-based clinical documentation. Patient histories, assessments and treatment plans all benefit vastly from a thorough physician narrative that explains the physician's thought process and captures the different levels of certainty inevitably associated with different diagnoses and treatment options. These aspects simply cannot be captured via structured templates or mapped to structured data models without a loss of information. It is no wonder, therefore, that the hospitals achieving the highest adoption rates for their EHR physician documentation modules have implemented a healthy mix of structured data entry and free-form physician narrative.
This is where speech recognition technology comes into the picture. There is no faster or more natural way for a physician to tell a patient story than to narrate it verbally. Speech recognition technology has matured to the point where most physicians can dictate such narratives at blazing speeds, with highly accurate automatic transcription happening in real time. Moreover, advances in technology have led to a new generation of speech understanding (SU) systems that are capable of understanding the meaning of a physician narrative, not merely transcribing the dictated words. Narrative documentation has thus become actionable and can be analyzed alongside the structured data captured by EHR systems. Clinically focused natural language understanding (NLU) technology can now identify key patient information in free-form narrative, analyze it in the context of all other available patient health information, and ultimately help healthcare organizations make more informed decisions across a mix of structured and unstructured data.
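To make the idea of "actionable narrative" concrete, here is a minimal toy sketch of concept extraction from dictated free text. This is not M*Modal's technology or any real clinical NLU pipeline: the phrase dictionary, the cue words, and the attached codes are all illustrative assumptions (production systems use statistical models and full terminologies such as SNOMED CT and RxNorm rather than a hand-written lookup table).

```python
import re

# Toy phrase-to-concept dictionary -- illustrative only. A real clinical
# NLU system maps narrative to standard terminologies statistically.
CONCEPTS = {
    "shortness of breath": ("symptom", "SNOMED:267036007"),
    "hypertension": ("diagnosis", "SNOMED:38341003"),
    "lisinopril": ("medication", "RxNorm:29046"),
}

# Hedge words that signal the physician's level of certainty -- a small,
# assumed sample; real systems model negation and uncertainty far more richly.
UNCERTAINTY_CUES = ("possible", "likely", "rule out", "suspected")

def extract_concepts(narrative: str):
    """Return (phrase, category, code, certainty) tuples found in the text."""
    text = narrative.lower()
    results = []
    for phrase, (category, code) in CONCEPTS.items():
        for match in re.finditer(re.escape(phrase), text):
            # Look for a hedge word in a short window before the phrase,
            # so "possible hypertension" is flagged as uncertain.
            window = text[max(0, match.start() - 30):match.start()]
            certainty = ("uncertain"
                         if any(cue in window for cue in UNCERTAINTY_CUES)
                         else "asserted")
            results.append((phrase, category, code, certainty))
    return results

note = ("Patient reports shortness of breath. "
        "Possible hypertension; continue lisinopril.")
for item in extract_concepts(note):
    print(item)
```

Even this crude sketch shows why narrative plus understanding beats either alone: the dictated story stays intact for human readers, while the extracted tuples can flow into the same analytics that consume the EHR's structured fields, including the certainty levels that check boxes cannot express.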
About the author
Juergen Fritsch, Ph.D., is chief scientist, M*Modal.