By Kayla Matthews, Productivity Bytes
Health professionals increasingly use voice recognition and artificial intelligence-backed software for clinical documentation, yet these programs produce errors in roughly seven percent of dictated words. While that may seem like a small margin of error, it still leaves room for compromises in healthcare quality and patient safety.
Up to 98,000 people die each year from preventable medical errors, some of which may be linked to critical inaccuracies in documentation. Mistakes like incorrect drug prescriptions and diagnoses are much more likely to occur with AI-powered technology. Before it can fully replace human dictation and transcription, innovators must examine existing methods and work out a number of kinks.
Electronic health record systems are built to share data easily between healthcare providers and organizations. Yet how people document that data varies. Since the average physician types only about 30 words per minute, most prefer to dictate their medical documentation.
One method of medical dictation involves the use of front-end speech recognition software. The physician's words are transcribed in real time by AI and appear as text on the screen. Ideally, the doctor corrects each sentence before continuing on.
A traditional alternative involves a human transcriptionist who types the physician's dictation as they speak. A transcriptionist might also work from recordings.
Voice Recognition Vs. Transcription
The comparison in accuracy of these two methods is crucial since the smallest error can result in injury or death. In nearly every study, speech recognition had a higher rate of error than transcription.
AI voice recognition accuracy ranges from 88.9 to 96 percent, though vendors often market these products as 99 percent accurate. Human dictation and transcription, by contrast, reach a 99.6 percent accuracy rate, significantly reducing the risk of error and malpractice. For clinical documentation, speech recognition has also proven slower than keyboard-and-mouse entry.
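To make the accuracy figures above concrete, the standard way to score a transcript is word error rate (WER): the number of word substitutions, deletions and insertions needed to turn the system's output into the reference, divided by the reference length. The sketch below is a minimal, generic WER calculation; the clinical sentences in it are invented for illustration and do not come from any study cited here.

```python
# Minimal word-error-rate (WER) sketch.
# WER = (substitutions + deletions + insertions) / reference word count,
# computed here as a word-level Levenshtein distance.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

# Invented example: a single misheard number in a seven-word order.
ref = "administer 50 mg of metoprolol twice daily"
hyp = "administer 15 mg of metoprolol twice daily"
print(f"WER: {wer(ref, hyp):.1%}")  # one substitution in seven words -> 14.3%
```

The point of the example: one wrong word in a short drug order already yields a double-digit error rate, which is why the gap between roughly 93 percent and 99.6 percent accuracy matters clinically.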
Voice recognition does offer a few advantages, however. It improves radiology report turnaround times in community-based hospital practices: because the software transcribes dictation instantly, a report can reach the radiology lab and draw a quick response. Yet if those reports are inaccurate, they are useless.
Another advantage of using AI software is that physicians who use it can attend to more patients and earn more money. It also reduces expenditure by eliminating the need for a transcriptionist. Once again, though, one must ask if the potential benefit is worth the risk of error.
In most cases, a physician who implements voice recognition still needs to either re-edit the documentation personally or hire a transcriptionist to edit it in real time. Either way, the doctor loses time fixing records or must pay a transcriptionist to ensure everything is accurate and properly organized.
Where Technology Falls Short
The most apparent issue with voice recognition and AI-backed programs is their lack of contextual understanding. They hear and transcribe words individually, with no surrounding context to shape which term they choose. As a result, uncommon or longer vocabulary is often misrecognized and mistranscribed.
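A toy sketch can show why word-by-word transcription fails where context would succeed. The corpus, candidate words and helper function below are all invented for illustration; the idea is simply that even a crude bigram count over prior text can use the preceding word to choose between acoustically similar medical terms, which a purely word-level recognizer cannot do.

```python
# Toy illustration (invented corpus and candidates): using the preceding
# word to disambiguate acoustically similar terms via bigram counts.

from collections import Counter

# Invented mini-corpus standing in for previously reviewed clinical notes.
corpus = (
    "elevated blood pressure consistent with hypertension . "
    "patient reports dizziness from hypotension after standing . "
    "chronic hypertension managed with medication ."
).split()

# Count how often each word follows each other word.
bigrams = Counter(zip(corpus, corpus[1:]))

def pick(prev_word: str, candidates: list[str]) -> str:
    """Choose the candidate that most often follows prev_word in the corpus."""
    return max(candidates, key=lambda w: bigrams[(prev_word, w)])

# "hypertension" and "hypotension" sound alike; the preceding word
# "with" (as in "consistent with ...") breaks the tie here.
print(pick("with", ["hypotension", "hypertension"]))  # -> hypertension
```

Human transcriptionists carry far richer context than a bigram count, of course, which is the gap the next paragraph describes.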
Human medical transcriptionists, however, have extensive background knowledge on anatomy, medications, maladies and testing — not to mention a basic understanding of the English language — which helps guide their sentence structure and word choices. This knowledge makes their documents more accurate and comprehensible.
Moreover, physicians often want to add or remove a note in a certain section of a patient's health record. This task requires numerous mouse clicks and movements from the transcriptionist, which a voice recognition program largely cannot accomplish independently. Most doctors don't want to go back and do this themselves, so they hire a transcriptionist to fix the AI-generated text or to transcribe it from the start.
The Future Of Smart Dictation
The prevalence of dictation errors in health records and medical documentation, combined with a lack of physician review, suggests developers should focus their efforts on integrating both the programming and sufficient review into the existing clinical workflow.
It also implies the need for improved contextual understanding to produce more accurate transcriptions. Future work may focus on systems that recognize repeated phrases and sentence structures based on the user's own vocabulary and grammatical mannerisms.
Health systems that adopt AI technology without implementing review safeguards open themselves up to liability, including costly medical malpractice suits. While smart dictation can aid medical workers and make processes more efficient, it cannot fully replace humans — yet.
When will AI software fully replace humans? Even for experts, the answer is unclear. Smart dictation must be error-free before it can be fully implemented in healthcare offices around the world, a task that will take a while to accomplish.
Until smart robots become available for widespread use, humans will be needed to transcribe and edit physician dictation.
About The Author
Kayla Matthews is a MedTech writer whose work has appeared on HIT Consultant, Medical Economics and HITECH Answers, among other industry publications. To read more from Kayla, please connect with her on LinkedIn, or visit her personal tech blog at https://productivitybytes.com.