By Justin Mardjuki, VP Product Marketing at LifeLink
Futurists paint a bright picture of smart clinical technology: ask a chatbot for the right ACE inhibitor dose for a patient with moderate aortic stenosis, and the all-knowing bot will spit out the right answer, based on millions of patient cases. Appealing? Absolutely. Ready for prime time? Definitely not.
Many industries have made technological advances in Natural Language Processing (NLP) and deployed conversational solutions that can learn and improve over time. Companies like Kayak and Bank of America have launched digital assistants that answer frequently asked questions and attempt to offload calls from overworked support teams. Some of these digital assistants, like Bank of America’s Erica, have seen significant use and traction, saving millions in support costs. Healthcare, despite spending billions of dollars on repetitive administrative tasks, still struggles to deploy chatbots that can answer basic patient questions.
What lessons can healthcare learn from other industries? And how can healthcare make NLP technology work within its clinical and regulatory frameworks? Here are three barriers to NLP success in healthcare.
1. Intent Parsing Is Risky
NLP frameworks, like Amazon’s Lex and Google’s Dialogflow, take a user’s input, parse it into entities, and try to ascertain the user’s intent based on the entities they recognize. Once the NLP engine makes its highest-confidence guess about what the user was asking, it returns the answer to that question.
Intent matching is never 100% accurate. The machine may misinterpret the user’s question and return the answer to a completely different question. Outside of healthcare, the result of misinterpretation may be relatively innocuous (“Alexa, set a timer for 30 minutes, NOT 13 minutes!”). In healthcare, misinterpretation opens organizations up to heightened adverse event risk (for CROs and pharmaceutical manufacturers) and clinical interpretation risk (for provider systems).
Imagine: a patient asks a chatbot “Is it normal to feel light-headed after using my inhaler?” The machine has been trained to recognize the word ‘light’, and answers: “Yes. Please store your inhaler out of direct light.” Oops. Turns out the chatbot wasn’t adequately trained on questions related to lightheadedness and dizziness.
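One common safeguard, sketched below with hypothetical intents and thresholds (not taken from any specific vendor’s API), is to refuse low-confidence intent matches and hand the patient to a human rather than guess:

```python
# Minimal sketch of confidence-gated intent matching. Real NLP
# frameworks return similar per-intent confidence scores; the
# intent names and threshold here are illustrative.

def route(intent_scores, threshold=0.85):
    """Return the matched intent only if confidence clears the bar;
    otherwise escalate to a human agent instead of guessing."""
    intent, score = max(intent_scores.items(), key=lambda kv: kv[1])
    if score >= threshold:
        return ("answer", intent)
    return ("escalate", None)

# A lightheadedness question that weakly matches a 'storage' intent
scores = {"storage_instructions": 0.41, "side_effects": 0.38}
print(route(scores))  # -> ('escalate', None): too ambiguous to answer
```

The right threshold is a clinical and compliance decision, not just a technical one: a higher bar means more hand-offs to live agents, but fewer confidently wrong answers.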
The ongoing risk of question misinterpretation is enough to spook even forward-leaning compliance and risk teams.
Leading NLP models built by companies like Amazon and Google are trained on millions of examples. When Google’s NLP technique, Bidirectional Encoder Representations from Transformers (BERT), was published in 2018, it achieved state-of-the-art marks on several industry-recognized NLP benchmarks like GLUE (General Language Understanding Evaluation). Given the complexity and investment required, healthcare organizations should look to adapt widely used, commercially available models rather than build their own.
Healthcare organizations should wrap a widely used, powerful NLP model with features that help guard against known misinterpretation risks. This may include some of the following features:
- Only present answers after the patient has verified their identity
- Filter relevant answers based on the patient’s specific state (e.g. pre-appointment, post-appointment)
- Prevent the chatbot from collecting a potential adverse event until a live agent is ready to chat
- Enable human support teams to refresh approved content in real-time
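A minimal sketch of such a wrapper is below; the content, state names, and function signatures are all illustrative, not a real vendor API. The idea is that the NLP model never answers directly: every response is gated on identity verification and filtered to content approved for the patient’s current journey state, with escalation as the default.

```python
# Illustrative guardrail wrapper around an NLP intent match.
# All answers and state names here are hypothetical examples.

APPROVED_ANSWERS = {
    # approved content, organized by patient journey state
    "pre-appointment":  {"parking": "Use Garage B; bring your ID."},
    "post-appointment": {"billing": "Your statement arrives in 7-10 days."},
}

def answer(question_intent, patient, state):
    """Serve only pre-approved content for verified patients in a
    known state; otherwise escalate to a live agent."""
    if not patient.get("identity_verified"):
        return "Please verify your identity before we can answer."
    pool = APPROVED_ANSWERS.get(state, {})
    if question_intent in pool:
        return pool[question_intent]
    return "Connecting you with a live agent..."  # never guess
```

Because the wrapper, not the model, owns the approved-content table, support teams can refresh answers in real time without retraining anything.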
Healthcare organizations that can successfully mitigate clinical and compliance risks while harnessing the power of enterprise-class NLP will see the best results.
2. Supervised Learning Is A Time Sink
How do chatbots get smarter over time? To improve the accuracy of intent matching, NLP frameworks rely on humans in the loop. Trained support agents or designers review questions the chatbot was not able to answer, craft new answers, add new intents, and prune the parsing algorithms.
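That review loop can be pictured as a simple queue; the data model below is an illustrative sketch, not any particular framework’s training pipeline:

```python
# Human-in-the-loop sketch: questions the bot can't answer land in a
# review queue; a trainer attaches a vetted answer, which becomes a
# new intent. Intents and answers here are hypothetical.

review_queue = []
intents = {"hours": "We're open 8am-6pm."}

def handle(question, matched_intent):
    """Answer from known intents, or flag the question for review."""
    if matched_intent in intents:
        return intents[matched_intent]
    review_queue.append(question)  # a human trainer reviews this later
    return None

def train(question, new_intent, approved_answer):
    """A trainer reviews a flagged question and adds a vetted answer."""
    intents[new_intent] = approved_answer
    if question in review_queue:
        review_queue.remove(question)
```

Every item in that queue represents paid human time, which is exactly why this ongoing cost deserves scrutiny before launch.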
Supervised learning is an ongoing, expensive process that has an unclear upside. Technical and innovation teams may be able to set up a chatbot framework for a clinical or operational team, but the ongoing investment in training is a tough pill to swallow.
How many conversations will the chatbot need to have before it reaches 90% accuracy? Does my team really have to review chat logs every week? Do we have to come up with all the questions and answers ourselves?
NLP should be pointed at specific use cases with demonstrable clinical or business value. Healthcare organizations should identify specific areas of patient inquiry that are both high volume and highly scripted. These areas are the most likely to benefit from automation. Working with partners that understand the use case in question can dramatically increase the chances of success, as they come to the table with domain-specific modeling, real-world training, and FAQ templates.
Medical information desks, the clinical teams of experts that answer calls about specific therapies, are a perfect example of such an area:
- High volume: medical information teams field thousands of calls each month
- High value: each agent-handled call can cost $25-$50, so automated answers deliver real savings
- Highly scripted: Answers must be pre-approved by clinical experts, organized by therapy subject area, and no new answers can be generated on the fly
- Better patient experience: Patients prefer on-demand answers instead of long holds
3. NLP Alone Does Not Generate Value
A large healthcare organization manages millions of patient interactions every year, ranging from pre-surgical education and clinical intake to billing support and satisfaction surveys. These interactions are manual (in person, phone call, live chat), can occur over multiple sessions, and are typically staffed by a Patient Access agent or a Clinical Research Associate.
Investing in NLP trained in a specific domain can improve these manual patient experiences, but making FAQ chatbots smarter isn’t enough. In healthcare, many initial interactions result in a recommended workflow.
- “What should I do next?” > “Please schedule a follow-up with Dr. Lee”
- “My inhaler’s broken” > “If you’re having trouble with your inhaler, please let the Patient Assistance team know”
Too often, chatbots are launched as siloed solutions — deployed on websites, installed as Alexa skills — that can answer the patient’s first question but can’t help them accomplish the full workflow they need assistance with. Chatbots that can’t guide a patient through a full workflow often create more work than they save, as frustrated patients repeat their questions over the phone with agents, and agents attempt to piece together the patient’s interaction history.
Healthcare organizations should think about empowering chatbots beyond answering frequently asked questions. What’s the upside? The most successful digital assistants can accomplish and automate 80% of scripted and repetitive administrative workflows.
The recipe: combine NLP technology with domain-specific workflow automation (Robotic Process Automation) and backend system-of-record integrations to deliver useful chatbot experiences. Chatbots that can read and update data in backend systems of record, reach out to patients multiple times throughout a journey, and guide patients through dozens of steps in a protocol will unlock thousands of human hours.
Imagine — a chatbot that can schedule a patient’s appointment, write the appointment back to the scheduling system, collect all of the patient’s medical history forms, and remind the patient to show up on time, all without an agent’s help!
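The scheduling write-back step from that scenario might look like the sketch below, where SchedulingSystem is a stand-in for a real scheduling API or EHR integration (the class, slots, and messages are all hypothetical):

```python
# Illustrative workflow step: book an appointment and write it back
# to a (stub) scheduling system of record, then hand the bot a
# reminder message to send. Not a real EHR or scheduling API.

class SchedulingSystem:
    def __init__(self):
        self.appointments = []

    def book(self, patient_id, slot):
        """Record the booking and return a confirmation id."""
        self.appointments.append({"patient": patient_id, "slot": slot})
        return len(self.appointments) - 1

def schedule_workflow(system, patient_id, preferred_slots, open_slots):
    """Book the first preferred slot that is open; escalate if none are."""
    for slot in preferred_slots:
        if slot in open_slots:
            conf = system.book(patient_id, slot)
            return f"Booked {slot} (confirmation #{conf}). We'll remind you the day before."
    return "No preferred slots are open; connecting you with a scheduler."
```

The key design point is the write-back: because the booking lands in the system of record, a human agent who picks up the thread later sees the same appointment the bot created.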
Mobile, on-demand engagement is the future of healthcare. It may not be open season for HAL 9000-style ‘diagnose me’ AI quite yet, but clinically compliant, outcome-driven chatbots tackling specific use cases in healthcare are ready for prime time. Innovative chatbot solutions that can harness the combined power of NLP and workflow automation will unlock an outsized win-win-win: better for patients, better for clinical accuracy, and better for the bottom line.