By Josh Morgan, SAS
The World Health Organization’s priorities this year include health as a human right, universal health coverage, appropriate access to care, and high-quality care. These themes increasingly echo as part of international public discourse, especially in the United States where we continue to explore the most efficient and effective ways to take care of our communities.
Data is critical in assessing how well (or how poorly) we have achieved these goals, and it should be central to program planning and policymaking. Across health and human service systems, data can be used to empower whole person care initiatives and encourage continuous quality improvement. Yet despite access to massive amounts of data, this empowerment hasn’t been easy, nor has it been the norm.
Artificial intelligence (AI) and machine learning (ML) are attracting attention as ways to advance such efforts. These technologies have great potential to improve health outcomes, but as with any analytics, the quality of the data input is critical to the quality of the output.
Further, one cannot overlook the legitimate concerns around the ethical use of advanced analytics, including AI and ML. Algorithms can be taught or trained, however unintentionally, to be biased or unethical. If an algorithm is trained on data representing only certain populations, for example, its results will inherently reflect those biases.
If we want true health, not just the absence of illness, for everyone, health equity is imperative. Health equity is defined as the absence of differences or disparities in access to and quality of care across populations. Data can be a powerful tool in identifying areas of inequity to help improve our systems, but biased data can have the opposite effect.
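One way to catch the kind of bias described above before it reaches an algorithm is to check whether a training data set underrepresents the populations it will be used to serve. The sketch below is illustrative only; the group labels, counts, and tolerance are hypothetical, not drawn from any real data set.

```python
# A minimal sketch (hypothetical groups, counts and tolerance) of flagging
# populations that are underrepresented in training data relative to the
# community an algorithm will serve.
from collections import Counter

def representation_gaps(training_groups, population_shares, tolerance=0.05):
    """Return groups whose share of training records trails their community share."""
    counts = Counter(training_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = round(expected - observed, 3)
    return gaps

# Hypothetical example: group "B" makes up 30% of the community but
# only 10% of the training records.
training = ["A"] * 70 + ["B"] * 10 + ["C"] * 20
shares = {"A": 0.50, "B": 0.30, "C": 0.20}
print(representation_gaps(training, shares))  # → {'B': 0.2}
```

A check like this doesn't prove an algorithm is fair, but it makes one common source of bias visible before any model is trained.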
As the adoption of these technologies transitions from novelty to norm, healthcare leaders have a moral imperative to promote ethical ML and AI. Here are three places we can start:
- Algorithm transparency – What data do we need to be confident that an algorithm is as fair as possible?
AI is considered “black box” when we don’t know or can’t explain how it arrived at a result. Such algorithms are not only highly concerning but sadly very commonplace as new players flock to the growing market. If we don’t know how a result or conclusion was reached, can we trust it? How do we know whether it’s biased?
Data lineage and transparency are essential. Thankfully there are ways to make health equity algorithms more transparent, like vetting and validating population-level trends against individual results. For example, a system could provide not just a risk score but an explanation of which data fields influenced the score assigned to an individual. This dual assessment can help detect algorithm bias that may go uncaught in a wide-scale, black box application. It provides a means to critically examine results rather than accept them blindly.
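The idea of returning an explanation alongside a risk score can be sketched with a simple linear model. The field names and weights below are purely illustrative assumptions, not a real clinical model; the point is the shape of the output, a score plus the per-field contributions that produced it.

```python
# A minimal sketch (hypothetical fields and weights, not a real clinical
# model) of returning a risk score together with the contribution each
# data field made to it, so results can be examined rather than accepted
# blindly.

WEIGHTS = {  # assumed illustrative weights for a simple linear risk model
    "age_over_65": 0.30,
    "prior_admissions": 0.25,
    "chronic_conditions": 0.35,
    "missed_appointments": 0.10,
}

def explain_risk_score(patient):
    """Return (score, ranked contributions) for a dict of feature counts."""
    contributions = {
        field: WEIGHTS[field] * patient.get(field, 0) for field in WEIGHTS
    }
    score = sum(contributions.values())
    # Sort so the most influential fields are listed first.
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return score, ranked

score, ranked = explain_risk_score(
    {"age_over_65": 1, "prior_admissions": 2, "chronic_conditions": 1}
)
print(score)   # total risk score
print(ranked)  # which fields drove it, highest first
```

With this shape of output, a reviewer can ask not just “is this person high risk?” but “which fields drove that conclusion, and do they make sense for this individual?”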
- Data transparency – What are the limits to data transparency? How would more open data change our insights and curiosity?
In a recent conversation with my friend and colleague Dr. Tyrone Smith (former Chief Health Information Officer and Technology Chief for San Bernardino County, CA), we discussed how the more transparent we make our algorithm training data sets, the more effectively we can assess the ethics of the algorithms we use. That’s especially true in the public sector. He argued that we shouldn’t be using “private” data to make systems that make “public” decisions.
Put more plainly, making decisions about me based on data about me to which I lack access can lead to problems on many levels. As our society continues to wrestle with data privacy, the trend toward giving people more transparent access to their own data is not likely to go away.
- Natural language processing (NLP) – How might NLP help more people be represented in identifying their needs and assessing quality in their own words? How does this speak to improving equity?
A key issue in health equity is doing our best to ensure all voices are heard. The unfortunate reality is that many voices are inadequately captured in traditional, structured data. The result is quantitative data that lacks the rich culture and nuance of human experience. Yet when considering true whole person care, human experience is invaluable.
While qualitative data can be powerful, the often time-consuming and laborious processes required to analyze it mean its full value frequently goes unrealized. Enter natural language processing, which is opening access to these missing narratives in new ways.
With early versions of NLP, such as computational linguistics approaches, the results and methodologies were often hard to understand. In modern NLP, however, coding and clustering processes can be more automated. A human must still name the themes, but the technology can validate the relationships among words and phrases. AI systems using NLP can also help ensure that all voices are heard, even across different languages.
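The division of labor described above, machine grouping with human naming, can be sketched in miniature. The example below uses simple word overlap in place of a full NLP pipeline; the responses and similarity threshold are illustrative assumptions.

```python
# A minimal sketch (using simple word overlap in place of a full NLP
# pipeline) of automatically grouping free-text responses so a human can
# then name the themes. Responses and threshold are illustrative.

def tokenize(text):
    return set(text.lower().replace(".", "").split())

def jaccard(a, b):
    """Word-overlap similarity between two token sets, from 0 to 1."""
    return len(a & b) / len(a | b)

def cluster(responses, threshold=0.2):
    """Greedy clustering: join a response to the first cluster it resembles."""
    clusters = []
    for text in responses:
        words = tokenize(text)
        for group in clusters:
            if jaccard(words, tokenize(group[0])) >= threshold:
                group.append(text)
                break
        else:
            clusters.append([text])  # no match found; start a new cluster
    return clusters

responses = [
    "Clinic wait times are long",
    "The clinic wait was long",
    "Staff treated me with respect",
]
groups = cluster(responses)
# The machine groups the responses; a person still names the themes,
# e.g. "wait times" and "staff respect".
```

Production NLP systems use far richer representations than word overlap, but the workflow is the same: the technology surfaces groupings across thousands of narratives, and people supply the interpretation.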
Data Moves The Story
Advances in technology can empower advances in health policy, quality and care. Keep in mind, though, that data is not the end of the story but just the beginning – particularly as we try to address health equity.
Data can help identify potential areas of strength and opportunities for improvement. It also can trigger new conversations and improved civic engagement. Whatever our aims, we can more effectively and ethically wield data, analytics, AI and ML when we remember these tools help propel our human story rather than deliver definitive answers.
About The Author
Josh Morgan, PsyD, is National Director of Behavioral Health and Whole Person Care at SAS, where he helps public sector agencies use data and analytics to support person-centered approaches for better health outcomes. He is a #data4good evangelist and advocate for improving benefits, access to care and more holistic services. Find him on Twitter, @DrJosh.