Lessons from the adoption of AI in healthcare finance

By Pamela J. Gallagher 

The emergence of artificial intelligence (AI) is the topic on every healthcare leader’s mind. In many ways, though, AI is old news to those of us in healthcare finance. I can say from experience that once an AI technology comes onto the scene, there is no stopping it. AI is here to stay. The question leaders must grapple with is how to embrace it wisely and with purpose.

We’ve used these technologies in administrative settings for years now, for functions such as insurance verification, billing, and collections. They have streamlined essential processes for the organization, the provider, and the patient. As healthcare organizations look to possible clinical applications of AI, there is vast potential to bring the benefits we’ve seen on the administrative side to the care that providers offer patients, the foremost being quicker and more accurate diagnoses.

However, more caution is required in a clinical setting than in an administrative one, for obvious reasons. An automated system can make a mistake on a bill, and while that’s certainly not good, it’s also not a life-or-death situation. Human beings are complex, and no amount of data in the electronic medical record (EMR) can paint a complete picture of a person.

Still, several lessons learned from the adoption of AI in healthcare finance can serve as a guide as organizations weigh the advancement of AI in a clinical setting.

Get clarity on your mission.

The adoption of AI may be inevitable, but being on the cutting edge of the available technology makes for a terrible mission statement. Technology is foremost a tool. As with any tool, you can use it for its intended purpose, or you can stretch it beyond the bounds of what it was designed to do.

The only way to evaluate whether an emerging technology is a good fit for the particular needs of your organization or patients is to be crystal clear on your organization’s mission and values. What is it that you are trying to accomplish? That must be your guide.

Define your terms.

AI is being defined as it is being developed. The number of types and categories of AI seems to be constantly increasing. There are many calls for a set of standards to regulate the development and use of AI, and I agree that those are necessary (as I will discuss below). But it will be nearly impossible to write useful standards without a clear understanding, or definition, of the “product.”

Even if definitions change as the product develops, there should be a reference framework for clarity. At the very least, I have seen that it is essential to ensure that those responsible for making AI-related decisions for your organization are working from shared definitions.

Be honest about the risks and unknowns.

Part of the responsible use of AI is an awareness of its benefits, limitations, and risks. Data accuracy, for example, is a major issue with the adoption of AI. Methods for collecting data vary widely from organization to organization, and the interpretation of that data varies as well. We must consider how those inconsistencies might affect the output of an AI-assisted diagnosis.

It is important to be aware of the risks and have mitigating strategies in place. You might start by asking yourself the following:

  • While taking the “humanness” out of the interpretation equation is touted as a benefit because it reduces variability, what do we lose by removing human interaction?

  • How can we identify potential bias ahead of time? How can we discern whether the originating data is reliable? (A sketch of one approach follows this list.)

  • How would we handle the accidental inclusion of unreliable data? How could we determine whether the inclusion of such data was purposeful, and what follow-up actions would be necessary?
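
To make those questions concrete, here is a minimal sketch of a pre-deployment data screen, written in Python with the pandas library. The column names (“age”, “sex”), the 5% missing-data cutoff, and the representation threshold are all hypothetical illustrations, not clinical or regulatory standards:

    import pandas as pd

    def screen_training_data(df):
        """Return a list of warnings about possibly unreliable or biased data."""
        warnings = []

        # Reliability: flag columns with a high share of missing values.
        for col in df.columns:
            missing = df[col].isna().mean()
            if missing > 0.05:
                warnings.append(f"{col}: {missing:.0%} missing values")

        # Reliability: flag implausible values that suggest entry errors.
        if "age" in df.columns and ((df["age"] < 0) | (df["age"] > 120)).any():
            warnings.append("age: contains implausible values")

        # Bias: flag demographic groups that are badly underrepresented
        # relative to the largest group in the data.
        if "sex" in df.columns:
            shares = df["sex"].value_counts(normalize=True)
            for group, share in shares.items():
                if share < 0.2 * shares.max():
                    warnings.append(f"sex={group}: underrepresented ({share:.0%})")

        return warnings

A screen like this will not catch purposeful corruption, but running it routinely makes the accidental inclusion of bad data visible early, before it shapes a diagnosis.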

Create standards and a system of evaluation.

The advancement of AI is outpacing its reliability and credibility. This was true when AI first made its way into healthcare administration, and it is true now as clinical applications are being considered. We will not be able to unlock the full clinical potential of AI in an ethical manner without developing standards for use and evaluation.

Upendra Patel elaborates on a crucial challenge for the use of AI in healthcare:

 “The underlying principle in most, if not all, AI projects is the garbage-in-garbage-out principle. Without the massive chunk of data fed into the AI systems, it is practically impossible to get results. This is why it is important to source high-quality healthcare data – a move that has become increasingly difficult over the years. The difficulty is attributed to the fragmented and unorganized health data spread across various data systems and organizations. Patients change insurance providers and healthcare providers too frequently, making data acquisition a challenge.”

The days when an “if it’s on the internet, it must be true” attitude was defensible are long gone. As with the management of teams and the leadership of organizations, we must “trust but verify.” Healthcare organizations need to develop systems to evaluate the credibility and reliability of their data inputs, as well as the outputs they are seeing from the AI technologies they have adopted.
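
As one illustration of “trust but verify,” here is a minimal Python sketch that tracks how often an AI system’s suggestions agree with clinician-confirmed results and flags the system for review when agreement drifts too low. The function names and the 90% threshold are hypothetical, not a vendor API or an accepted standard:

    def agreement_rate(ai_outputs, confirmed):
        """Fraction of cases where the AI suggestion matched the confirmed result."""
        if not ai_outputs:
            return 0.0
        matches = sum(a == c for a, c in zip(ai_outputs, confirmed))
        return matches / len(ai_outputs)

    def needs_review(ai_outputs, confirmed, threshold=0.90):
        """Flag the AI system for human review if agreement falls below threshold."""
        return agreement_rate(ai_outputs, confirmed) < threshold

The point is not the arithmetic but the habit: every adopted AI technology should be subject to a routine, measurable check of its outputs against a trusted source.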


Resources:

Artificial intelligence in healthcare: Top Benefits, Risks and Challenges, Tristate Technology

Artificial Intelligence in Healthcare: Benefits and Challenges of Machine Learning Technologies for Medical Diagnostics, US Government Accountability Office

AI is in its regulation era, Morning Brew

What is AI Bias in the Healthcare and How To Avoid It in 2023, AI Multiple

The ‘Godfather of AI’ fears his own creation, Morning Brew