What principles should guide artificial intelligence innovation in healthcare?

April 22, 2024 - Artificial intelligence (AI) tools are proliferating in healthcare at breakneck speed, amplifying calls from healthcare leaders and policymakers for greater alignment on the responsible use of these tools.

The FDA authorized 692 artificial intelligence and machine learning devices in 2023, 171 more than its 2022 list. In response to this abundance of tools, various organizations have released their own definitions of what it means to use AI responsibly, including Pfizer, Kaiser Permanente, Optum, and the White House. However, the industry lacks an overarching set of principles to improve alignment. Brian Anderson, MD, CEO and co-founder of the Coalition for Health AI (CHAI), discusses the need for guidelines and standards for responsible AI in healthcare. He highlights key principles of responsible AI and emphasizes the importance of involving various stakeholders--including patients--in developing these standards and guidelines.

Brian Anderson, MD:

Is there an agreed-upon consensus around what trustworthy, responsible AI looks like in health? The answer we quickly found is no. You have a lot of very innovative companies and health systems building AI according to their own definitions of what that means and what that looks like.

Kelsey Waddill:

Welcome to Season 2 of Industry Perspectives, coming to you from HIMSS 2024 in Orlando, Florida. I'm Kelsey Waddill, multimedia manager and managing editor at Xtelligent Healthcare.

From 2024 to 2030, the US healthcare AI market is expected to experience a 35.8 percent compound annual growth rate. As these tools proliferate, and in view of the risks inherent to the healthcare industry, guidelines for AI utilization are essential, and many are asking: what does it mean to use artificial intelligence responsibly in the healthcare context? Brian Anderson, MD--a chief digital health physician and the CEO and co-founder of the Coalition for Health AI, or CHAI--founded the organization with this question in mind. He's going to break it down for us on today's episode of Industry Perspectives.

Brian, it's great to have you on Healthcare Strategies today. Thank you for making the time in a busy HIMSS schedule; we're glad this could work out. I wanted to start by asking about CHAI, the organization you co-founded in 2021, and get some more background on how it started--your story.

Brian Anderson, MD:

Yeah, really its roots came out of the pandemic. During the pandemic, a lot of organizations that were inherently competitive became non-traditional partners. You had pharma companies working together, you had technology giants working together, trying to address this pandemic that was in front of us. And coming out of it, there was a lot of goodwill in that space--between a Microsoft, and a Google, and an Amazon, or various competing health systems--wanting to still find a way to do good together.

And so one of the questions a number of us asked was: is there an agreed upon consensus around what trustworthy, responsible AI looks like in health? The answer we quickly found is no. You have a lot of very innovative companies and health systems building AI according to their own definitions of what that means and what that looks like. And inherently that means that there's a level of opaqueness to how we think about trust and how we think about performance of AI in these siloed companies. In a consequential space like health, that can be a problem.

And so we agreed that it would be a worthwhile cause and mission for our merry band of eight or ten health systems and technology companies to come together to really begin trying to build that consensus definition of what responsible, trustworthy health AI looks like. And so that's been our north star since we launched in 2021, and it quickly took off.

It started with those eight or ten. It quickly grew to like a hundred. The US government got involved right away. The Office of the National Coordinator and the Food and Drug Administration--which are the two main regulating bodies for AI--quickly joined our effort. The White House, the NIH, all the HHS agencies. And then, excitingly, a large number of health systems and other tech companies joined, to the point where today we had 1,300 organizations, thousands of people, part of this coalition of the willing.

That became a challenge. How do you meaningfully engage 1,300 organizations and 2,000 to 3,000-plus individuals with a volunteer group? Not very well. And so we made the decision to form a nonprofit, which we started in January of this year. The board was convened. They asked me to be CEO. I'm very honored and humbled by that. And so I took that role on. Today is day two, I think, technically, in my new role.

Kelsey Waddill:

Congratulations!

Brian Anderson, MD:

Thanks. So I'm really excited to be here at HIMSS and take part in the vibrant conversation that everyone is having about AI here.

Kelsey Waddill:

Yeah, well, I mean it's definitely one of the major themes of the conference this year. And for good reason, because in the last year alone there's been so much change in this space--in AI specifically and in its use in healthcare.

I did want to zoom in for one second: you mentioned this phrase--I know it's throughout your website and your language--"responsible AI in healthcare." And I feel like that's the question right now: what does that look like? What does that mean? I know that's a big part of why CHAI convened to begin with. So I was wondering if you could shed some light on what you found so far and what that means.

Brian Anderson, MD:

Yeah. It's an important thing to be grounded in. So it starts with, I think, the basic context that health is a very consequential space. All of us are patients or caregivers at one point in our lives. And so the tools that are used on us as patients or caregivers need to be built around a set of principles that are aligned with the values of our society--that allow us to ensure those tools reflect the values we hold as humans.

And so some of those very basic principles in responsible AI are things like fairness. Is the AI that's created fair? Does it treat people equally? Does it treat people fairly? There's this concept in AI that I remind people of: all algorithms are, at the end of the day, programs that are trained on our histories. And so a really critical question we have to ask ourselves is: are those histories fair? Are they equitable? Right? And you smile--obviously the answer is probably not.

Kelsey Waddill:

Probably no.

Brian Anderson, MD:

And so it takes an intentional effort, when we think about fairness and building AI, to do that in a fair way.

There are other concepts like privacy and security. With the kinds of AI tools that we build, we don't want them to leak out training data that might be, especially in health, personally identifiable information or protected health information. There's been some news in the LLM space that if you do the right kind of prompting or hacking of these models, you can get them to reveal their training data. So how we build models in a privacy-preserving, secure way that doesn't allow for that is really important.

There are other concepts like transparency, which is really important. When you use a tool, you want to know how that tool was made. What were the materials it was made out of? Is it safe for me to get into this car and drive somewhere? Does it meet certain safety standards? For many of the common, everyday things that we use, from microwaves, to toaster ovens, to cars, there are places where you can go and read the safety reports on those tools.

In AI, we don't have that yet. And so there's a real transparency problem when it comes to understanding very basic things like: what was this model trained on? What was it made of? How did it do when you tested it and evaluated it? We have all of the car safety tests, the famous car crash videos that we are all familiar with. We don't have that kind of testing information developed and maintained by independent entities. We have organizations that sell models, technology companies that make certain claims, but the ability to independently validate those claims is very hard to come by.

And so how models were trained, what their testing and evaluation scores were, what their indications and limitations are, and a whole slew of other things all go into this concept of transparency. So principles like that.
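As one concrete illustration of the transparency items Anderson lists, here is a minimal, hypothetical sketch of the kind of "model card" a vendor or independent evaluator might publish for an imaginary tool. Every field name and value below is made up for illustration; this is not a CHAI or FDA format.

# Hypothetical model card for an imaginary tool, covering the transparency
# items mentioned above; field names and values are illustrative only.
example_model_card = {
    "model_name": "sepsis-risk-screener-demo",
    "training_data": "De-identified EHR records, 2015-2022, four academic centers",
    "evaluation": {
        "auroc": 0.87,
        "validated_by": "Hypothetical independent test site",
    },
    "indications": "Adult inpatients; screening support only, not diagnosis",
    "limitations": "Not validated for pediatric or outpatient populations",
}

if __name__ == "__main__":
    for field, value in example_model_card.items():
        print(f"{field}: {value}")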

Other ones like usability, reliability, robustness--these are all principles of responsible AI. I won't bother detailing them all for you, but those are the principles that we're basing CHAI around. And so when we talk about building the standards or technical specifications, it means going from a 50,000-foot level of saying fairness is important to saying, "Okay, in the generative AI space, what does bias mean? What does fairness mean in the output of an LLM?" We don't have a shared, agreed-upon definition of what that looks like, and it's really important to have that.

So I'll give you an example. A healthcare organization wants to deploy an LLM as a chatbot on their website. That LLM, if you give it the same prompt five or six times, might produce different outputs. A patient trying to navigate beneficiary enrollment might be sent in five different directions if they were to give the same prompt. So that brings up concepts like reliability, fairness, and how we measure accuracy. These are all principles of responsible AI for which, in generative AI and chatbots, we don't have a commonly agreed-upon framework for what "good" looks like or how to measure alignment to it. And so that's what we're focusing on in CHAI, because it's an important question. We want that individual going to that hypothetical website with that chatbot to have a fair, reliable experience getting to the right place.
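To make that reliability concern concrete, here is a minimal, hypothetical sketch of how a health system might probe a chatbot for answer consistency: send the identical prompt several times and count the distinct replies. The ask_chatbot function and its canned answers are stand-ins for whatever vendor API is actually in use; nothing here comes from CHAI.

import random
from collections import Counter

def ask_chatbot(prompt: str) -> str:
    # Stand-in for a real LLM call; sampling-based decoding means the same
    # prompt can come back with a different answer on each call.
    return random.choice([
        "Enroll through the member portal under 'Plans & Coverage'.",
        "Call the benefits office at the number on your insurance card.",
        "Submit a paper enrollment form to your local office.",
    ])

def consistency_report(prompt: str, trials: int = 6) -> Counter:
    # Tally how many distinct answers the chatbot gives to one prompt.
    return Counter(ask_chatbot(prompt) for _ in range(trials))

if __name__ == "__main__":
    report = consistency_report("How do I enroll as a beneficiary?")
    print(f"{len(report)} distinct answers across 6 identical prompts:")
    for answer, count in report.most_common():
        print(f"  {count}x {answer}")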

Kelsey Waddill:

Yeah, I think that captures pretty well the complexity of the role that AI plays in healthcare right now. We could dive into the questions being asked around each of those points for a while. But I am curious: we want to do this well, we want to build out these guidelines well, but it seems like there's also a bit of time pressure. As you alluded to with the privacy and security piece, there are people who want to exploit, for malicious intent, any holes we haven't quite figured out how to cover yet. There's that time pressure. And there's also the time pressure of: people are generating uses of AI at a very rapid pace, and we don't have a structure for this yet that's set in stone.

So I was curious what you would recommend in terms of prioritization in these conversations. Obviously that list you just mentioned I'm sure is part of that, but is there anything else that you'd say about how to pinpoint what we need to be working on right now?

Brian Anderson, MD:

It's a good question. In the health space, it's really hard because it's so consequential. A patient's life could be on the line, and yet you have innovators that are innovating in fantastic, amazing ways that could potentially save people's lives, developing new models that have emerging capabilities, identifying potential diagnoses, identifying new molecules that have never been thought of by a human before. And yet, because it's so consequential, we want to have the right guardrails and guidelines in place.

And so the approach that I think we are recommending is two-part. One is that when we formed CHAI, we wanted it to have innovators and regulators at the same table. And so these working groups are going to be focusing on developing these standards, developing the metrics and the methodology for evaluation, with innovators, regulators, and patients all at the same table. Because you can't have innovation working at the speed of light without an understanding of what safe and effective means and what the guardrails are. You can't have regulators developing those guardrails without understanding the standards and the consensus perspective, coming from the innovators, of what good, responsible AI looks like. And so one part of the answer to your question is bringing all the right stakeholders to the table and having patients at the center of what we're doing.

The second part is: because health is so consequential, there's risk involved, and I would argue there are layers or different levels of risk. In an ICU, one might agree the level of risk is pretty significant--a patient's life. They're on that edge in terms of life and death. AI that might be used to help move beds around a hospital is not so consequential; a patient's life isn't necessarily on the line in determining efficiencies about where beds are moving.

And so from that perspective, we are looking to work with the innovation community to identify where they can accelerate and innovate in safe spaces--bed management efficiency, back-office administration, drafting of emails, a variety of use cases that aren't as consequential or risky. The riskier ones, like in the ICU, where life and death can hinge on what an AI tool recommends or doesn't recommend, require going much more slowly and thinking through the consequences with more rigor and stronger guidelines and guardrails.

And so that's how we're approaching it: identifying where the less risky places are, looking to support innovation, building out guidelines around what responsible, trustworthy AI looks like, while slowly building up to some of those more risky places.

Kelsey Waddill:

That makes sense. And in our last question here, I just wanted to hear what you're excited about for this next year in this space, and specifically what CHAI's doing.

Brian Anderson, MD:

Yeah. I would say we're at a very unique moment in AI. I shared earlier that all algorithms are, at the end of the day, programs trained on our histories. We have an opportunity to address that set of inequities in our past. And that could take the form of a variety of different responses.

One of them--the one I hope for, and the one we're driving toward in CHAI--is: how do we meaningfully engage communities that haven't traditionally had the opportunity to participate in so many of the digital health revolutions that have come before? As an example, models require data for training. To create a model that performs well, you need robust training data for it to be trained and tuned on, drawn from whatever that population is. And so how can we work with marginalized communities to enable them to tell their digital story, so that we can then partner with model vendors to train models on data from those communities, so that those models can be used in meaningful ways to help those communities and their health? That's one of the exciting things I'm looking forward to in the next year.

Kelsey Waddill:

Great. Well, I'm excited too. And thank you so much for this conversation and for making time for it today.

Brian Anderson, MD:

Absolutely. Thanks, Kelsey.

Kelsey Waddill:

Thank you.

Listeners, thank you for joining us on Healthcare Strategies' Industry Perspectives. When you get a chance, subscribe to our channels on Spotify and Apple, and leave us a review to let us know what you think of this new series. More Industry Perspectives are on the way, so stay tuned.
