Specialty Podcast: AI Regulation in Healthcare - Executive Orders, State Laws and Managed Care Risk
By Alliant Specialty
Join Kenny White, Alliant Healthcare, and Kathy Roe, Health Law Consultancy, as they examine recent federal and state developments shaping the regulation of artificial intelligence in healthcare and managed care, including a new executive order establishing federal priorities. Their discussion highlights state-level activity around prior authorization and mental health AI applications, alongside the growing tension between federal oversight and state enforcement. They also share practical considerations for organizations, from governance and risk management to compliance and insurance implications as AI adoption continues to evolve.
Intro (00:00):
You are listening to the Alliant Specialty Podcast, dedicated to insurance and risk management solutions and trends shaping the market today.
Kenny White (00:08):
Welcome back to another Alliant Specialty Podcast. I'm your host, Kenny White. I'm the senior vice president and director of the Alliant Managed Care Industry Group, a group dedicated to risk and solutions for managed care entities of all types, sizes and structures. I'm a healthcare lawyer with about 30 years of experience in the courtroom representing healthcare companies and another 12 years as a risk and risk-financing consultant. Joining me today is Kathy Roe, who is the founder and principal of the Health Law Consultancy in Chicago and was recently named the Lawyer of the Year in Health Law. In addition to the Health Law Consultancy, which she founded in 2009, I believe, Kathy has a number of years of experience in health law at several large law firms in the Midwest. She's a leader of the managed care and related groups at the American Health Lawyers Association, AHLA, and has published and speaks frequently in the area of managed care and health law. Today we wanted to spend a few minutes educating you about regulations pertaining to artificial intelligence in healthcare, both at the federal and at the state level. AI, meaning the reality, the hype, the promise, the investment, the potential, the serious problems, et cetera, is far more than we could cover in 20 minutes. We're going to hit the high spots related to healthcare and managed care, aimed at regulation of the industry. One of my favorite authors, Douglas Adams, the author of "The Hitchhiker's Guide to the Galaxy," once said that all we really want from technology is something that works and doesn't kill us, and that you know something is technology if it comes with a manual. Well, most of the manuals you see in reality are built around the regulations and compliance requirements at the federal and state level. We really don't have a lot of time, Kathy, to get into EU, U.K. and other foreign regulations, but just really quickly, I think you'd agree with me that most of the regulations that are going to be impactful for healthcare, particularly in the United States, are going to come out of the U.S.
Kathy Roe (02:22):
I would agree because healthcare is local, as we know, so I don't think we'll be looking abroad.
Kenny White (02:31):
Unless you have an entity that's large enough to have subsidiaries overseas, the pharma industry being one example, or you've got call centers located in India or the Philippines, et cetera, you really won't have much to worry about in the next couple of years with regard to foreign AI regulations. If there's going to be a problem, it'll be coming from the U.S. regulatory side anyway. The other piece of that puzzle would be China, and we're not going to know whatever regulations they have or don't have because they're not going to tell us. Then of course there's Israel, and Israel is basically following what the U.S. is doing at the moment. Here in the U.S., there are really two schools of thought: regulate now or regulate later. Governments are approaching AI development, use, adoption, implementation, compliance and enforcement largely according to what the AI is used for. If the AI is something used in procurement, there are going to be fewer regulations. If it's on the other end of the scale, related to healthcare, diagnostics, clinical involvement, decision making, things like that, you're probably going to end up with more regulation. So Kathy, if we look at the Trump administration, the current one, not the previous one, what is the Trump administration doing with regard to regulation of AI and healthcare?
Kathy Roe (03:56):
Well, I think the Trump administration is taking the mindset of regulate later rather than now: let's see the promise of AI, let's allow for the continued advance of AI, so that we achieve that federal goal of global AI dominance. Most recently, the Trump administration issued an executive order about AI, during the week of December 8th. That order really was focused on sending the message that here in the U.S., federal law should be supreme and should take a superior spot to state regulation. That's the policy of the land. The important thing to remember about executive orders, though, is that they aren't laws. An executive order is simply a directive to various departments and agencies within the federal executive branch, but this executive order definitely sent a message that I think was intended to be heard beyond those agencies and departments about what laws should control AI regulation.
Kenny White (05:25):
If we go back a couple of administrations, to the Obama administration, the first Trump administration and the Biden administration, there have been fewer than 15 pronouncements, guidelines, laws and executive orders pertaining to AI, and half of those were undoing ones that had come before: Trump issued a few, Biden undid them; Biden issued a few, Trump undid them. Obviously, as government policy, the "if you build it, they will come" concept from "Field of Dreams" is a reality. Some people may or may not agree with that particular government posture or strategy, but it's certainly a legitimate one: let's build the infrastructure, let's get the uses out there, and then worry about trying to regulate it later. They tried to do something ERISA-like, establishing federal supremacy with regard to regulation of AI, in the One Big Beautiful Bill. That did not pass Congress; it was taken out. He's now tried to do much the same thing by executive order, directing the agencies under his control to do whatever they can to limit the impact of state regulation, so that we don't have a hodgepodge of 50-plus different jurisdictions each trying to regulate AI in a different way, which obviously would be difficult from a compliance perspective and an enforcement perspective, and would also significantly hamper innovation. Whether that's an appropriate choice, we'll find out later, but it's certainly a legitimate government purpose. In terms of state law, we know the feds' primary focus is: don't touch it, let's get it out of infancy, through toddlerhood, at least into adolescence before we start putting limitations on it. At the state level, states are doing things differently, so Kathy, what have you seen on the state level with regard to AI regulation?
Kathy Roe (07:29):
There has been some regulation at the state level relevant to AI in healthcare. Multiple states have gone after the application of artificial intelligence as applied to prior authorization, and in particular prior authorization denials, with states generally advocating that you need a human in the loop when you're making a denial decision. That's one area where there's been activity. Another area of activity is the regulation of AI chatbots. In particular, this year in 2025, there's been legislative activity around mental health AI bots, with states wanting to preserve a line between how AI is used in the mental health therapy context and what constitutes licensed mental health therapy. Right now the line is that AI is not the licensed practice of mental health therapy. Those are the areas that stand out to me, but honestly, when you talk about laws that specifically get at AI, there's been a lot of bill activity but not a lot of final laws actually enacted. I think it's fair to say that activity at the state level has been relatively slow, whether it's been chilled by tech industry lobbying, by Trump administration pronouncements or by some other factor. One area that impacts AI but is not specific to AI, and that has been moving at the state level, is comprehensive consumer privacy laws. Those got rolling at the state level even before AI became hot, and they continue to roll forward in many states. I expect that will continue, so if I'm a betting person, that's where I see the potential for the most AI-related activity at the state level in 2026.
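To make the "human in the loop" pattern Kathy describes concrete, here is a minimal sketch in Python. It is not drawn from any statute or real system; all names and the routing logic are hypothetical assumptions about how such a gate might work.

```python
# Minimal sketch (hypothetical, not from any statute or vendor system) of a
# "human in the loop" gate: AI may auto-approve a prior authorization
# request, but a recommended denial is never issued without human review.
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    request_id: str
    recommendation: str  # "approve" or "deny"
    rationale: str       # model's stated basis for the recommendation

@dataclass
class Determination:
    request_id: str
    decision: str
    decided_by: str      # "ai_system" or a human reviewer's identifier

human_review_queue: list[AIRecommendation] = []

def finalize(rec: AIRecommendation) -> Determination | None:
    """Return a final determination, or None if a human must decide."""
    if rec.recommendation == "approve":
        # Approvals can be automated; nothing is being taken away.
        return Determination(rec.request_id, "approve", "ai_system")
    # A recommended denial is only routed for review, never auto-issued.
    human_review_queue.append(rec)
    return None

def human_decides(rec: AIRecommendation, reviewer_id: str,
                  decision: str) -> Determination:
    """A licensed human reviewer makes, and owns, the final call."""
    return Determination(rec.request_id, decision, reviewer_id)

# Example: the AI recommends a denial; no determination issues until
# hypothetical reviewer "MD-1042" decides.
rec = AIRecommendation("PA-001", "deny", "criteria X not documented")
assert finalize(rec) is None
print(human_decides(human_review_queue.pop(), "MD-1042", "approve"))
```

The design point, consistent with the state bills discussed above, is that automation can speed up approvals, but every adverse determination keeps a named, accountable human decision-maker.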
Kenny White (10:07):
If we were looking at it, we'd be looking at things related to privacy and things related to consumer protection. No state legislature wants to kill the goose that might lay the golden egg, even in California, where Silicon Valley is located. They're having to compete with Texas, Tennessee, Florida and other places.
Kathy Roe (10:31):
And Virginia, yeah.
Kenny White (10:31):
States that are going to be far less heavy-handed with regulation of the industry. Everybody wants those investment dollars, so that's one thing. The other thing I think is important for people in the industry to realize and remember is that just because you're using artificial intelligence doesn't mean that all the other laws and regulations out there went away. They all still exist. You have guardrails. You can't violate the False Claims Act by using AI...
Kathy Roe (11:04):
Bingo! Yes.
Kenny White (11:04):
Any more than you could by using your brain or a 20-year-old you hired to go do it, or algorithmic pricing, which is the new thing of the day. You can't violate privacy laws, HIPAA privacy laws, by using AI any more than you could any other way. AI does make laws based on criteria involving intent or fraud more difficult to enforce, because computers don't have intent, and intent is required for a large number of legal actions against entities or people. Under something like the False Claims Act, intent isn't always required, but it can be, and particularly if you want criminal fraud, you have to be able to prove it. Well, computers don't supply that. It's like having an AI program provide diagnostic clinical pathways for treatment and then suing for malpractice when something goes wrong. Computers don't have medical licenses, and state med-mal laws are based upon who has a medical license and who doesn't; you can't sue a computer for medical malpractice because it doesn't have one. So, same thing: all of these laws still exist, and there are agencies and subagencies out there that have something to say about this. They're just not saying a lot right now, because I think many of them are relying on the fact that all of these other laws and regulations are still in place, and you still have to comply with them. Let's jump forward a little bit and look at the industry itself. Say you're a managed care entity and you're going to employ AI to help you do things, whether it's procurement, hiring and HR functions, supply chain, keeping your medical directories up to date, anything like that, and then you jump into utilization review or prior authorization. What would a managed care entity want to do to reduce its risk of an adverse claim, a loss, associated with the use of AI?
Kathy Roe (13:25):
I would say if people want to look for guidance, and I put the emphasis on guidance, I would go out to the National Association of Insurance Commissioners' bulletin on AI use by health insurers. I don't care what business line you're talking about relative to health insurance or health benefits, or whether you're a PBM, TPA or MSO; I think that bulletin offers really good insight. It basically says, put in a governance infrastructure, and it goes through the steps of what constitutes a reasonable governance structure. Then it says, put in a risk management program, and it goes through the steps of what is a reasonable risk management program. Then it says that, either now or in the future, there may be a government regulator, or another third party like you were suggesting, Kenny, interested in seeing what you have done to manage this AI risk, and you may need to be prepared to defend yourself and your organization's actions. Keep records. Think about how you are going to actually document and substantiate all that you've done to try to prevent something bad happening from your deployment or development of artificial intelligence. I don't think you need a law. I think you can look to a resource like that, or look to your own experience setting up a compliance program for a particular category of laws, or setting up a cybersecurity program.
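To make the recordkeeping point concrete, here is a minimal sketch of how an organization might document each AI-assisted decision so it can later substantiate its process. The field names and log format are hypothetical assumptions; the NAIC bulletin prescribes principles, not a schema.

```python
# Minimal sketch of documenting AI-assisted decisions for later defense.
# All field names are hypothetical. Avoid writing raw PHI into logs.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    use_case: str        # e.g., "prior_authorization"
    model_name: str
    model_version: str   # pin the exact version that produced the output
    input_summary: str   # de-identified description of what the model saw
    output_summary: str  # what the model recommended
    human_reviewer: str  # who signed off, if anyone ("" if automated)
    final_decision: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def append_to_audit_log(record: AIDecisionRecord,
                        path: str = "ai_audit_log.jsonl") -> None:
    """Append one JSON line per decision; treat the file as append-only."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_to_audit_log(AIDecisionRecord(
    use_case="prior_authorization",
    model_name="pa-triage-model",
    model_version="2025.11.3",
    input_summary="request PA-001, imaging criteria check",
    output_summary="recommend deny: criteria X not documented",
    human_reviewer="MD-1042",
    final_decision="approve"))
```

An append-only record like this, kept per decision, is one way to show a regulator or plaintiff, after the fact, which model made which recommendation and which human owned the final call.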
Kenny White (15:16):
Well, literacy. You and I have talked about this in the past: having AI literacy so that people in your entity know what they're using, how to use it, and when they can and can't use it. You shouldn't be allowed to wear AI-enabled glasses and then look at a computer screen that has all sorts of sensitive and protected data on it, because you're violating HIPAA by doing so. Entities need to be aware of that. You need caution, caution, caution; you cannot throw caution to the wind here. It has to be front and center. You need a detailed risk management program and a detailed enterprise risk management program to assess this. Then you need oversight and training, again and again, because there aren't any other guardrails out there; your entity is going to have to create its own. You have all these other laws and regulations you need to comply with. Whether you're using AI to buy toilet paper or to make medical necessity benefit determinations, you need to make sure it's being used properly, that it has a high accuracy rate, that there are humans in the loop, et cetera. There's no corner cutting there. You can't do that, or else you're going to find yourself in front of a regulatory agency or in a class action lawsuit. That leads me to the last thing I wanted to make sure we touched on: what happens if there is a problem? You've got coverage issues related to risk. There's risk identification, and we've talked about some of those risks. There's risk quantification, which is an exposure issue; it could be small or it could be very significant. Then you have mitigation, which is risk management, ERM, legal and others being involved in trying to mitigate the risk. Then you've got risk finance, because you have to finance for risk. So, what do you do? Oversight issues fall under your D&O coverage. Third-party claims for loss would probably fall under your E&O coverage or your professional liability coverage. Then you've got cyber, obviously, and then cyber tech, which applies if you're developing software and licensing or selling it; you need the cyber tech coverage because your normal cyber program wouldn't cover you for things like that. You also have issues related to EPL, employment practices liability coverage, if you're using AI for hiring. Then you've got property and commercial general liability coverage, and when we get to automated cars, even your auto coverage is going to be impacted. Insurers, just like health insurers or managed care plans, evaluate risk from an actuarial science perspective too. They're going to be looking at frequency and severity, and then at the terms and conditions of their policies: what is covered, what isn't covered and what they don't want to cover. They probably don't want to cover your terrible misuse of AI for something you shouldn't be using it for. With all of that in mind, I think the managed care industry, as a sub-sector of the healthcare industry, and the healthcare industry itself have a lot to look at. There's a lot of hype around artificial intelligence; whether reality meets that hype in the future, we don't know. Kathy, I want to flip it back over to you: what would you tell a company in this space? What do they need to pay attention to in the next six months, 12 months, in order to walk the line that they ought to?
Kathy Roe (18:58):
I would say don't let yourself be blinded by the hype to the work that needs to be done alongside analyzing and putting AI tools in place. There has to be the work you and I just talked about relative to governance and risk management, because those aren't one-and-done things; they have to be updated, and they really ought to be examined on a use case by use case basis. Companies also shouldn't sit back and assume their insurance coverage is there; they should be asking questions now to understand the scope of the different coverages you mentioned. Because if they don't have third parties that are going to cover that risk, then they need to be thinking about other ways to cover it, whether through their own self-insurance or through contracting with third parties, whether those third parties are vendors selling them AI models, components, testing data or training data, or their own customers. What are they committing to their customers by contract, and are they potentially promising too much? I'd also be looking at the novel theories plaintiffs' lawyers are already putting forward to construe existing laws on the books and apply them to AI. What insight can you take from those players in the legal marketplace to inform your own protective, risk-mitigating activities?
Kenny White (20:46):
Thank you very much, Kathy. This has been very educational. I appreciate you coming on the podcast with us to help us educate our listeners on AI regulations that impact healthcare and managed care. With that, I'll close us off for the day.
Alliant note and disclaimer: This document is designed to provide general information and guidance. Please note that prior to implementation your legal counsel should review all details or policy information. Alliant Insurance Services does not provide legal advice or legal opinions. If a legal opinion is needed, please seek the services of your own legal advisor or ask Alliant Insurance Services for a referral. This document is provided on an “as is” basis without any warranty of any kind. Alliant Insurance Services disclaims any liability for any loss or damage from reliance on this document.