Specialty Podcast: From Efficiency to Exposure - Understanding AI’s Impact on Managed Care
By Alliant Specialty / November 13, 2025
CJ Dietzman and Tara Albin, Alliant Cyber, are joined by Kenny White, Alliant Healthcare, to examine the evolving role of artificial intelligence in the managed care industry. Their discussion explores how AI is streamlining administrative processes and compliance functions while simultaneously introducing new exposures around privacy, vendor accountability and cyber risk. Together they highlight how proactive strategies such as governance frameworks, employee training and organization-wide AI literacy can strengthen resilience and help manage risk in an increasingly automated environment.
Intro (00:00):
You are listening to the Alliant Specialty Podcast, dedicated to insurance and risk management solutions and trends shaping the market today.
CJ Dietzman (00:09):
Welcome everyone to another episode of the Alliant Specialty Podcast. CJ Dietzman here with you once again, cyber consulting leader for Alliant. Folks, I am thrilled today that we have two special guest colleagues to really get into key risks related to artificial intelligence in the managed care industry. Without further ado, let's get right to it. I want to invite my colleagues, Tara and Kenny, please, why don't you give a brief introduction to let our audience know who they're speaking to today. Tara, go ahead.
Tara Albin (00:43):
Hi, thanks for having us. My name is Tara Albin. I am part of the Cyber practice here at Alliant with a focus in managed care.
Kenny White (00:53):
CJ, this is Kenneth White or Kenny White. I am the director of the Managed Care industry group for Alliant Insurance Services on the specialty side. I lead a team of industry specialists and experienced placement and consulting personnel for the managed care industry.
CJ Dietzman (01:13):
Wonderful. Thank you so much for being here. Listen, folks in the audience, I told you we were set to have an in-depth discussion regarding artificial intelligence risks specific to the managed care industry. Let's get right to it. Kenny, we'll start with you. When I think about artificial intelligence, it could very well be one of the great innovations of our time. It could also be one of the great disruptors of our time, causing a significant reduction in force (RIF), if you will, and impacting the client organizations that we work with every day. From a cyber risk perspective and from a compliance standpoint, what are you seeing in the industry right now in terms of key themes? Where are you seeing a tendency toward more progressive adoption? Where are you seeing key areas of focus, and maybe some of those killer use cases, in the managed care industry around artificial intelligence?
Kenny White (02:09):
Certainly. Let's start with the fact that artificial intelligence isn't really artificial or intelligent; it's predictive analytics. The more complete your data sets, the better your predictive analytics will be. We can skip the Skynet problem from the movies and jump into real use. Roughly $300 billion went into the development of artificial intelligence platforms and uses in the last year alone. In the managed care industry, most of those are aimed at administrative processes: streamlining compliance, whether contractual or governmental regulatory compliance; purchasing and inventory; revenue cycle management; research; production; benefits; claims processing, including things like prior authorizations, medical diagnostics and medical necessity determinations; and treatment pathways. It really depends on whether it's the PBM industry or the traditional payer industry we're talking about at the moment. But as AI develops, the use of an AI platform to streamline, make things more efficient, reduce labor force, reduce inaccurate claims determinations, et cetera, is improving exponentially. As the products and platforms get better and better, you'll see these things used more and more. Those are the primary ways it's being used in managed care right now.
CJ Dietzman (03:38):
Thank you so much for that, Kenny. Tara, I want to pivot to you now and talk about some of the undesirable side effects, dare I say, of innovation and the technology revolution that may come along with artificial intelligence adoption in the managed care industry. When we see this type of new, revolutionary technology, we typically see a rise in risk driven by the vulnerabilities that so often come with it, whether related to third parties or otherwise. Tara, what are you seeing in terms of key risks that you're tracking? And specifically, what are you seeing around expectations from underwriters and carriers from a cyber insurance standpoint?
Tara Albin (04:25):
With the carriers, there are definitely more questions coming around the use of AI. Frankly, they're asking them of all industry classes because, let's face it, everybody's adopting it in some way. Carriers are asking more about how it's being utilized within the organization and whether there's employee training and an acceptable use policy. Because what I find, and many clients will even admit this, is that they may tell employees not to use ChatGPT or one of the other AI platforms, but employees are using it anyway unless it's specifically disabled. Employees need to understand, if they're going to use it, what they're putting into it. Especially in the managed care industry and other healthcare-type organizations: if you put PHI or other sensitive information in there, it's out there. Employees need to be very aware of the information they're putting into it. That said, I haven't seen any exclusions from carriers around AI. There are definitely more questions coming; like I said, I think carriers are just trying to understand their potential exposure and which industries are routinely using it for certain purposes. I have seen some endorsed coverages broadening coverage. It's not that AI was previously excluded. Take a social engineering event as an example: AI technically wasn't excluded if it was the reason for the social engineering claim. However, I have seen carriers expand their language and add specific wording saying "including artificial intelligence." Just as companies are adopting AI, threat actors are adopting AI, because they need to get more sophisticated in the way they get into organizations. They are definitely using AI to their advantage. The voice AI they use, the video AI they use, it's very, very convincing. For now, I would say carriers are gathering the data around AI and their exposures. Right now we are not seeing exclusions, but there are definitely more questions coming.
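To make the kind of guardrail Tara describes concrete, here is a minimal sketch of a pre-submission filter that redacts obvious identifiers before a prompt ever reaches an external AI tool. The regex patterns and the scrub_phi helper are illustrative assumptions, not a production control; real PHI detection is far broader than a few patterns.

```python
import re

# Illustrative only: real PHI detection needs far more than a few regexes
# (names, member IDs, medical record numbers, free-text clinical details).
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def scrub_phi(prompt: str) -> tuple[str, list[str]]:
    """Redact obvious identifiers and report which kinds were found."""
    findings = []
    for label, pattern in PHI_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt, findings

cleaned, found = scrub_phi(
    "Member DOB 04/12/1978, SSN 123-45-6789, was denied prior authorization."
)
print(cleaned)  # Member DOB [DOB REDACTED], SSN [SSN REDACTED], was denied prior authorization.
print(found)    # ['SSN', 'DOB']
```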
CJ Dietzman (06:46):
Fantastic. Great stuff. These are fascinating times. I was meeting with a client just yesterday, in fact, on-site, and we were going pretty deep on some of their cybersecurity controls and enhancements, and the topic of artificial intelligence came up. This particular organization had some serious concerns around business units and operational functions that were ahead of the curve, dare I say: very assertive and aggressive about looking at various third parties, at additional services from existing vendors, or in some cases at new potential vendors and service providers making some pretty big promises around use cases and the ROI of AI adoption. What it's driving, candidly, is a lot of cavalier behavior within that particular organization. One of the things I'm seeing across industry sectors is that organizations really need to get back to basics, particularly around third-party risk management, to the extent that they're leveraging the AI services and solutions of third-party providers, whether software providers, technology providers, customer service solutions or others. Organizations, unfortunately, are forgetting some of the tough lessons learned, and the misses, around good third-party risk management: knowing your vendors and managing those risks. Kenny and Tara, has that been consistent with what you're seeing in managed care, or is it different? What do you think?
Kenny White (08:15):
I'll leap in here first. On the managed care side, most of the claims and issues we've been seeing fall into a few categories. There's use in employee hiring, which normally falls to an EPL policy; use in what we refer to as managed care professional services, which would normally fall under an E&O policy; and business uses, which would normally fall under a D&O policy. The use people are most concerned about where cyber and managed care meet is privacy: issues involving the computer systems of the managed care company or its vendors with regard to privacy, or a ransomware attack where systems are locked up for a period of time. Those situations would normally fall to the cyber tower. One of the things we've been harping on a lot is vendor accountability. Underwriters in the cyber world are very interested in vendor accountability and in contract language: insurance clauses, indemnity clauses and certainly limitation of liability clauses, which are usually designed to protect the vendor from significant liability even though the vendor can cause outsized loss. As AI gets more involved, underwriters will become more interested in vendor accountability with regard to AI and in ensuring you know what your vendors, as well as your own business units, are using. Those things are going to come to the forefront more and more on the cyber risk side, mainly because the regulatory efforts directed at AI at the moment are primarily aimed at privacy.
CJ Dietzman (10:01):
A hundred percent. That's why I led with that, Kenny. The whole concept of what we refer to as vendor cyber risk management, or vendor risk management, was relevant and key 10 and 20 years ago. It's even more critical today with the advent of artificial intelligence, yet many organizations are forgetting some of those lessons learned. There's nothing new under the sun in that regard, although the risk is heightened and elevated. Tara, what do you think?
Tara Albin (10:28):
Yes, I agree. On vendors using AI, like Kenny said, I think specific language needs to be worked into the contracts, and I'm not sure clients are really thinking about that. Also, let's not forget the HIPAA updates; there's a significant focus on AI for healthcare organizations, so that's another area. When I'm speaking with clients and AI comes up, my overwhelming sense is that companies know they have to use it: it's coming, it's going to be great for the organization, and it comes with risks. There's a little bit of denial, too, when it comes to AI and how impactful it really can be, both good and bad. I think organizations are struggling to get their arms around it, especially the large ones with many entities and subsidiaries affiliated with them. It's almost like everybody has to take a collective breath and a step back, while knowing we've all got to get ahead and keep implementing it.
Kenny White (11:29):
I was just going to mention that if you're in one of the usual states where regulations are a significant issue (California, Washington, New York, Illinois, Connecticut, Massachusetts, et cetera), those states are enacting additional regulations on the use of AI. Those impact managed care quite directly, because a lot of them are aimed at issues related to prior authorization or security measures around privacy. But the Trump administration has taken a very hands-off approach. It's a tactical decision. They've made an effort at growing AI as an industry and an economic driver, whether for national security or economic supremacy or whatever you want to call it. When you start putting guardrails up or significant regulations on a startup industry, the money and the innovation generally run away from that industry. The Trump administration does not want that to happen, which is why it proposed the 10-year ban on state regulation of AI platforms. But regulation is going to happen. Most of it in the first couple of years is going to come, as you and Tara both said, at the industry level, where the entities in the industry will have to come up with their own policies, procedures and protocols, put them in place and enforce them, to limit the possibility of a loss or a claim related to AI use, whether it affects the company externally or internally.
CJ Dietzman (13:11):
These are fascinating times for all the reasons you both mentioned. On this topic, I always tell my clients, when we're talking about AI risk or any other technology or innovation risk: the last thing you want to do is wait for regulation to save the day, because it won't. Regulatory compliance is not security, and it is not cyber risk management. That's why I maintain that a top-down, risk-based approach to the adoption and implementation of any innovative AI solution or service rules the day. Let's make sure we identify our critical use cases; map them to sensitive data (privacy was mentioned); and map them to other key risks, whether related to transparency, auditability or accountability, all those things that can be challenges with the algorithms and the output from AI, including managing bias. If we do a thorough walkthrough and analysis of our use cases, if we identify critical sensitive data that could lead to a significant privacy event, if we look for key risks like bias, heaven forbid, in our AI-enabled business processes, if we take that principles-based approach, we're going to get ahead of it. We may not be perfect, but we will get ahead of the critical risks. We'll at least be able to track them, mitigate them and transfer some of them, and regardless of what the regulatory agencies and bodies do, we will be directionally aligned with where they go. The concern, though, and what I think a lot of clients and organizations are going through right now, is that absent formal guidance, some organizations are getting quite cavalier about it and potentially getting out over their skis. That's what we're afraid of.
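As one concrete illustration of the use-case mapping CJ describes, here is a minimal sketch of an AI use-case risk register. The AIUseCase structure, its fields, the vendor name and the escalation rule are all hypothetical assumptions, just one possible shape for tracking use cases against sensitive data and key risks.

```python
from dataclasses import dataclass, field

# Hypothetical sketch; the fields and the triage rule are assumptions,
# not a standard framework.
@dataclass
class AIUseCase:
    name: str
    vendor: str                 # third party supplying the AI service, if any
    data_categories: list[str]  # e.g., "PHI", "claims history"
    risks: list[str]            # e.g., "bias", "auditability", "privacy"
    mitigations: list[str] = field(default_factory=list)

    def needs_escalation(self) -> bool:
        # Simple triage: sensitive data plus more risks than mitigations.
        return "PHI" in self.data_categories and len(self.risks) > len(self.mitigations)

register = [
    AIUseCase(
        name="Prior authorization triage",
        vendor="ExampleVendorAI",  # hypothetical vendor
        data_categories=["PHI", "claims history"],
        risks=["bias", "auditability", "privacy"],
        mitigations=["human review of all denials"],
    ),
]

for use_case in register:
    if use_case.needs_escalation():
        print(f"Escalate for review: {use_case.name} ({use_case.vendor})")
```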
Kenny White (15:04):
AI literacy is going to be a catchphrase you hear over and over again. As with any new technology, there are folks like myself who are old dogs, and these are new tricks. Fortunately, I am not alone in the industry; there are lots of people like me who went to school and started in the industry before computers were readily available to anybody. The concept of AI literacy means understanding what it is, how to use it, when not to use it, how to put effective guardrails up, and how to use cyber and IT security risk management to effectively mitigate your risk. That applies to any industry, but certainly to managed care, where you have privacy concerns, the collection of protected data, whether financial or medical, and entities in the space with so many touch points into their computer systems. That means many different ways in for nefarious actors, whether they're criminals or hacktivists, I guess is what they call them, and many opportunities for mistakes and accidents. Those risks are very real to this industry. The more you turn critical infrastructure over to machine learning and machine actions, the more you need cyber risk management and cyber coverage for the potential for a loss.
CJ Dietzman (16:30):
Excellent, excellent. Listen, I feel like we could go all day, Tara and Kenny, talking about this, but I'm going to put you both on the spot here with a bit of a hot seat question. Tara, I'll start with you. In terms of forward-looking advice for our clients out there in the risk management space and the cybersecurity realm, what are one or two proactive actions our clients could take right now, things to think about going forward? Tara, what do you think?
Tara Albin (17:00):
For me, I know I always say train your employees, but continue to make sure the use of AI is being worked into those quarterly or annual employee trainings. Make them understand the risks that come with using AI to perform their duties. And for organizations, with AI coming, and already here for many, make sure you have that all-important governance policy in place for its use. Get your arms around the exposure and don't be afraid of it. If you need help with a governance policy, reach out, get that help and get that policy in place early. Once there's heavy AI usage within the organization, it's a much bigger undertaking to get your arms around than if you start in the early phases of using AI.
CJ Dietzman (18:04):
Great stuff. Good advice, Tara. Thank you. Kenny, what about you? Quick hit, one or two things that you'd like to see clients in managed care focus on. What do you think?
Kenny White (18:14):
The first thing is to reiterate that regulatory inactivity, or regulatory activity, is not a substitute for risk management. You mentioned that before, and it is 100% true. Regulations normally arrive after something bad has already happened, and you need to be ahead of the game. The second thing is to vet every single AI application. Run it, test it, swap it out if you need to, and don't use it if it's in any way going to impact your accuracy or efficiency. The loss or cost of a bad claim related to AI use will dwarf whatever you've saved by using AI. So vet everything. The third thing is AI literacy. Train your employees on what they can and cannot use, and eliminate their ability to use things they shouldn't be using. If you don't want them using a large language model like ChatGPT, don't let them have it on their phones and their computers. Don't let them use it until they have the literacy and the ability to use it effectively and efficiently. The last thing is compliance. We will have regulations. There are states that have them now, there are EU regulations already in place, and we will have federal regulations. But remember something: there is almost no entity in the managed care space that is not considered a government contractor. The Trump administration has made a point of using compliance with regulations as the basis of False Claims Act (FCA) claims, because all of your contracts with the federal government require you to certify that you're in compliance with other relevant federal regulations. If regulations come out, you're not in compliance and you bill the government, they can come back and say you've committed a False Claims Act violation because you certified compliance while out of compliance with the AI regulations. Those claims can be extraordinarily expensive. So: compliance, literacy, vetting and risk management.
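On Kenny's point about eliminating access to unapproved tools, here is a minimal sketch of the kind of allow/deny check a web proxy might apply. The domain lists and the ai_request_allowed helper are hypothetical, and in practice enforcement would live in DNS filtering, mobile device management or a secure web gateway rather than application code.

```python
from urllib.parse import urlparse

# Illustrative only: real enforcement belongs at the network or endpoint layer
# (DNS filtering, MDM, secure web gateway), not in application code.
BLOCKED_AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "gemini.google.com"}
APPROVED_AI_DOMAINS = {"internal-llm.example.com"}  # hypothetical sanctioned tool

def ai_request_allowed(url: str) -> bool:
    """Deny known unapproved AI tools; allow only sanctioned ones."""
    host = (urlparse(url).hostname or "").lower()
    if host in BLOCKED_AI_DOMAINS:
        return False
    return host in APPROVED_AI_DOMAINS

print(ai_request_allowed("https://chatgpt.com/"))                  # False
print(ai_request_allowed("https://internal-llm.example.com/chat")) # True
```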
Tara Albin (20:28):
I have one final thought. Don't forget about how the third-party vendors you rely on are using AI, because that can affect you as an organization.
Kenny White (20:38):
Absolutely.
CJ Dietzman (20:39):
Fantastic. Great stuff. If I could put a cap on that, in addition to what you both shared, and you covered a lot of ground and covered it really well, what I would say is: clients out there in managed care, don't make the mistake of going it alone. Right now, the Alliant Cyber consulting and Managed Care industry practices are actively working with clients to help guide them on this journey. We're conducting workshops around AI and cyber governance. We're assisting with developing foundational draft policies and governing controls. And we're advising, from a cyber brokerage standpoint and a broader risk transfer standpoint, on strategies to make sure that we're not only mitigating risk but also considering broader, progressive risk management approaches that leverage insurance vehicles. Folks, this has indeed been a truly in-depth discussion. Thank you so much, Tara and Kenny, and thank you, dear audience, for joining another episode of the Alliant Specialty Podcast. It's been a pleasure speaking with you this afternoon about artificial intelligence and managed care. CJ Dietzman here, signing off. Until the next one, be well.
Alliant note and disclaimer: This document is designed to provide general information and guidance. Please note that prior to implementation your legal counsel should review all details or policy information. Alliant Insurance Services does not provide legal advice or legal opinions. If a legal opinion is needed, please seek the services of your own legal advisor or ask Alliant Insurance Services for a referral. This document is provided on an “as is” basis without any warranty of any kind. Alliant Insurance Services disclaims any liability for any loss or damage from reliance on this document.