Specialty Podcast: How AI and Misuse of Section 533 Are Affecting the Insurance Landscape
By Alliant Specialty
How do insurance companies navigate the fine line between denying coverage and honoring claims for deliberate behavior in management liability? Steve Shappell and David Finz of Alliant Claims & Legal review the alarming trend of Section 533 misuse: denying coverage for deliberate acts in management liability claims. They also discuss AI's impact on insurance, from security concerns to potential liabilities, highlighting the need for proactive risk management.
You are listening to the Alliant Specialty Podcast, dedicated to insurance and risk management solutions and trends shaping the market today.
Steve Shappell (00:08):
Good afternoon. Thank you for joining David Finz and myself, Steve Shappell, for this month's podcast following our monthly newsletter, Executive Liability Insights. This month we're going to talk about a couple of different things. I'm going to kick off by talking about an issue that's causing me a lot of concern. This is an issue that's been out there for years, and it seems like month after month I'm seeing more and more troubling behavior. The troubling behavior is the use of Section 533 of the California Insurance Code to deny coverage for management liability claims. This is problematic in the financial lines world by the very nature of the allegations that we see against our clientele. The directors, officers and leadership of our clients do not accidentally make decisions. They very deliberately make decisions, and the challenge with Section 533 is that when they get sued for deliberate behavior, deliberate fraud or willful behavior, with willful meaning very intentional conduct, we run the risk that insurance companies are all of a sudden going to point to Section 533 of the code and, I'll say, fabricate a basis to deny a large loss rather than honor the approach we take. We anticipate our clients are going to be accused of willful conduct, deliberate conduct and fraudulent conduct. Shareholder class action litigation consists of suits for fraud, which requires scienter. So, this is a big deal. In this month's newsletter, we feature Markel, and I'd encourage you to read this letter because Markel spends a lot of effort here finding reasons to not cover a claim. And what's really disturbing about it is we have, like we often do, many allegations against the insureds in this case, some of which in a vacuum would very clearly be covered by a management liability policy, like allegations of breach of fiduciary duty.
But here, the carrier labels everything willful conduct under Section 533 and argues that because this is not a duty-to-defend policy but an indemnity policy, they're not obligated to cover the claim. And correspondingly, anything that is related to those allegations of willfulness similarly will not be covered. "Inseparably intertwined" is the language asserted by the carrier and, disturbingly here, accepted by the court. And there's a lot of Section 533 litigation out there. So, a couple of things here. I think we've got to spend a lot of time looking hard at carrier behavior. There are a lot of carriers in the marketplace placing management liability, and all carriers are not created equal. Not all carriers are going to assert Section 533 each and every time they can, as opposed to when they have to. And so, we have to spend some time on that. And then the other question is, in light of this disturbing trend where carriers are aggressively using 533, are we at a stage now where we can no longer predict and trust when carriers are going to use Section 533 against directors and officers of companies?
And do we have to start crafting language around this, making the duty to defend contractual and obligating carriers not to assert 533 until there's a final adjudication, ironically, right? Do we need to go back to puni-wrap types of coverage for California exposures? If Markel's going to take this position, then Markel needs to issue a policy out of Bermuda, where they're not going to assert Section 533 under Bermuda law. So, a whole lot to think about. And I think one of the takeaways is we should all be alarmed at the gutting of the predictability of these policies responding and providing defenses to directors and officers. So, I think this is a good chance for us to take a step back, look at the people we're partnering with, and then again look at the language in these policies. With that, David, I know one of the other issues you and I were talking about is AI. I was commenting to you that we just got a couple of inquiries in the last week about AI-specific issues. Talk to us a little bit about what you're seeing.
David Finz (04:24):
Absolutely. Thanks, Steve. Yes, we've been getting peppered on a pretty regular basis with inquiries from clients around how their insurance programs might respond to claims, and what types of claims might arise from the use of AI. And some of this might have been prompted by the White House's executive order this past week to help manage the risks around artificial intelligence. So, I want to take a few minutes to break down some of the details of that executive order and also some of the insurance coverages that may be implicated by the use of AI, and surprise, it's not just cyber. So, first let's talk about security. The order requires companies developing AI models that may pose a serious risk to national security, the economy or public health to notify the federal government when they train the model. And they must share the results of all their red team safety tests with Uncle Sam as well.
This order directs the National Institute of Standards and Technology, or NIST, to develop the standards for testing. And the Department of Homeland Security is going to apply those standards to sectors within the nation's critical infrastructure. Now, there's obviously some enforcement exposure here for those companies, but there may also be an opportunity for service providers to do the red team testing that's now going to be required. And with that is going to come the potential for errors and omissions, or E&O, claims against those service providers arising out of any alleged negligence in the rendering of those services. So, you can see where there's an opportunity for some of these companies to go in and do the testing, but there's also a risk associated with that. The order also directs the Department of Commerce to establish standards and best practices for detecting content that was generated with AI.
They want this content to be clearly watermarked or labeled as AI-generated content so that the general public can distinguish between authentic government communications and imposters. This technology could have applications beyond just verifying content from federal agencies. There's some real potential here that this watermarking could be used to crack down on copyright infringement, deepfakes, misappropriation of image and portraying people in a false light. Now, all of those are typically the types of wrongful acts that would fall within a media liability policy. And so, we would take the position that regardless of how that content was generated, even if it was through the use of AI, those are standard media torts, and we would expect a media liability policy, or even a media liability insuring agreement that's part of a cyber policy, to pick that up. Now, one thing that's beyond the White House's control, which they're pushing for, is bipartisan support around data privacy legislation.
There is federal support for the use of cryptography or other types of technology that will preserve the right of individuals to privacy when their data is being harvested to help train AI models. And you can see where, depending on how this legislation develops, the failure to comply with these requirements could result in a privacy liability claim. And that could take the form either of a regulatory inquiry by the government or maybe even a private right of action by the aggrieved citizen, if they're allowed to bring that action under whatever data privacy law might result. So, we need to watch carefully what Congress does in response to the White House's order. It may not be something that gets done during this congressional term, but maybe after the 2024 election. If this legislation results, it could trigger coverage under a cyber policy. And then last but not least, the executive order says that the federal government is going to be providing guidance to federal contractors, to employers, even to landlords around the use of AI in a non-discriminatory fashion.
Now, the reason for this is that many civil rights activists have cautioned that AI models have the potential to embed bias into the training model and, as a result of that, help to foster discriminatory practices. Now, what a lot of businesses don't realize is that their employment practices liability policies will sometimes cover claims of unlawful discrimination that are brought by a non-employee, provided it's a category of discrimination that would've been covered had the allegation been made by an applicant or an employee. You sometimes see these claims brought by customers alleging violations of the Americans with Disabilities Act or saying that they were wrongfully denied service on account of being a member of some protected class. But these third-party civil rights claims can manifest themselves in many ways. And it's entirely foreseeable that the use of AI could generate claims that could be covered under this provision within an EPL policy. So, there's a lot in play here. You've got cyber, you've got media, you've got professional liability and potentially even employment practices liability claims. We haven't even touched on the potential for bodily injury and property damage; that's a whole separate category of coverages. But the bottom line is that the use of AI is going to permeate pretty much the whole world of liability and litigation in our specialty claims area.
Steve Shappell (10:06):
Yeah, and it's going to do it at an alarming pace, David. So, thanks for that insight. Very, very helpful. You know, closing comments: one of the things I would ask you to do is please go to the Alliant website and look at our monthly newsletter. It's really good. It covers a nice variety of topics, particularly some of the coverage litigation that I've already talked about and will touch on real quick. One of the claims or stories we write about is a piece of professional services litigation. And we see this issue come up a lot in our world. Professional services defines the scope of coverage for some lines of our placements, for some of our E&O and professional liability policies, and it serves as an exclusion for others. Many D&O policies, for example, will have a professional services exclusion.
And one of the cases this month deals with professional services and this exclusion. What's really interesting is that the court looked at this issue involving a carrier that writes a lot of insurance on both sides: D&O policies with professional services exclusions and E&O policies covering professional liability. And in the face of an exclusion applying to claims based upon, arising out of or attributable to professional services, the court did a nice job of rolling up its sleeves and looking at how this particular carrier had previously defined professional services as it pertained to affirmative grants of cover, and found that the position being taken was inconsistent. The court then really did a nice job of dissecting the government's False Claims Act allegations about whether certain loans qualified for certain programs.
And the court concluded that the government's False Claims Act suit here was not for professional services, even though, clearly, but for the issuing and placing of these loans, the False Claims Act claims wouldn't have existed. The court did a nice job of distinguishing that here the allegations and the wrongful conduct were really centered on the insured's failure to comply with government duties and obligations, and thus found that it was not a professional service and therefore not excluded under the management liability policy. So, very interesting. It goes to the heart of the breadth of some of these exclusions and the difficulty and frequency of coverage litigation we're seeing in these times of social inflation and increasing claim frequency. I'd encourage you to read the newsletter and reach out to us with any questions you have about this. You'll find this kind of proactive approach we're taking on these issues to be a more rewarding way to manage risk. And with that, I thank everybody for their time, and David, as always, thank you for your insight. Until next time.
Alliant note and disclaimer: This document is designed to provide general information and guidance. Please note that prior to implementation your legal counsel should review all details or policy information. Alliant Insurance Services does not provide legal advice or legal opinions. If a legal opinion is needed, please seek the services of your own legal advisor or ask Alliant Insurance Services for a referral. This document is provided on an “as is” basis without any warranty of any kind. Alliant Insurance Services disclaims any liability for any loss or damage from reliance on this document.