
Podcast

Specialty Podcast: Artificial Intelligence - The Thin Line Between Tool and Threat

By Alliant

Brendan Hall, CJ Dietzman, Bobby Horn and David Finz of Alliant Cyber discuss the rapidly evolving technology of artificial intelligence along with its risks and potential. The team sheds light on the intricacies of AI, encompassing EU regulations, the implications for insurance, and proactive measures aimed at optimizing risk management strategies.

Intro (00:00):
You are listening to the Alliant Specialty Podcast, dedicated to insurance and risk management solutions and trends shaping the market today.

Brendan Hall (00:09):
Hello and welcome to another Alliant Specialty podcast. I'm your ‘host with the most’ today. My name is Brendan Hall from Alliant Cyber. Today we have a panel of my esteemed colleagues from the cyber team: Bobby Horn, David Finz and CJ Dietzman. Gentlemen, everyone's fired up today to talk about artificial intelligence. What is it? Is it scary? Is it good? Is it bad? I like an analogy I heard recently: AI is kind of like fire. When fire first came along, everyone said, fire's good because you can cook food with it. And then, oh, fire also burned down our whole village. So, is it good? Is it bad? The answer is, we're still trying to figure that out. I think there are great applications for it, but there are things to be conscious of. I thought we could start out by just defining it.

I think too often we start throwing around these terms and everyone pretends they know what they mean. So, at a high level, AI is the ability of machines to take very large data sets, the tons and tons of terabytes of data we create every day, and discern some kind of meaning from them based on various inputs. These machines can learn in short order what a human being could take a lifetime, if not more, to pull apart and distill into a few bullet points. It's an extremely useful tool for quickly getting at the information you want from an otherwise incomprehensible dataset. CJ, in terms of cybersecurity, when you think about how AI is being used every day by cybersecurity professionals, what are some of the risks we're seeing out there that our clients should be aware of?
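As a rough illustration of that idea, here is a minimal sketch in Python with scikit-learn (an assumption; the discussion names no specific tools or datasets) of a machine learning a pattern from a dataset that a person could not practically sift through by hand:

```python
# Minimal illustrative sketch (not from the podcast): a model "learns" to read
# handwritten digits from ~1,800 example images, a pattern-finding task that
# would be tedious for a person to codify by hand.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                        # the "large dataset"
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)                   # a simple statistical learner
model.fit(X_train, y_train)                                 # discern structure in the data
print(f"Accuracy on unseen examples: {model.score(X_test, y_test):.2f}")
```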

CJ Dietzman (01:38):
Absolutely, and welcome everybody. What an important topic; something critical for us to be discussing right now. So, beyond the buzz and the hype, Brendan, a couple of things. One, what we're largely talking about here in the near term is machine learning capabilities and functions, and it's already out there. Most of us have used things such as chatbots for customer service; whether you realize it or not, you are embracing and using the principles of machine learning and artificial intelligence. If you use things like Siri or certain intelligent features within Amazon applications, you are leveraging AI and machine learning. Do you drive a Tesla, or any type of vehicle with automation and intelligence, self-driving capabilities, collision avoidance, that type of thing? Those are varying degrees of adoption of the principles and tenets that underpin AI and machine learning. So, it's around us today, it's here. But moving beyond the hype, let's talk specifically about organizational and business use cases and key risk considerations, especially in the cyber risk management and security realm.

First things first, Brendan: the point I want to get out there is, let's not be overly optimistic about expectations for the near-term benefits. Again, move beyond the hype; there can be early misses, disappointments and other negative outcomes if businesses or organizations do not see those ROIs. The other thing, in the realm of key risk, is that the rapid adoption and implementation of something like AI and machine learning in any use case can actually increase your cyber-attack surface. It can jeopardize privacy and intellectual property and potentially cause the disclosure or compromise of sensitive data. There's a lot at stake here. Something else that gets some buzz and hype these days is that cyber attackers are already using AI methods and techniques in an adversarial way. Just as businesses, organizations and municipalities can adopt it, the bad guys can adopt it too. And they are; in some cases they're probably ahead of the curve. That's an undesirable side effect, but also something to be aware of. A couple of other things come to mind: organizations will likely underestimate the level of effort required to develop, plan, manage, implement, tune, wash, rinse, repeat any AI or machine learning use case or technology. There are all kinds of other challenges. Brendan, I really want to hear from David; specifically, let's get into the compliance implications. David, what do you think?

David Finz (04:13):
Thanks, CJ. So, what I'm looking at right now are the standards of care that businesses will be expected to follow as they implement AI. And I think, as we saw with data privacy, what's happening in Europe can be very instructive in this area. Earlier this month, the EU Parliament reached an agreement with the European Council Presidency around new rules for artificial intelligence. I'm not going to go through all of them here, but one of them sticks out like a sore thumb to me, and that is a requirement that deployers of certain high-risk AI systems conduct what's being called a fundamental rights impact assessment prior to rolling out the system. Think back to the '70s and '80s, when the EPA required businesses to file an environmental impact statement so that people would understand the effect that whatever new plant or process they were going to be using would have on the ecology of the surrounding area.

Think of this as the environmental impact statement for AI, only the focus here is really going to be on human rights. And the penalties under this legislation are pretty substantial. For certain types of offenses, they could run as high as 35 million euros or 7% of a company's annual turnover, whichever is greater. So, this is very instructive. California is already looking at some proposed regulations around AI. We can expect that, as we saw with GDPR, what happens in Europe eventually finds its way over here, if not at the federal level, then certainly at the state level or through the kind of industry standards that get developed. For example, the draft framework developed by NIST, the National Institute of Standards and Technology, provides some guidance to businesses around governance, mapping and measurement of AI. These are all factors that are going to inform the standards of care that companies are expected to follow.

Brendan Hall (06:21):
And Dave, what would a violation look like from the EU’s perspective as it relates to AI and how would a company get tripped up by these regulations?

David Finz (06:29):
So broadly speaking, there are three tiers of penalties for non-compliance that companies could face, and not all of them are subject to the highest-range penalty I just described. Violations involving banned AI applications, things companies should not be using AI for at all, like behavioral manipulation, fall into that higher penalty range. Violations of the Act's obligations around how permissible uses of AI are deployed carry a mid-range penalty, which could run as high as 15 million euros or 3% of annual turnover. And then, for simply supplying incorrect information to the public or to the authorities about how the AI is being deployed, the fine goes down to 7.5 million euros, or 1.5% of annual turnover. There are also plans to roll out more proportionate fines for small and mid-sized businesses and startups.
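To make the tiering concrete, here is a small hypothetical Python sketch; the function name is illustrative, the figures are the ones quoted in this discussion, and it assumes each tier applies the "whichever is greater" rule mentioned above for the top tier:

```python
# Hypothetical sketch of the penalty tiers described above. Assumes every tier
# takes the greater of a flat amount or a percentage of annual turnover, as
# stated for the top tier; figures are those quoted in the conversation.
def ai_act_penalty_ceiling(tier: str, annual_turnover_eur: float) -> float:
    tiers = {
        "banned_application": (35_000_000, 0.07),      # e.g., behavioral manipulation
        "deployment_obligation": (15_000_000, 0.03),   # misuse of permissible AI
        "incorrect_information": (7_500_000, 0.015),   # misleading authorities or the public
    }
    flat_fine, share_of_turnover = tiers[tier]
    return max(flat_fine, share_of_turnover * annual_turnover_eur)

# Example: a company with EUR 2 billion in annual turnover
print(ai_act_penalty_ceiling("banned_application", 2_000_000_000))  # 140,000,000.0
```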

Brendan Hall (07:26):
I think it's interesting, and I think it's going to be difficult to actually come after companies, because as we all know, we don't really know how the outputs are being determined.

David Finz (07:34):
Right, and that's really something for the data scientists and technicians to figure out: what kind of rigor are they putting around how the AI model comes up with its responses, and what are the implications of that for the users? Are they infringing on the intellectual property of another party? Are they applying the model in a way that has a discriminatory impact? Are they invading the privacy of individuals in terms of how they collect and use the data? So there are a lot of potential exposures here from a liability standpoint.

Brendan Hall (08:05):
Well, talking about liability seems like a good time to shift this over to Bobby and talk about the implications from an insurance perspective and how AI is coming into play here. Bob, what do you think?

Bobby Horn (08:16):
Yeah, touching on some of the things David mentioned around compliance, I think one of the concerns we're tracking is large language models and their use within AI tools like ChatGPT when it comes to the collection of personally identifiable information. So, if you're using any of these AI technologies to collect information on consumers or employees, whatever it may be, do those data retention policies align with your company's data handling policies? There could be some issues around that, so certainly be more aware of the privacy implications you take on if you are using this type of technology. As of right now, policies don't contain any sort of exclusionary language around AI usage. Certainly, underwriters are becoming a little more aware of the capabilities, and we are starting to see a few carriers ask specific questions: if you are deploying AI, what sort of technology are you using, and what sort of policies and procedures do you have in place to make sure you're in compliance with EU or US data privacy laws?

That's one thing. I think the other concern we have is around cybercrime. Again, with these LLMs, we've all seen spear-phishing and business email compromise attempts in the past where the language was very broken English, poorly worded attempts at getting people to send personal or financial information over to a bad actor. With this new technology, the bad actors are getting much more sophisticated in how they deploy their attacks. So, you're going to see a lot more sophisticated emails coming from purported individuals that seem far more legitimate than they have in the past. From that perspective, a lot more vigilance is going to be needed on the part of companies, and particularly their employees, to make sure they're not getting duped.

Brendan Hall (09:53):
Yeah, actually, Munich Re has a product out now called aiSure that you can use whether you're a creator of AI tools or a user of them; you can insure the risk of AI getting it wrong. And CJ, in terms of what clients should be aware of, what should they be doing now? Because it's still such early days: legislation is still coming through, and everyone's just figuring out what this means, how it works and what the risks are. What should clients be doing preemptively now to protect themselves?

CJ Dietzman (10:23):
Yeah, absolutely. And wow, what a fantastic discussion, fellows, everything that David and Bobby shared. Certainly, Brendan, it's an incredibly important moment in the industry in the context of AI and risk management. The last thing an organization wants to do, Brendan, is to run headlong into this, ready, fire, aim, and then the wheels come off. I've seen it too many times over the years: with the launch of the dotcoms, with big data and analytics, with social media. When it goes wrong, and it will, it can go wrong in a big way. And we don't want to see organizations, municipalities and others run afoul of regulation, of good business principles, or of the foundations of cyber risk management. So, what do I say? Let's not forget the lessons learned from those use cases I just mentioned and countless others. Let's learn from failed implementations of revolutionary, innovative technologies in the past.

I'm very serious when I say that. For example, let's adopt the principles of a multi-layer cyber risk and security strategy. Let's embed cybersecurity tollgates, processes and controls throughout the effort. We're not necessarily looking to slow it down, and we're not looking to say no. What we're looking to do is enable it with good risk management principles that are going to protect the organization, not slow it down, not degrade the use case, but protect the organization. All the things Bobby mentioned are going to play a part in the risk management, but there's no silver bullet. We've got to have preventative, detective and monitoring controls. We've got to embed these principles into the project and make sure it's safe, secure and resilient. To David's point earlier: is the model explainable and interpretable? Is privacy embedded? Do we have any unfair or harmful bias built into the system, not intentionally, but because we didn't manage it and address it? Is the platform accountable and transparent? Is it valid and reliable? Now, that may sound like a lot, and perhaps it's overwhelming. The good news is, this has been done before; we can apply these principles to AI and machine learning, and we can apply them right now. We're talking to some clients about what that looks like. Brendan, those are some thoughts that come to mind.

Brendan Hall (12:54):
Yeah, I think those are really important, and I think it's super helpful for clients to think about that. So, with that, I think we've surrounded this issue pretty well here and given folks some practical advice. Any parting thoughts from anybody before we wrap up?

David Finz (13:10):
Yeah, I would just add that we always have to look at the plaintiffs' bar here in the U.S. They are going to be trying to formulate a theory of the case, as we call it: what is the justification for them to bring an action? In order to do that, they have to establish a standard of care and show that a company fell short of it. Now, private party litigation isn't as common in Europe, but again, I go back to the standards in the AI Act, to the draft standards that NIST has come up with and to what the state of California has come up with in its proposed regs. These are things that companies should be looking at over the next 12 to 24 months, because those standards could become the basis of the plaintiffs' bar's theory of how they could proceed on behalf of an individual, which, again, is something that folks in Europe don't have to worry about nearly as much as we do here. But those standards need to come from somewhere, and they're going to be something the plaintiffs' bar is looking at. They're also something businesses should be looking at in terms of establishing their own policies around AI.

Brendan Hall (14:17):
Yeah. To wrap it up, just make sure your kids are getting degrees in computer science, AI and data science, because that field is going to be a very hot place for a long time. Actually, I was just reading about folks talking, especially with these regulations coming, about having to have chief AI officers. We've got chief information security officers and chief information officers, and somebody's going to have to be responsible for AI. Continue to watch this space. Thank you all out there for listening. We appreciate it. We'll see you next time.

Alliant note and disclaimer: This document is designed to provide general information and guidance. Please note that prior to implementation your legal counsel should review all details or policy information. Alliant Insurance Services does not provide legal advice or legal opinions. If a legal opinion is needed, please seek the services of your own legal advisor or ask Alliant Insurance Services for a referral. This document is provided on an “as is” basis without any warranty of any kind. Alliant Insurance Services disclaims any liability for any loss or damage from reliance on this document.