Specialty Podcast: AI Business Strategy Risks for PE Investors
By Alliant Specialty
Private equity investors are leveraging AI as an integral part of their business strategy. With this exciting technology comes a growing concern: data privacy. Join Chad Neale, Alliant M&A | Cyber, and Sara Haneef, BluDot Advisors, as they discuss AI due diligence as well as data compliance, cybersecurity and governance frameworks for PE firms to ensure a successful and responsible acquisition.
Intro (00:00):
You are listening to the Alliant Specialty Podcast, dedicated to insurance and risk management solutions and trends shaping the market today.
Chad Neale (00:09):
Hello everyone, and welcome to another Alliant M&A podcast. Today, we'll wrap up our two-part series on due diligence strategies when acquiring AI-enabled businesses. In the first part, we discussed due diligence considerations when acquiring businesses that have developed an AI-enabled proprietary application. In that podcast, Haytham Allos shared areas of focus and his approach to executing due diligence in those areas. If you haven't already listened to that informative podcast, I highly recommend you do so if this is a topic of interest. Today, we'll dive into the same areas, the same types of questions, but instead we're going to be focusing on businesses that are leveraging AI as an important or critical part of their overall business strategy. A 2024 survey by Forbes Advisor indicates that over 50% of companies are currently using or exploring AI tools in their business operations today, up from almost 0% just five years ago. This rapid adoption is transforming industries, with PE firms increasingly looking at AI-powered companies for potential acquisitions. However, with this exciting opportunity comes a critical risk: data privacy. Joining us today is Sara Haneef, legal counsel and privacy and data compliance consulting leader at BluDot Advisors. Sara has been in the trenches during both the pre- and post-acquisition phases of a transaction and has joined us today to share key areas PE firms should consider when evaluating a target's AI business strategy. We'll explore how to ensure proper data compliance, cybersecurity and governance are in place to mitigate risk and protect valuable information. Hi Sara, thank you for joining us. Let's start by having you give the group a little background on the work that you're doing today for PE in the area of privacy, particularly as it relates to AI.
Sara Haneef (02:17):
Thanks, Chad. It's always nice to collaborate with your team, and it's an exciting time in the world of AI, so happy to connect. A little bit on my background: I've been working with PE firms for many years now on due diligence deals, both pre-acquisition and post-acquisition, from a privacy perspective. Now, with the boom in AI, that work includes how target companies are utilizing AI technology and the risks that poses from a privacy and a cybersecurity perspective.
Chad Neale (02:45):
Fantastic. As the Forbes report showed, we're seeing this more and more when we're involved with transactions. Thank you so much for joining us today. Let's start by thinking about some of the areas of risk. Obviously, the innovation and the opportunities are there, but just like anything, managing the risk upfront is really important. With that in mind, how would you describe some of the specific data privacy risks that investment teams should be thinking about when evaluating a company's AI business strategy?
Sara Haneef (03:20):
I would start by saying that AI poses many of the same privacy risks that privacy professionals such as myself have been tackling for at least the past decade. All of those critical considerations from a privacy perspective are still very relevant in AI: obtaining consent from individuals, allowing individuals the capability to exercise certain rights over their personal information, or the risks associated with sharing that personal information with third parties. These are still relevant when assessing a target company that's utilizing AI. The risk specific to the world of AI, and the one that matters to investment teams at PE firms, is scale. AI systems are data hungry, and we have even less control over what information is collected and what it's processed for. You have an ever-growing, ever-changing data set, which makes addressing the privacy risk very difficult. One small example with some big implications is obtaining consent from an individual. If you have an AI technology that uses personal information and then spits out a predictive model, that model may target an individual who was never in the original data set. These target companies are now dealing with questions of how to get consent, and when they got that consent, when technically that person was never in the original data set. These are some of the complexities that investment teams at PE firms are facing as they grapple with familiar privacy concerns at a much larger, more complex and ever-growing scale.
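To make the consent gap Sara describes concrete, here is a minimal, hypothetical sketch (invented field names and data, using scikit-learn) of a model trained only on consented records that still produces a prediction about a person who was never in the original data set:

```python
# Hypothetical illustration: a model trained only on consented records
# can still score an individual who never appeared in the training data,
# so no consent was ever captured for that person.
from sklearn.linear_model import LogisticRegression

# Training data collected WITH consent (invented values):
# features = [age, monthly_spend], label = responded_to_offer
X_train = [[34, 120], [52, 300], [23, 80], [45, 210], [61, 150], [29, 95]]
y_train = [1, 1, 0, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# A brand-new individual, never in the original data set and never asked
# for consent, is nonetheless profiled by the model's output.
new_person = [[38, 175]]
print(model.predict_proba(new_person))  # probability of being targeted anyway
```

This is only a sketch of the mechanism, not anyone's actual pipeline; the point is that the consent obtained at collection time never covered the person being scored.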
Chad Neale (04:41):
That makes sense. The lack of control certainly adds some new wrinkles, much like the approach you need to take whenever you transfer data to a third party, except now it's an AI system that we're involved with. I look forward to diving into some of your recommendations on how to manage those types of risks. But first, you talked a lot about individual privacy. Can you expand on the challenges that companies face just protecting potential business secrets, trade secrets and intellectual property? How does AI come into play in that regard?
Sara Haneef (05:22):
The implications of AI are far reaching for companies and certainly go beyond privacy. For open source specifically, there are IP concerns: issues regarding intellectual property rights and ownership. We are seeing many organizations struggle with the concept of ownership over the data that these models generate. For example, if a target company has outsourced some AI capability to a third-party AI vendor, then who owns that data? Is it the target company, the consumer or individual, or is it the third-party vendor? These are some of the discussions that are currently arising. Another trend we're seeing is around the use of vendors and business confidentiality. Within these due diligence deals, a target company is working with a variety of vendors, some of them 20, 30, 40 vendors depending on their size, that all in some way or form want to roll out AI features, either quietly or at the direction of the target company. The issue there is that the use of these AI features often conflicts with business confidentiality policies. Take, for example, an AI feature that records phone conversations or meetings and translates them into to-do tasks; we're seeing that as a consistent use of AI. The feature then sends those notes or follow-ups to all of the stakeholders who were in that business meeting, and that output could very likely contain business confidential information. What becomes of that information is a confidentiality issue. The implications of AI go beyond privacy and data protection, and it's critical to understand how AI can affect business operations within a target company so that PE firms can position themselves successfully in these areas.
Chad Neale (06:50):
That's really interesting, and I don't think a lot of people connect those dots when they're thinking about the implications of AI. All the benefits, we could list those and they make a lot of sense, but how do you get your head around that potential challenge? You touched on open-source AI. From a data control perspective and an ownership perspective, how do closed and open-source AI approaches differ in terms of the challenges around privacy compliance in an acquisition?
Sara Haneef (07:22):
Yes, that's an interesting question. I'll start with open-source AI, since many of us are familiar with that. These are systems where the source code, or the tool itself, is open to the public, like ChatGPT. Here, the major privacy concern is data protection and security. Businesses have a wide range of users working with these AI tools, and the open-source style may be more susceptible to potential security risks or vulnerabilities. The other big concern from a privacy perspective is quality: data quality matters when data is being ingested in mass amounts to feed these predictive models, and you want to make sure it's accurate and relevant so it's ultimately useful. Closed AI refers to AI where the model and data are proprietary and not openly shared with the public; users typically interact with some sort of interface. Think of commercial products such as Siri or Alexa. The major risk here is transparency. Closed AI systems often operate as black boxes: inputs are fed in and you receive outputs, but the processes and algorithms behind the scenes are concealed. This lack of transparency poses a lot of risks from a data privacy perspective, such as the potential for bias or discrimination, which is concerning when the personal information of individuals is involved.
Chad Neale (08:38):
Sara, you've done a nice job setting the table, talking about some of the challenges and risks associated with AI, the things that investment teams should be thinking about. Back in February, when you and I were first talking about establishing a strategy that we could bring to our clients, you shared a due diligence framework that you had worked up that effectively helps you go through this evaluation and make sure you're covering the key areas. Can you elaborate on the importance of bringing such a framework to these deals in order to identify material risks related to an AI strategy?
Sara Haneef (09:17):
Yes, so that framework that we've developed really covers the key areas of governance from an AI and privacy perspective. It's important to note that regulations are playing catch-up; this was the case with privacy, and now we see it with AI as well, with the technology moving faster than the regulations. It's important to acknowledge that there's no total, hundred-percent compliance, but companies can show a good faith effort, which has actually gotten them pretty far in the event that there is any regulatory action. But to answer your question, how can PE firms ensure that the target company's AI activities comply? It's important to identify the overlaps and trends in the regulations. Globally there's an ever-growing list of regulations, both in privacy and in AI. We have the GDPR in the EU. In the US, there's a state patchwork of laws and the looming discussion of a federal regulation. It would be nearly impossible for a PE firm to identify each and every regulation and develop a separate compliance roadmap for each one. What we've seen PE firms be more successful at is understanding the major trends that knock out the bigger requirements, because there are overlaps across all of these regulations: items such as consent, notice, contractual obligations, and third-party and vendor management. There are nuances within each of the laws identified, and once you can identify and risk rank those nuances, PE firms are better prepared to tackle this messy world of privacy and AI regulations.
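As one way to picture the overlap-and-risk-rank approach Sara describes, here is a small, hypothetical sketch; the regulations, obligations and scores are invented for illustration and are not an actual compliance roadmap:

```python
# Hypothetical sketch: map overlapping obligations across privacy/AI
# regulations, then risk-rank them so the biggest requirements are
# tackled first. All entries and scores are illustrative.
REQUIREMENTS = [
    # (obligation, regulations it appears in, risk score 1-5)
    ("consent",                   ["GDPR", "CCPA", "EU AI Act"], 5),
    ("notice / transparency",     ["GDPR", "CCPA", "EU AI Act"], 4),
    ("vendor / third-party mgmt", ["GDPR", "CCPA"],              4),
    ("contractual obligations",   ["GDPR", "CCPA"],              3),
    ("retention limits",          ["GDPR"],                      3),
]

def roadmap(requirements):
    # Obligations shared by more regulations, and with higher risk,
    # "knock out the bigger requirements" first.
    return sorted(requirements, key=lambda r: (len(r[1]), r[2]), reverse=True)

for obligation, regs, risk in roadmap(REQUIREMENTS):
    print(f"risk={risk} covers={len(regs)} regs -> {obligation}: {', '.join(regs)}")
```

The design choice mirrors the advice in the conversation: rather than one roadmap per law, the shared obligations are handled once and the law-specific nuances are ranked afterward.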
Chad Neale (10:32):
Yes, that framework is really critical to ensuring you're able to cover the areas that are most germane for a buyer, to make sure they understand what they're getting in terms of an AI strategy and where there might be potential risks or landmines that could be material to the deal. With that in mind, could you share some of the red flags you've seen that indicate a target has an inadequate data privacy approach around its AI business strategy?
Sara Haneef (11:08):
Yes, so one of the major red flags that we've seen is when a target company is not sure what AI features it has rolled out, and I don't mean that as a joke. Many target companies have numerous vendor contracts, and they really are not aware of what features they use internally or through those vendor contracts. Sometimes it's because the vendors have quietly rolled out these AI features; at other times, it was someone specific in the business who had the relationship with that vendor and turned on the AI feature. That first red flag is really a lack of attention to identifying the AI capabilities across the target's organization. Another major red flag that we see is a lack of oversight of the contractual obligations, and that overlaps with the first red flag. There is right now a lot of scrutiny on the outputs of these AI models where they include personal data. So, who owns that output? Who obtains the consent required? Who owns the relationship with the individual whose personal data is being processed? A red flag is a lack of these discussions. It shows that there are some gray areas that could potentially cause problems as a target company matures in its use of AI and as that data becomes more and more valuable.
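One way to picture addressing that first red flag, the missing inventory of AI capabilities, is a simple register of vendors and the AI features each has enabled. The sketch below uses hypothetical vendor names and fields, purely for illustration, and flags features that lack a documented output-ownership clause:

```python
# Hypothetical sketch of an AI-feature inventory across vendor contracts.
# Vendor names, fields and values are invented for illustration.
from dataclasses import dataclass

@dataclass
class VendorAIFeature:
    vendor: str
    feature: str
    enabled_by: str                # "vendor rollout" vs. "business owner"
    output_ownership_clause: bool  # does the contract say who owns AI outputs?
    consent_responsibility: str    # who must obtain consent, if anyone

inventory = [
    VendorAIFeature("AcmeCRM", "call transcription", "vendor rollout", False, "unclear"),
    VendorAIFeature("HelpDeskCo", "reply drafting", "business owner", True, "target company"),
]

# Red-flag report: AI features with no contractual ownership of outputs.
for item in inventory:
    if not item.output_ownership_clause:
        print(f"RED FLAG: {item.vendor} / {item.feature} - "
              f"no output-ownership clause; consent responsibility: {item.consent_responsibility}")
```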
Chad Neale (12:11):
Right, you mentioned this technology is way out in front of any kind of compliance frameworks. It's also getting out in front of a lot of businesses before they've really thought through the implications. Oftentimes businesses assume people are not really using AI as part of their daily business routines, but in fact people are going to ChatGPT, they're using Gemini, they're using these various tools that are readily available, and they are putting potentially sensitive information, proprietary information into these systems without really thinking through the governance and the way to effectively manage that. Would you say that today you're still seeing a fairly nascent approach to AI business strategy governance, just in general?
Sara Haneef (13:02):
Yes, so we see target companies at all points in their journey with AI. Some are just shopping around and are at the very beginning, while others have really intertwined it into their business. And regardless of where these companies are currently, where they'll be a year from now with AI will very likely, almost certainly, look very different in terms of what they're using AI for.
Chad Neale (13:21):
Again, this is changing so quickly; who knows what it's going to look like in a year or two. So, we want to build resilient governance programs. Given the dynamic nature of AI and data privacy, how can firms incorporate a strategy that really is future-proof, so you're not falling behind as this technology continues to evolve?
Sara Haneef (13:49):
Yes, and so what PE firms can do to future-proof, as you're saying, is to oversee the uses of AI and create guardrails for all future AI investments. The first thing that we're seeing is the capability to be selective within a target company. An excellent example of this is companies rolling out specific AI features, typically from a vendor, in phases across an organization. They're not just blanket-adding an AI feature into the organization; they'll focus on low-risk areas where there's not as much of a privacy or cybersecurity concern. They may not touch the HR or legal business functions, but they'll start with a lower-risk business function and then roll the feature out on a phased approach. Another approach is building guardrails: we are seeing PE firms actively develop their AI principles, the things that guide them in the world of AI when they're looking at acquisitions. A couple off the top of my head, as I've developed these for some PE firms, are incorporating privacy-by-design principles and being accountable to people. This allows teams across the organization to come back to a set of common principles that leadership or the PE firm agrees are the requirements for AI.
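A minimal sketch of the phased, principle-driven rollout Sara describes might look like the following; the business functions, risk tiers and principle names are invented for illustration, and real guardrails would come from a firm's own AI principles:

```python
# Hypothetical sketch: gate AI feature rollouts by business-function risk
# and by agreed AI principles. All names and tiers are illustrative.
FUNCTION_RISK = {
    "marketing": "low",
    "customer support": "low",
    "finance": "medium",
    "hr": "high",     # high-risk functions are not touched in early phases
    "legal": "high",
}

AI_PRINCIPLES = ["privacy by design", "accountable to people"]

def rollout_phase(function, feature_principles):
    """Return which rollout phase (if any) a feature qualifies for."""
    missing = [p for p in AI_PRINCIPLES if p not in feature_principles]
    if missing:
        return f"blocked: missing principles {missing}"
    tier = FUNCTION_RISK.get(function, "high")  # unknown functions default to high risk
    return {"low": "phase 1", "medium": "phase 2", "high": "deferred"}[tier]

print(rollout_phase("marketing", ["privacy by design", "accountable to people"]))  # phase 1
print(rollout_phase("hr", ["privacy by design", "accountable to people"]))         # deferred
print(rollout_phase("finance", ["privacy by design"]))                              # blocked
```

The point of the sketch is the ordering: principles act as a hard gate, while function risk only decides how early a compliant feature ships.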
Chad Neale (15:01):
That's a great point. These AI principles you described are paramount to ensuring there's an effective strategy. Can you share some examples of data management practices that a PE firm should be looking for when assessing a company's data privacy posture around AI?
Sara Haneef (15:24):
Yes, so we've seen in many of the due diligence deals we've completed that one good way to assess the target company is to get a sense of how well they understand their personal data environment. And there's a spectrum. Some companies are not collecting much data at all. Some are collecting a lot and have little idea of that collection or what the data flow looks like. Then there are others that are collecting a lot but also have robust data flow maps documented. Target companies that have a stronger understanding of data within their organization are typically stronger from a data privacy perspective. That understanding of data means knowing where it enters, where it flows and how long it's retained. We are seeing organizations use data flow maps for much more elaborate reasons than just regulatory purposes, and that's a good general gauge of a company's data privacy posture. And then of course there's documentation. Do they have a privacy policy, and do they have a process in place for making sure that privacy policy is up to date and accurate? Because we're seeing that privacy policy incorporate AI and how the target company is using AI throughout its business. Similarly, there should be documentation of roles and responsibilities regarding privacy, data protection and AI. Retention is an interesting one as well. The retention of personal data is often a discussion that arises when we're reviewing target companies. These companies often have stronger practices when it comes to the collection and use of data, but it falls off the radar when it gets to retention. Those are good indicators of a target company's data privacy posture.
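To illustrate the kind of data flow map and retention check Sara is gauging, here is a small, hypothetical sketch; the systems, data types and retention periods are invented. It records where personal data enters, where it flows and how long it's kept, then flags records past their retention limit:

```python
# Hypothetical sketch of a data flow map with retention checks.
# Systems, data types, dates and retention periods are invented.
from datetime import date, timedelta

DATA_FLOWS = [
    # (data type, entry point, downstream systems, retention in days)
    ("customer email", "signup form", ["CRM", "email tool"], 730),
    ("support call recording", "phone system", ["AI transcription vendor"], 90),
]

RECORDS = [
    # (data type, date collected) -- invented values
    ("support call recording", date(2024, 1, 5)),
    ("customer email", date(2023, 6, 1)),
]

retention = {dtype: days for dtype, _, _, days in DATA_FLOWS}

for dtype, collected in RECORDS:
    limit = collected + timedelta(days=retention[dtype])
    if date.today() > limit:
        print(f"OVERDUE: {dtype} collected {collected} exceeded "
              f"{retention[dtype]}-day retention on {limit}")
```

Even a register this simple answers the three questions in the conversation, where data enters, where it flows and how long it's retained, and makes retention visible instead of letting it fall off the radar.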
Chad Neale (16:47):
Thank you for that. I think this is a good place for us to conclude the conversation. It's been extremely insightful, and I've certainly learned a lot thinking through some of the strategies you're deploying today. Thank you so much for joining us, Sara. As we conclude this discussion on AI and data privacy, it's very clear to me that a thorough due diligence program and processes are paramount when firms are looking to capitalize on the potential of an acquisition that's leveraging AI. By prioritizing data compliance, cybersecurity and the robust governance framework that Sara walked us through, investment teams are going to be in a much better position to mitigate risk and ensure there are no surprises post-acquisition. With that, I want to thank everybody for listening, and please look out for the next M&A podcast, dropping soon.
Alliant note and disclaimer: This document is designed to provide general information and guidance. Please note that prior to implementation your legal counsel should review all details or policy information. Alliant Insurance Services does not provide legal advice or legal opinions. If a legal opinion is needed, please seek the services of your own legal advisor or ask Alliant Insurance Services for a referral. This document is provided on an “as is” basis without any warranty of any kind. Alliant Insurance Services disclaims any liability for any loss or damage from reliance on this document.