
Podcast

Specialty Podcast: Beyond the Hype - Evaluating AI for Real Value in Acquisitions

By Alliant Specialty

Chad Neale, Alliant M&A, welcomes Haytham Allos, Chief Technology Officer at Cyberbian.ai, to discuss the evolution of Artificial Intelligence (AI), the AI systems prevalent today, and strategies for evaluating AI-enabled applications during due diligence. From dissecting the technology stack to assessing talent shortages and data security challenges, they provide insight for navigating the challenges involved with AI-driven businesses.

Intro (00:00):
You are listening to the Alliant Specialty Podcast, dedicated to insurance and risk management solutions and trends shaping the market today.

Chad Neale (00:09):
Hello everyone and thank you for joining us for another Alliant M&A podcast. My name is Chad Neale. I run the Alliant M&A Transaction Advisory Team, and I'll be your host today. We are very excited to have Haytham Allos with us to discuss a very exciting and dynamic topic. Haytham has spent his career providing CTO leadership to technology-centric businesses and recently has been working with leaders in blockchain and AI technology. The world of business is undergoing a massive transformation driven by artificial intelligence. In fact, a recent study by McKinsey found that 80% of executives believe AI will have a significant impact on their industries. And this isn't just about efficiency; it's about fundamentally changing how companies operate. At Alliant M&A, during our due diligence engagements, we are seeing a surge in AI adoption at companies being acquired by our customers. The focus is not only on marketing and customer service; we're seeing AI embedded into core business applications and the services companies provide to their clients. However, it's important for someone acquiring such a business to distinguish between true AI integration and simply leveraging AI buzzwords. Many organizations use AI as a marketing term for basic automation and business analytics, much like we saw the term cloud being kicked around 10 or 15 years ago. Today, 50% of the deals where we're performing technical and cybersecurity due diligence include a review of proprietary applications with embedded AI within the scope of our work. With that in mind, we thought it was important to spend time discussing the crucial role of AI due diligence. Today we'll delve into the different types of AI systems, open source versus closed, and explore key areas of focus when performing due diligence on AI-enabled applications. So Haytham, really glad to have you with us, and thank you again. Can we start off by getting a brief background on your experience working with AI systems?

Haytham Allos (02:29):
Well, it's nice to be here, Chad. Always enjoy talking to you and the rest of your team. I think it's a very important topic we're addressing today, as I live it every day. In terms of my background, I've been following this field for the last 30 years and watching how it has evolved, all the way from neural nets to rule-based systems to expert systems. It really came to a pinnacle in November 2022, when we saw the emergence of this technology. For the last six years I've been dealing with the technology at the strategic and the technical level, as the CTO of a blockchain real estate company and now leading an effort to build sophisticated AI systems for a company. It's definitely a new world we're in, and the techniques and tools I would've used six years ago are different, very much different from what we have today.

Chad Neale (03:29):
That's interesting, and I'm sure we could devote a whole episode to whether it's good for humanity or maybe the destruction of humanity, but for now let's focus on the types of systems out there. Beyond the marketing hype, how would you describe the different types of AI systems that organizations are building today?

Haytham Allos (03:52):
That's a good question. What has happened with AI over, let's say, the last three years is that the segmentation of the field has changed. Up until the release of ChatGPT on GPT-3.5 in November 2022, a lot of foundational work was being done. What I mean by that is there was a concentration of research and development on creating ChatGPT and, of course, other open-source models as well. But then there was a break, a fork in the road, where the technology started to be applied to real business problems. Companies started adopting it. So, and this is just me personally, the way I see the field right now is segmented into two parts. You have the foundational side, where people are working on making these large language models bigger, better, more efficient, and then you have people who are actually applying those technologies to real business problems. And those two segments are developing on their own rather than overlapping as they did before. So that's the way I see it.

Chad Neale (05:05):
Yes, thank you. That's really interesting, to learn about that evolution. I've got a couple of follow-up questions. First, are there any specific industries today leading the charge in bringing these AI technologies to their customers? And two, when you're performing due diligence on an AI-enabled company, what are some of the key areas of focus when you're thinking about a scope of work? What would you define as the areas you'd want to dive into during a due diligence?

Haytham Allos (05:36):
There's a lot of movement in terms of industries providing artificial intelligence services, and there's definitely a shortage of resources out there given the demand, when you look at, let's say, a due diligence project. Now, when we look at a company and start to analyze it for due diligence, traditionally we would look at the stack. We would look at whether they're using certain open-source technologies. What are they claiming? What's their secret sauce? That was more about the architecture, the databases they were using, things like that. But now I think we have to look at it very differently. Artificial intelligence can be injected at different layers of the technology stack. As you analyze the layers, for example at the user interface level, that human-machine interaction, we have to ask: is AI being used there? Are assistants being used there? Are they proprietary? Are they licensed from another company? And as you go down the technology stack from the user experience to, let's say, web services, the middleware, then the databases they use, and storage and cloud, we also have to look at how AI is being used there as well, because you could have AI doing things like vision processing, interpreting data on the backend. There's a lot of backend capability that AI can provide, and it's important that we take a look at those layers, analyze them, and then clearly convey where the value is, if such value is being claimed by the company, and remove as much of the noise around it as possible. That's what I really think: we have to look at all the different layers at that point.
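To make that layer-by-layer review concrete, here is a minimal sketch in Python of how a diligence team might inventory AI usage across the stack. The layers mirror the ones Haytham describes; the specific structure and questions are illustrative assumptions, not an Alliant methodology.

```python
from dataclasses import dataclass, field

@dataclass
class LayerReview:
    """One layer of the target's technology stack and the AI questions it raises."""
    layer: str
    ai_usage: str                     # where AI is claimed to be used at this layer
    questions: list[str] = field(default_factory=list)

# Hypothetical review plan mirroring the layers discussed above.
review_plan = [
    LayerReview(
        layer="User interface",
        ai_usage="Chat assistant embedded in the product",
        questions=["Is the assistant proprietary or licensed from a third party?",
                   "Which model and vendor sit behind it?"],
    ),
    LayerReview(
        layer="Web services / middleware",
        ai_usage="AI-driven routing and enrichment of requests",
        questions=["Is inference run in-house or via an external API?"],
    ),
    LayerReview(
        layer="Databases / storage / cloud",
        ai_usage="Vision processing interpreting data on the backend",
        questions=["What data trains or feeds these models, and who owns it?"],
    ),
]

for item in review_plan:
    print(f"{item.layer}: {item.ai_usage}")
    for q in item.questions:
        print(f"  - {q}")
```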

Chad Neale (07:35):
That's fascinating. Spending time with the marketing materials the company presents about its AI strategy, so that you can put together a scope and dive in at the right level, seems critical. You mentioned that finding the right people today is very difficult; there's a big shortage. How much does that come into play during the due diligence process for you? Are you interviewing the developers? Are you looking at the team? What kind of talent are you looking for?

Haytham Allos (08:10):
To accomplish a comprehensive due diligence, you have to have the right people, people with the knowledge to do this. And right now, this field is evolving at a very quick pace, so sometimes it's difficult to find people with the right knowledge. People are not keeping up. I myself, for example, spend about 30 to 40% of my time strictly on keeping up with what's going on in this field. That's R&D, that's reading, that's experimenting. The rest of my time is filled with my role, applying that knowledge. So it's a difficult thing, because the industry hasn't yet caught up with the pace of this technology, and the educational system hasn't had time to bring a new workforce into the area to go out there and do the work that's happening right now. And it's in demand.

Chad Neale (09:16):
Is there a type of background you look for in an individual or individuals at a firm, to check a box and ensure they're really building the application, where you could say, I'd be looking for this type of talent? Is it that simple, that you could define it, or is it more challenging than that?

Haytham Allos (09:38):
Yes, there is. For example, say you look at a team and they're claiming they've created their own model to do X, Y, Z and that they're able to provide this level of service. I'd be looking for team members with a master's or PhD in artificial intelligence or a related field like cognitive science. You also want to verify whatever is being claimed: are these resources outsourced? Are the things that have been developed patented? Who owns the patents, the intellectual property? There's a lot of that going on, and you want to make sure that what's being marketed, what's being claimed, as you said, has the right backing, so that whoever is intending to make this acquisition, let's say a private equity firm, has the right information. Is there a staff at this company with the right skills? Either it genuinely looks like they've written a lot of the secret sauce themselves, or it could be the story where they outsourced it to contractors somewhere else, and the contractors actually hold the secret sauce, not them.

Chad Neale (10:51):
One last question on this topic of staff and talent. Because of the shortage and the reliance on individuals to bring this technology to bear, does this introduce a new area of key person risk that we want to try to quantify, so that the buyer of the business understands how important a particular individual is to the overall business strategy, and what kind of impact it would have on the company if they were to leave?

Haytham Allos (11:24):
It’s a good point, Chad, and I absolutely agree with that assessment. It's just like any other key personnel; that's why we have insurance for them, things like that. And that's part of what Alliant does, insuring these key people. I really do think that's a key area: we have to look at who the individuals are that are actually delivering the value proposition being claimed, let's say in that particular area of the product. And is there a risk of those one or few individuals leaving, simply because there are not that many resources out there? It's an area that has to be taken seriously, because if those individuals leave, it could seriously hamper the company, affect its price, or affect the M&A acquisition itself if they were to leave right in the middle of it, for example. So there's a real risk that needs to be underwritten, needs to be assessed.

Chad Neale (12:22):
Yes, great feedback there. Okay, so you're doing your due diligence on an AI enabled application. How important is it in your process to really understand areas around data quality and security, and how they've handled that in designing their AI systems?

Haytham Allos (12:41):
It's a very good point again, Chad. This is the least understood area in what I'm calling applied artificial intelligence. The reason is that we tend to think of information security in a very structured way. I'll give you an example. We could have a bunch of traditional databases, and we could say, well, the data is at rest, and we know how to protect that information. We know we can put guards on those databases based on the accounts that are coming in, things like that. There is a clear path to defining what data security looks like at the different layers, because we've got structured information and structure-based methodologies for ensuring those levels of security. With AI, it's almost the opposite. We're taking unstructured information and putting it into this brain-like neural net model that has no logical structure as we traditionally know it.

And that presents a problem for security, because it's not easy to secure information in this kind of model; there can be a lot of leakage, and nothing is guaranteed. What we do is use a lot of prompt engineering and a lot of guards around these models to make sure that when someone puts in a prompt or does something with the AI, we don't get knowledge leakage in the response. In fact, ChatGPT, Llama, Anthropic, they all have these guards in place. For example, they don't want anybody learning how to make a nuclear bomb or a biological weapon, so they put these guards in. However, those guards are very susceptible to hacking. It's just like the traditional world, Chad, where you had people coming in and hacking systems. It's the same thing, except now they're using prompt engineering to try to get around these guards and pull that knowledge out in the response.
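A minimal sketch in Python of the kind of guard Haytham describes. The blocked patterns, the refusal message, and the `model_fn` stand-in are all hypothetical; production systems such as those behind ChatGPT or Claude use trained safety classifiers and layered moderation rather than keyword matching, but the shape is the same: screen the prompt on the way in and the response on the way out.

```python
import re
from typing import Callable

# Illustrative blocklist; real deployments use trained safety classifiers,
# not keyword matching, and screen both the prompt and the response.
BLOCKED = [r"\bnuclear\s+(bomb|weapon)\b", r"\bbiological\s+weapon\b"]
REFUSAL = "I can't help with that request."

def violates_guard(text: str) -> bool:
    """Return True if the text matches any blocked pattern."""
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKED)

def guarded_respond(prompt: str, model_fn: Callable[[str], str]) -> str:
    """Screen the prompt, call the model, then screen the response for leakage."""
    if violates_guard(prompt):
        return REFUSAL
    response = model_fn(prompt)
    if violates_guard(response):   # catch knowledge leakage on the way out
        return REFUSAL
    return response

# Usage with a stand-in model function:
echo_model = lambda p: f"You asked: {p}"
print(guarded_respond("How do I make a nuclear bomb?", echo_model))  # refused
print(guarded_respond("Summarize our Q3 results.", echo_model))      # passes
```

The prompt-engineering attacks Haytham mentions work by finding phrasings that slip past exactly these kinds of checks, which is why keyword filters alone are never sufficient.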

Chad Neale (14:57):
That's fascinating. It sounds like there are some new challenges to think through and design for when it comes to data quality and security. I want to spend a little time talking about growing concerns around AI bias. What steps can companies take to ensure that the AI systems they're investing in are fair and explainable?

Haytham Allos (15:21):
Yes, there are things that need to be looked at when it comes to "the bias of a system" and how it can be mitigated. We saw this with Google Gemini, for example, where it depicted famous people with the wrong racial characteristics. It all comes down to the data you have trained your system on, and any data is inherently going to have some level of bias. Large language models like ChatGPT are all trained on internet data, and we know internet data is inherently biased depending on what kind you have. So you're going to have something there. Now, what we try to do with these models, when there's that inherent bias, is correct for it by putting in these guards and filters and also by personifying the models. What I mean by personifying is that we tell the models they should act in a certain way, in such a way that they are as fair as possible to what we consider fair. There's no clear definition of fair, and that's the challenge facing these large language models. It's hard to enforce that fairness, because the training data is so large and there are so many possible prompts someone could come in with that produce biased responses. So there's a lot of that going on.
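A minimal sketch of that "personifying" technique: steering a model's behavior with a system prompt. This uses the OpenAI Python client as one example vendor; the model name and the wording of the persona are illustrative assumptions, not a vetted fairness policy. Note that this reduces rather than eliminates bias, since the underlying training data is unchanged.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona; real fairness policies are far more extensive.
FAIRNESS_PERSONA = (
    "You are a careful, neutral assistant. Do not assume a person's race, "
    "gender, age, or nationality. If a question invites a stereotype, "
    "answer with evidence or say that you don't know."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any chat model works the same way
    messages=[
        {"role": "system", "content": FAIRNESS_PERSONA},
        {"role": "user", "content": "Describe a typical software engineer."},
    ],
)
print(response.choices[0].message.content)
```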

Chad Neale (17:01):
Based on your experience, what are the two or three biggest risks you see when it comes to AI development, for an organization that's looking at acquiring a business where this is a critical part of its business strategy?

Haytham Allos (17:25):
I think, Chad, we mentioned this earlier with information security. When information is leaked, for example, it can translate into incidents, lawsuits, huge risk. We still don't have the technologies or methodologies for securing information inside a model when using AI, and that's a big problem. There are no high-integrity methodologies for this right now, simply because we don't store information traditionally, as I said, like in databases. It's a neural net based on billions and billions of neurons interconnected with each other, and it's very difficult to control how data is going to behave at that level. Now imagine this: let's say a company has very sensitive information about customers, salaries, human resources, data that would traditionally be kept confidential, or it could be medical records, for example. And then on the other end you have, let's say, a marketing department that may not have as much sensitive information.

However, a lot of this data is going to be intermingled and correlated in this brain-like model, and there's going to be overlap in how the model responds to certain prompts. However much prompt engineering we do, if I'm doing a due diligence and there's AI technology involved and they're training on that data, I cannot say with high confidence or high integrity that there is not going to be an information security breach. The only real control is to assess the types of data they've actually used to train the model, or the data they feed it at prompt time, and remove the sensitive information before it ever reaches the model, so we can say: we know for a fact the knowledge base does not contain that information. That's the only way to do it. But a lot of companies are going to want that information in their knowledge base, and that's the problem: how are you going to guarantee it?
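One concrete form of that control, removing sensitive information before it ever reaches the model, is scrubbing documents prior to ingestion. A minimal sketch in Python follows; the regex patterns and placeholder format are illustrative only, and real pipelines use dedicated PII-detection tooling (NER models, format-aware validators) rather than a handful of regexes.

```python
import re

# Illustrative patterns only; production pipelines use dedicated
# PII-detection tooling, not a handful of regexes.
PATTERNS = {
    "EMAIL":  re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN":    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "SALARY": re.compile(r"\$\s?\d{1,3}(,\d{3})*(\.\d{2})?"),
}

def scrub(text: str) -> str:
    """Replace sensitive values with placeholders before the text ever
    reaches a model's training set or knowledge base."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

doc = "Contact jane.doe@example.com re: offer of $145,000. SSN 123-45-6789."
print(scrub(doc))
# Contact [EMAIL REDACTED] re: offer of [SALARY REDACTED]. SSN [SSN REDACTED].
```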

Chad Neale (19:46):
Great points. Listen, you've given us some strong things to think about, particularly for those listening who are looking at investing in these types of businesses. It's clear there are key areas of the technology stack you want to dive into during an acquisition: understanding where and how AI has been enabled, where and how data security has been applied throughout that stack, and making sure it's not marketing buzz but a true AI-enabled application with a team that can support it. Really great insights, Haytham. Very much appreciate your time. And with that, I want to thank everybody who has joined us on this podcast. We really appreciate your time and hope this information was helpful. Please reach out to me if you have any questions about these topics. If you'd like to learn more about our overall strategy in the Alliant M&A group, including our cybersecurity and technology advisory practice, please go to Alliant.com for more information.

Alliant note and disclaimer: This document is designed to provide general information and guidance. Please note that prior to implementation your legal counsel should review all details or policy information. Alliant Insurance Services does not provide legal advice or legal opinions. If a legal opinion is needed, please seek the services of your own legal advisor or ask Alliant Insurance Services for a referral. This document is provided on an “as is” basis without any warranty of any kind. Alliant Insurance Services disclaims any liability for any loss or damage from reliance on this document.