Julia Nimchinski [ 01:59:22 ] How are you doing? Oh man, this has been such a great event, and I am so happy to have this powerhouse panel together to close us out. We're going to be the best panel of the event, I think. Hard to disagree. All right, the pressure's on, y'all. So let's go ahead and get into it. This is our fundraising panel. As Julia said, I am Anne Hollander. I'm an advisor with The Strategic Edge, advising organizations on AI strategy, roadmap, and readiness. Daniel, let's start with you for an introduction. Sounds great. Hey, Anne. Hi, everyone. My name is Daniel Bowen. I am a principal at Software Equity Group. We're a sell-side advisory firm for software companies.
Daniel Bowen [ 02:00:11 ] I've been in software M&A for approaching 15 years now. I spent 10 years on the buy side, and I'm now a seller's rep. Fantastic! Michelle. Hey, everyone. Michelle Frye here. I'm a partner at Treehouse Studios. We are a venture studio that's focused on the sports industry. I've been there for a little while; before that I was on the consulting side, working with corporate venture. And before that, a long time as a corporate operator, and I did a few deals that Anne knows about. Yeah, very, very impressive on that corporate innovation side for sure, Michelle. Neil? My name is Neil Patel. I'm the co-founder of NP Digital, a global ad agency. I spend a lot of my time on M&A, buying companies and tucking them into our fold globally.
Neil Patel [ 02:01:08 ] Previously, in a few of my startups, I have raised money before as well. Awesome. Mark. Hi, I'm Mark Stouse. I'm the CEO of Proof Analytics, a top causal AI software product. I have done about 82, 83 M&As in my career, from various points of view. And I am part of a family office today that continues to do that. One of the speakers on the last panel was talking about the fact that he hoped 2025 would be the year of pragmatic AI, and I think that that's exactly what we're going to see. Fantastic. And Yao. Hi, my name is Yao. I'm managing partner of The Hatchery.
Yao Huang [ 02:02:12 ] I've been doing tech investments for almost 20 years. I was involved in AI when it was just machine learning and NLP algorithms, trying to figure things out. Excited to be here. Awesome. Great panel of folks, lots of different experiences from operations, corporate innovation, M&A, sell side, buy side, VC, PE, all the acronyms we can think of, to discuss this. So let's start big picture. In terms of how you are viewing the landscape today for AI: as I look at this right now, the last 20, 25 years or so have really been dominated by software as a service, or what we would call the SaaS model. And the one word that I would use to describe that model would be predictable, right?
Anne Hollander [ 02:03:02 ] We understand how to model that. We understand how to value it. We understand how to buy and sell it. We understand what this looks like from a fundraising standpoint. When I think about the AI age, I see the inverse of that: service as a software instead. How are you guys seeing this landscape today? Mark, let's start with you. I think you're actually very spot on there. Let me figure out a way to address this. I did a research note a couple of months ago with one of the foremost experts in AI, a person I can only hope to emulate someday, by the name of Bill Schmarzo. He's the chief evangelist at Dell, and he's very well known.
Mark Stouse [ 02:03:52 ] We were looking at the economic impact of gen AI on two vectors, efficiency and effectiveness. What we found was that while it absolutely delivered a certain level of efficiency gains across an organization, assuming the organization was enabled to use it, those were more or less one-time-only kinds of benefits. They did not compound, and they had a ceiling. However, when you approached it from effectiveness, which would include things like innovation, better strategies, whatever, you saw an explosive benefit that was uncapped and did compound, but that was overwhelmingly concentrated in high-performance people and teams. And then as you moved to the middle part of the bell curve, you got similar benefits, but reduced.
Mark Stouse [ 02:05:04 ] And then as you moved toward the other end, the not-so-great end, it was a classic situation of eight times zero equals zero. And so I think that what's really going to happen here is that AI, like a lot of things, is a multiplier of human capability and capacity. And it's going to do it in ways and degrees of effectiveness that SaaS never could. AI is almost like this middleware layer that exists between people and process, and it enables both ways. And so I do agree with you on that. I think it is very much a standing-SaaS-on-its-head kind of situation. Yeah. Just to add one thing onto what Mark's saying.
Michelle Frye [ 02:06:07 ] I think it's the same reason, really. I mean, SaaS was all about accelerating, all about being more efficient; the models had to change. Then with AI coming onto the scene, it's just another up-level of capability. I think that it has some risks. People aren't super sure how to necessarily harness it: what's the level of specialty and skill that you need to have with your employees, or with the service that you're buying? But I think at the core, it is absolutely all about efficiency. To elaborate on just one aspect of that, and I definitely agree with what Mark and Michelle are saying.
Daniel Bowen [ 02:07:03 ] I think if you look at SaaS and the way that it's grown and evolved, it ultimately ended up spinning up trillions of dollars' worth of industry on the services side in order to manage and oversee and sort of control the SaaS. And I think that's something people have mostly kept their heads in the sand about, because we want to hold these big enterprise software companies up as the gold standard for SaaS, when a large aspect of that has been shifted cost: it either took the people paying for the SaaS spinning up departments in their own organizations to support and underwrite the usage of it, or hiring a third-party consulting service to do it.
Daniel Bowen [ 02:07:52 ] And I think the way that AI is coming out, it's actually closing the gap between those SaaS products and the services industries that were set up to really make those SaaS products work. Yeah. And one quick thing: with Salesforce, we pay an arm and a leg to Salesforce, but we spend more money on Salesforce consultants and engineers to actually get the work done. And when you look at this new world that we're in because of AI, companies aren't really paying for software. They're not really paying for people. They're paying for problems to be solved. They don't care how it's being done. They just want the most efficient, accurate, and precise way to pay to solve a problem.
Neil Patel [ 02:08:35 ] You actually really want it done, because not everything actually solves it, even if it claims it does. And with AI, the world that's changing is that it doesn't matter if it's services or software: there's now, or eventually will be as the technology improves, a more efficient way to just get things done. And no one really cares if it's service or software; they just want it done fast, accurately, and cheaply. And this is also why you're seeing fundraising in AI boom, because it's just creating so much efficiency in the market, or in theory it will, as the technology rolls out and gets implemented. And one of the things I just would like to inject here, right?
Mark Stouse [ 02:09:13 ] Real fast, is that when we talk about AI in the current context, we really are talking about generative AI, but there are three other types that are honestly, over the next several years, going to dwarf generative AI in terms of their impact: analytical AI, causal AI, and ultimately, and I think this fourth one is a long shot, autonomous AI. And I think that we have to get back to certain core principles when we talk about efficiency: efficiency without proof of effectiveness first is meaningless. It's like you can be driving along and have your car totally tuned for optimum efficiency, and if you're going the wrong way, it doesn't matter. And so analytical and causal AI are going to be skewed more towards effectiveness.
Mark Stouse [ 02:10:24 ] Certainly there's an efficiency aspect to both, but it's going to be the truth teller in the mix on this. And I think that we have to be really clear: when we say AI, what do we really mean by that, and what's the time scale on this? Because without that, the investor side of the whole thing kind of seizes up. Absolutely. So let's talk about that investor side for a moment. Are we too early, just right, or too late in terms of investments in AI organizations? I would say it depends on what kind of AI investment. If you think about the gold rush, there's people who are going out there and trying to find gold.
Neil Patel [ 02:11:18 ] There's also the companies that are making the picks, the axes, and the shovels. There's different categories. If you're looking at AI, a lot of the companies that are building features on top of these AIs, I think a lot of them will go away, and it could be too early. But a lot of the ones that are building the platforms, whether it's Elon Musk's version, or OpenAI's ChatGPT, or Facebook's version, some of these models are here to stay; they're going to get bigger and bigger. Facebook's plan is to open-source everything and just have more people build on it. But a great example of something that was too early:
Neil Patel [ 02:11:56 ] There was a company called Jasper that raised money at a billion-plus-dollar valuation to help you create content. What ended up happening is you now have all these features for free, whether it's using X's version or Google's version or ChatGPT's version. So you don't need that. And I'm not saying that company will fail or not, but for the investors who put in a lot of the cash, it's going to be very hard to recuperate your money. And it's the same thing with some of the other technologies, like cryptocurrencies. If you're investing in the infrastructure, some of that is great and still going to be needed; but if you're trying to pick the winners so early on, on what's going to do well from a product perspective, things change as technology evolves, and it gets hard.
Daniel Bowen [ 02:12:44 ] That's absolutely right. Neil, I loved your analogy: you can go and start trying to find the gold in the gold rush, or you can just invest in Levi's, or the pickaxes, all the companies that supported that. I think it just depends on where you're looking for an investment opportunity and the way that companies can leverage AI. What I'm seeing in the market right now is a large influx of traditional software investors taking more serious looks at traditional services companies, especially consulting services companies. And I believe that part of that is from a lens of, okay, where can the efficiencies be driven here with AI?
Daniel Bowen [ 02:13:36 ] Because even going back to one of the prior conversations: ultimately, people don't care how the service is being performed. They just want it performed. So if you've got someone who is willing to pay you $10,000 a month for some form of services, they don't care if it's a human or a hamster on a wheel. If the result is right, it's right. Fantastic. So with this then, and we're starting to see this, Daniel, I know, also with Salesforce: we've launched a new AI tool, and we're going to hire a thousand reps to go out and sell this tool. While that may on its face look as though, well, wait a minute.
Michelle Frye [ 02:14:23 ] Why aren't you just using AI for this? I think ultimately it is exactly that AI play you're speaking about, where you're taking a very traditional role, grabbing all the data out of it, and replicating it and the processes as much as you can, not just using gen AI but other forms of AI as well, so that at some point in the future you have an AI agent who can go and do this start to finish. Yeah. If you train your own model enough and you own the model, then why not implement it? I think that anyone who has seen the demos of the Salesforce offering, it's pretty clear what's going on there, because it's pretty basic stuff.
Yao Huang [ 02:15:09 ] The general consensus view after seeing it with a lot of people was, really? But when you take that capability and you strap it onto a company like Salesforce, with Salesforce's market weight, it's going to kind of win in a sense anyway. And it's going to obliterate, just from a platform standpoint, a lot of independent players. A lot of the VCs that I talked to, that's one of the things they're worried about with regard to their own AI investments: that they maybe got in a little too early, a little too rich. And now you're going to see, I think, almost a half step of consolidation and filtration in the marketplace.
Anne Hollander [ 02:16:11 ] Because it's hard to fight city hall, in this case Salesforce, or something like Salesforce, if you are a startup. Definitely. So with this, then, what are the health metrics that we would look at with regard to an investment? How are we going to evaluate an AI organization, either an AI product or an organization that has injected AI into their stable of legacy products? I think the biggest one is, we just start with: does it really solve a valuable problem? AI is kind of a paradox, like a lot of things in life. It's a risk creator and it's a risk eliminator, depending on how you view it and how you use it.
Yao Huang [ 02:17:12 ] With causal AI, a lot of the modeling that we see customers doing, particularly as it applies to this conversation, is all about de-risking potential investment. The ease with which you can run counterfactual models, scenarios, blue-ocean work, all this kind of stuff today, versus just 18 months ago, is night and day. And so ultimately you can build a spaghetti model that totally informs a pro forma for a pre-rev company that doesn't have any real data to base anything on, because they haven't existed. And yet you can create a lot of very representative synthetic data to do that. So that's an example of how it bears against risk. And I do think that for 2025, the name is risk.
Daniel Bowen [ 02:18:18 ] That's in the hearts and minds of a lot of CEOs and CFOs, and so AI is going to be used in various ways to address it. That's what I'm seeing right now. We're involved with M&A where they're running analyses much like I just described, but using a mix of synthetic and natural data, to say, "Hey, is this likely to pay off?" Because a lot of them, a lot of mergers in particular, don't really pay off. So it's really adjusting for risk on the whole deal. I think particularly in B2B, everything is a risk-adjusted decision these days. And so if you can help them do that, you're going to win. If you can't, you're probably not.
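[Editor's note] The "will this deal pay off" analysis described above, run over a mix of synthetic and natural data, can be made concrete with a toy Monte Carlo sketch. This is purely illustrative and not any panelist's actual tooling; every number here (growth assumptions, exit multiple, purchase price) is a hypothetical placeholder.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_deal_outcomes(n_scenarios=10_000,
                           base_revenue=1.0,    # hypothetical year-1 revenue ($M)
                           growth_mean=0.40,    # assumed mean annual growth
                           growth_sd=0.25,      # uncertainty around that growth
                           years=5,
                           exit_multiple=6.0,   # assumed revenue multiple at exit
                           deal_cost=15.0):     # hypothetical purchase price ($M)
    """Draw synthetic growth paths and estimate how often the deal
    'pays off', i.e. the exit value exceeds what was paid."""
    growth = rng.normal(growth_mean, growth_sd, size=(n_scenarios, years))
    revenue = base_revenue * np.prod(1 + growth, axis=1)  # terminal revenue per path
    exit_value = revenue * exit_multiple
    return (exit_value > deal_cost).mean(), np.median(exit_value)

p_payoff, median_exit = simulate_deal_outcomes()
print(f"P(deal pays off) ~ {p_payoff:.2f}, median exit ~ ${median_exit:.1f}M")
```

In a real diligence setting, the normal growth draws would be replaced by paths fitted to comparable companies or generated from a causal model; the Monte Carlo wrapper stays the same.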
Anne Hollander [ 02:19:16 ] Neil, you're currently running a services-oriented business, right? You have a great marketing platform that you created, and you're providing marketing services out to others. How are you seeing this transformation play out? Yeah. So we're still small in size, but we have over a thousand people, and at our scale, one would think that you can just leverage this technology, make tons and tons more people efficient, and replace them. Where it's at right now, in today's world, we can take some of the mundane and repetitive tasks and get them replaced, which is great. We're struggling to replace creativity. And I'm not talking about creativity like what you see in gen AI, where you have it just create something.
Neil Patel [ 02:20:05 ] I'm talking about creativity where someone comes up with a unique strategy, not where AI is analyzing all the strategies of the past and what people are doing and coming up with something similar. I'm talking about something out of the box that is actually working. And on the flip side, we're seeing a lot of corporations implement AI. So for example, Google has something called Performance Max, or PMax. It's their version of AI to optimize ads. It's a big black box. You can't have AI manage the AI; that hasn't been working. And you can't have Google's Performance Max just run on its own, because is the AI optimizing for Google, or the customer, or both? And the general consensus is, hey, it's good in some areas.
Neil Patel [ 02:20:53 ] It reduces ROI in other areas because it's placing your ads in areas that you may not want. And even when it increases your performance and reduces your cost to acquire a customer, you still need human intervention to do what's best for the business. And the reason I bring this up is, A, I think AI will definitely impact services; how, I think, varies a lot per service provider and how good the technology gets over time, and it's definitely going to improve. But, B, the other part is I still think in the future you're going to have people double-checking, because some of the AI platforms benefit themselves, not the company using them, in certain ways. They benefit the party trying to make money, for example some of these ad management platforms. That may not seem like a big deal.
Neil Patel [ 02:21:36 ] But when you're talking about digital ad spend of over $600 billion on an annual basis, why wouldn't a platform like Google or Facebook? And I don't blame them for this. Yes, they need to do what's good for users and help them achieve their goals, but they're of course going to try to maximize their revenue and profit. And one thing for everyone to keep in mind is we're in a bad economy, whatever anyone wants to call it, a recession, it doesn't matter. It is a bad economy. And if you look at marketing spend in general, a lot of it has gone down, but yet the platforms are making money, and their AI is helping them.
Neil Patel [ 02:22:14 ] It's not like there's tons more companies trying to go and spend new money and invest a ton in this market. Look at venture dollars: a lot of it's drying up. It's not that these funds don't have it; they just don't want to deploy as much right now. Same with family offices, especially when you're going to get amazing returns just in Treasuries. They're more risk-averse. So how is it that these platforms are showing great growth in their ad ecosystems? Well, a lot of it is the AI systems optimizing for their revenue versus the companies'. Yeah. Interesting take there. All right. So as we then begin to think about AI in our own work: we are in the middle of M&A, we're doing buy side, sell side.
Neil Patel [ 02:22:59 ] We are thinking about innovation. Virtual deal rooms are an area where we're all trading documents back and forth, beginning to do some due diligence, getting to know each other's organizations. How have you seen AI either transform this process or deal-flow analysis today? Or are you hopeful that this may come to fruition in the future? So speaking as a partner in a family office, we've deployed a bot into those kinds of situations that essentially says: present this kind of analysis, these kinds of data points, this information. And we're after a better understanding of the leverage that AI presents to customers of company X, not company X itself. First and foremost, we want to see what it does for customers. And is that magnitude significant enough to drive upside for the potential invested company?
Mark Stouse [ 02:24:15 ] And so we are using it similarly to the way more and more companies, though still not a lot of companies, are using buyer bots. We're aggregating marketing and sales activity into a bot, where a lot of filtration happens within the bot, and it ultimately gets to a person, a human team, making the final decision. But there's a lot of stuff that happens before that. We're using the bot in the same way on the investor side, to really help us get to a better understanding of what's going on. So what we're really trying to do is ask questions. We're essentially prompting the prospect for information that they can't necessarily connect the dots on, so it helps bear against a biased response,
Mark Stouse [ 02:25:16 ] and at the same time, we can interpret it and say, okay, this is the amount of leverage that we're going to see out of this. And then you're bringing that data into a model of sorts? Oh yeah, very much so. Okay, tell me what that looks like. I would characterize it as significantly analytical and causal, but there's also just an ongoing back stream of gen AI information, largely for context, against what we're being supplied by company X. The cool thing about gen AI, when you use it in that way, is that it can really give you a substantial amount of ongoing, virtually perpetual contextual data. We're sucking all of this into our causal model, and we're basically saying, okay, as this evolves, as things change, headwinds, tailwinds, all this kind of stuff, what is that going to mean for this?
Mark Stouse [ 02:26:32 ] And so it's constantly wargaming the information that we receive from the company in a much broader context. And we also share it back with that company, so they can benefit from it later. Either way, whether we invest or don't invest, we're going to share it. Sorry, Mark, I had a question for you, and anybody else on the panel. I know I'm not supposed to be asking the questions, but I do have one, because I haven't done this myself and I'm curious. Has anybody done kind of a retro on previous deals that you either did or passed on, comparing the old way of doing it, meaning sans AI, with your current approach, including AI? Were they aligned or not?
Mark Stouse [ 02:27:29 ] Well, in our particular case, they largely were, because the basic math for causal AI, computationally speaking, is essentially a really souped-up version of multivariable regression. So we were doing MVR on investments, I don't know, five years ago, six years ago, both ways, in an attempt to create causal patterns that we could then follow for future opportunities. I would say that in general, there is tight alignment between the really super basic MVR in an Excel file and a super deluxe causal AI kind of output. It's very, very similar.
Mark Stouse [ 02:28:31 ] I would say that the best part about causal AI is that it's contextually more robust, and easier to make more robust, than an old-style model. Does that help? It does. I mean, I think we're pretty good, but I think there's sort of a level-of-awareness problem for folks in the deal room sometimes, and the insertion of AI capabilities could be really beneficial to uncover those blind spots, or add a level of rigor to a pre-existing model and approach. So my curiosity was plainly whether or not we were just as good, or if the model is better, or where the benefits of it are.
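[Editor's note] The "MVR in an Excel file" baseline described above is just ordinary least squares over several deal features. A minimal sketch on made-up data, where the feature names and "true" weights are hypothetical, shows the kind of fit being compared against:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical deal features, e.g. growth rate, gross margin, churn.
X = rng.normal(size=(200, 3))
true_beta = np.array([0.8, 0.5, -0.6])               # assumed underlying weights
y = X @ true_beta + rng.normal(scale=0.3, size=200)  # noisy observed outcomes

# Ordinary least squares: the multivariable regression (MVR) baseline.
X1 = np.column_stack([np.ones(len(X)), X])           # add an intercept column
beta_hat, *_ = np.linalg.lstsq(X1, y, rcond=None)

print("estimated intercept and weights:", np.round(beta_hat, 2))
```

A causal model layers structure on top of this (which variables are allowed to drive which, plus external context), which is what makes it "contextually more robust" than the raw regression; the arithmetic underneath is of the same family.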
Michelle Frye [ 02:29:29 ] And I think part of it is, well, the validation may be a velocity thing. So effectiveness, making sure that I have lower risk, all the other things where we see the value of this. Well, I think it also comes down to another aspect of this: in a VC situation, they would be thought of as subscribers, and within a family office, it's obviously whoever it is. But everybody at the end of the day is an individual investor, and investors are starting to really focus on the fact that if we look at, say, 2008 to the present, certainly the studies vary slightly, but only slightly: 90, 91, 92% of startups during that period of time failed, right?
Mark Stouse [ 02:30:23 ] Does that mean that the funds failed? Not necessarily. But it illustrates the high-stakes-poker aspect of a lot of these investments, if you are a subscriber, or if you are one of the family members who wants to make sure you're not going to run out of money in 20 years. So it's really interesting from the standpoint that, at the end of the day, AI is a Socratic tool. It's largely dependent on the quality of your questions. And so we really work our asses off to ask better and better and better questions, knowing that we will never get there. We will never achieve perfection. It's very much like the scientific method in that respect.
Mark Stouse [ 02:31:29 ] You kind of have to use your knowledge to curate your ignorance. Daniel, I think you had a comment to add in here as well. I was going to agree, at least with the path that Mark was going down with AI and how you look at things. It's trust but verify. So literally everything right now, it doesn't really matter what you're doing, in a VDR or not, it's trust but verify, 100%. And with that verification, are you using humans then to go and verify? Pretty much, yeah. I mean, things have to pass the smell test. And there are varying degrees of needing to go and prove something out or not.
Daniel Bowen [ 02:32:23 ] I mean, if you're in an Excel document, you can go and follow the links and make sure it's done right, just the way I would if an analyst or someone who wanted me to look over something handed it to me. It's the same level of vetting that I would want to put something through before sending it to a client or representing it as something I stood by. Is there an inflection point for you where the AI is good enough to not have to do that anymore? Or are you always going to do it? What is the Supreme Court definition of pornography? I'll know it when I see it. I think it'll be around that point in time.
Daniel Bowen [ 02:33:09 ] I'll know it when I see it. Sounds like a good start. I think the issues we're talking about here are only going to get more challenging. Consider quantum computing, which is not verifiable because, aside from comparing it with real life, no one can repeat the calculation the same way twice in a row. It just doesn't happen. So what that's going to do to the mentality of the scientific method is going to be really interesting. And a question for Yao. As you are running a venture collaboration studio, and in your investments in the tech space over the last 20 years, how are you advising or working with organizations that are incorporating AI? Has that changed at all for you?
Yao Huang [ 02:34:02 ] So for established companies that are doing business, we're dropping it in everywhere, from blockchain topped with AI and automation all the way down. Every process from content to engineering to service, everything you can touch, we can replace with either AI or robotics, and we're doing it right now. On the other hand, with the startups, I'm seeing amazing companies that would have been fantastic five years ago that are not that impressive now, because at the speed the big four are moving, you're competing with them. So at some point, even if you invest in a company, you're going to look for an exit. It's either going to be M&A or IPO. I don't think any of these small AI companies can last all the way to IPO. I don't think so.
Yao Huang [ 02:34:53 ] The obstacles in their way will get them murdered, for sure, at the speed we're going. Before, we had time: a couple of years, five years, maybe even 10 years for change to happen. Change is happening in months now, and they don't have the resources to move fast enough. And then there's the cost of GPUs and the fight for GPUs. I don't think these little companies can keep up. So then what is the exit? Do you have something good enough, or do you actually have the brains, the talent, that's good enough to get acquired into one of the big four? If not, I don't know; maybe you just run a good business that you can iterate well enough to make a profit.
Yao Huang [ 02:35:35 ] That's fine, but that's not a venture-backed business. So then you're stuck between these problems. Some of the more interesting companies I'm seeing in AI are coming out of Asia, but I don't think the Americans know how to invest into a Chinese-powered AI sitting on top of their e-commerce infrastructure, their social infrastructure. They're years ahead of what I'm seeing here, and they're coming here to get money, but I don't think it's going to happen. Also, who's going to pay for the services they're trying to sell? There's a variety of them, and I don't think they're good enough at selling into the U.S. market. So then the question is, can they be their own service?
Yao Huang [ 02:36:27 ] Let's say it's an e-commerce play or a social media play replacing humans. If they're going to be that service now, that agency, that offering, it's then a sales play, but they're primarily engineering, right? To get to where they are at present, they were mostly engineering. They figured out how to build the product. But can you sell this? I don't think they have that muscle, not from the teams I've seen. And now you have six months to get there, where before you maybe had three years. Can you get there before the whole world, the whole ground, shifts again? If not, you're going to die in six months, right? That's how the first wave of AI companies got gobbled up. ChatGPT just rolled out all those products.
Yao Huang [ 02:37:15 ] So I don't need you anymore; it's free, right? Or, as a large entity, you can deploy it with ChatGPT's service offering, which is more reliable than a startup that just got started a year ago. So that's the problem right now. I'm not sure how you can fund AI companies. I don't know where the end is, because I think it sits with the big four. Gotcha. Any contrarian thoughts on that? Daniel, I saw you shaking your head in agreement. No, I have no contrarian thoughts on that. My background and experience is much more around how more traditional, non-AI-focused businesses are leveraging AI to improve efficiencies and profitability and make those really nice, very profitable, growing businesses.
Daniel Bowen [ 02:38:15 ] Companies that maybe aren’t VC-backable, but can sit and grow well and become an interesting option for private equity or even a family office. The other thing that doesn’t get nearly enough attention in this is the saturation problem with people, with employees, right? We are already running into this problem a lot: the amount of tech, and the speed with which the tech, AI or not, is being advanced by companies, by their employers, is overwhelming them. A lot of CEOs in particular are very worried about this. They’re worried about long-term competitiveness. Can their employee base keep up? It’s sort of like when you have had a lot of rain.
Mark Stouse [ 02:39:16 ] On the ground, right? The ground has soaked it all up, and then you get hit again with more and it sheets off, and you get flash flooding. That’s similar to a lot of what’s going on right now. So I agree completely that we’re seeing a lot of companies just gobble it up and put it everywhere. But it is running into implementation issues for a variety of reasons that do not have easy fixes. Neil, do you see this also playing out? I do. We’ll see what ends up happening in the next few years, but I think they’re all spot on. So, coming back around, we had a question from our audience.
Anne Hollander [ 02:40:15 ] What’s the role of synthetic audiences? This might mean synthetic data in modeling deal outcomes, probabilistically and deterministically. So is anybody using synthetic data or synthetic audiences as they are modeling out deal outcomes? Mark, I’m sure you are. Synthetic data, definitely. Synthetic audiences, I think, is a rat’s nest. It relies almost entirely on personas, which have real issues attached to that methodology. And then, within traditional uses of machine learning, you have really substantial tendencies to regress to the mean. That’s exactly what we’re seeing with people who are trying to do this with synthetic audiences around market research. The results are heavily regressed.
Mark Stouse [ 02:41:20 ] So no, I don’t think that is working right now. As for the last part of the question, probabilistic versus deterministic: the newsflash here is that unless you’re talking about certain natural laws, like gravity, nothing is deterministic. Nothing works like a vending machine. You can’t create a go-to-market model that works like a vending machine. It’s totally probabilistic. Everything has odds attached, like a spaghetti model, right? So I think that’s really where you have to take it. But I do see significant challenges with synthetic audiences. Anyone else want to weigh in there? All right. I would be remiss, as we come to the last 15 minutes, not to discuss some of the ethical or risk-related concerns with AI, whether evaluating companies who are leveraging AI or bringing AI into your own workflows and processes. In your mind, what kind of ethical concerns do we see with AI?
Daniel Bowen [ 02:42:48 ] From an ethical perspective, the glaring thing that I find really interesting and have started monitoring more is the actual GPU and energy usage of all of it, what even just a standard ChatGPT query runs on that front. I’m definitely not qualified to talk about it beyond bringing it up, but I think that’s one of the bigger issues in my mind, both a limiting factor and an ethical one. Yeah. Michelle, go ahead. I’m totally in the same boat as Daniel: the climate impact pieces are massive. I don’t even know if they’re as well understood at the public level as we would all hope.
Michelle Frye [ 02:43:44 ] I’m also not an expert on this, but certainly what I read about and hear about is concerning. So I think there needs to be a level of accountability to balance that out, because it can certainly become an ethical problem. So, I’m involved in this space. One, every one of the big four is getting their own nuclear plants. Also, the movement in energy is toward more localized nuclear power: instead of the big plants you see, there are more compact ones for your own little building or facility. On my climate investment side, we’re working with server farms to neutralize their carbon emissions. And I think that continues over time, because the amount of energy they need is double the amount we have now.
Yao Huang [ 02:44:35 ] So everything we’re using now, again, is what AI needs. And so I think the tech companies are handling it on their own, because you can wipe out an entire city’s energy use for your one business. They know that’s not right. So they’re getting their own source, just to be reliable, even. That’s coming. The problem is, it doesn’t even matter what we think is ethical or not. The genie’s out of the bottle. It’s moving forward. And there are four people who make these decisions now, the small groups that decide. So if you have influence to their ears, great. If not, we’re just going to see what happens. All the conversation is almost irrelevant.
Yao Huang [ 02:45:18 ] You know how people talk about gender bias and other kinds of bias in it? It’s already built in. If you weren’t there when it was built, it’s too late. You can talk about it, but you’re not going to implement it. It’s moving at a pace faster than people even know what to do with, and the creators don’t even know. So we’ll just see what happens, I guess. Is that pace sustainable, or do we hit these limiting factors? It’s happening. What I mean by pace is that this will be a sentient entity at some point, smarter than us, doing all of our jobs. It’s moving in that direction. It’s moving faster in Asia. And we will just see, because if Japan and China have something and the U.S.
Yao Huang [ 02:46:03 ] doesn’t, then you’re just riding a horse when they’re using airplanes. You can keep using a horse, but you’ll eventually lose out competitively. So we can talk all we want, but we’ll just have to see what happens. And a lot of the ethics, you know, it’s debatable what people see as right or wrong. At the end of the day, people tend to just follow the law, and this stuff is moving way faster than regulators can keep up with. I don’t see laws adapting or adjusting anytime soon. They’re trying, but it’s an archaic system dealing with new technology.
Neil Patel [ 02:46:46 ] And by the time they can all get together and agree on something, the technology has already changed and adapted into something else, and then they’ve got a whole new set of things to deal with. This one’s too fast for them. We’re all screwed. Haven’t they all been too fast for them? Right, it seems like an old problem. This one, there’s no time. Before, they had time; the last tech age was like 30 years. That’s a lot of time to get to, “Oh yeah, the internet’s important.” Back in the early days, I sat in meetings where the chairman would say, “I don’t know why the internet’s useful. I don’t know why this is something we need to talk about.” That came out of someone’s mouth in a meeting.
Yao Huang [ 02:47:21 ] I’m like, well, then this meeting’s over. Right. We must’ve all been in meetings like that. The quote from mine was, “The internet is a fad,” or something like that, early days. Right. But you had 30 years to get used to it and change your mind. Now you have months. What are they saying, by next year or the year after, we’ll be there? But just think about how long laws take. It’s just like in the United States: they don’t like monopolies, and by the time the laws got to Facebook, it was too late. This is outside of laws, because think about it. Let’s just say America says, “Hey, you can’t do X, Y, Z.” You don’t think the big four have offices somewhere else? They’ll just power their engines over there. Yeah.
Neil Patel [ 02:48:01 ] But they still have to deal with politicians globally and keep them happy, which is why I also think all of them were very neutral. Oh, they’re all bending over for all that money, opening a server farm somewhere. Someone’s going to be okay with it, and then they’ll have the advantage. You want to ride your horse? We’ll have this. And by the way, China and Japan are way ahead of us there. Their present is our future, you know? So if you want to compete, you’re going to need the same tools. If you use a hundred people to write that one thing when they could do it in 10 minutes with a robot, the cost won’t justify it. So, I
Mark Stouse [ 02:48:37 ] think I’d like to take a different stab at this, from the standpoint not of what is ethical or moral or legal, but of what is pragmatic. The biggest problem that I see with AI is that it’s bias on steroids. And the problem with that is that, like all technology, it’s an amoral thing; what we do with it makes it good, bad, right, or wrong. It’s also what makes it successful or unsuccessful. And if you have the wrong kinds of bias in a calculation at the wrong time in the wrong situation, it won’t matter: it will be incorrect. I use that word as opposed to “wrong,” which has some moral freight attached. “Incorrect” is just factually wrong, right?
Mark Stouse [ 02:49:38 ] So I think that is actually the biggest issue, particularly when we put quantum computing on top of it. You’re just going to be in a situation where there is no way to check; you have to assume, you have to accept. And critical thinking, which, as was said earlier, AI can serve as a Socratic tool for, starts to suffer. The energy issue is definitely a real one; setting aside the ethics of it, it’s a big issue. I mean, if you had to decide between really mitigating climate change in a significant way or implementing AI more robustly, which way would you go?
Mark Stouse [ 02:50:32 ] One of the things I say all the time, legally speaking, is that AI helps you do things, and those things are either already illegal or they’re legal, right? The fact that you happen to do them with AI as opposed to, I don’t know, a gun, is not really the point. So we’re going to see this, I think, in 2025, with a number of cases going up through the world court and a bunch of other courts around property ownership, something very near and dear to everybody on this call as investors. So if all of a sudden we say that if you use AI to infringe, somehow that’s okay or we can’t do anything about it, but any other way,
Mark Stouse [ 02:51:26 ] We’re going to slap your hand. I don’t think that’s sustainable, so this is this is as always as much about what kind of society do we want to have globally as much as anything else. The one cool thing about AI is I think it brings it to a head real fast relatively speaking, so Michelle. And Daniel, I think you guys had some comments to add; I was going to chime in just a little bit on what Mark was talking about on the the property ownership or intellectual property ownership and things like that. You know, Heart back on the conversation of AI really disrupting more standard maybe services businesses defined efficiencies and things like that. Um, and getting getting back to talking about deals, you know?
Daniel Bowen [ 02:52:20 ] One of the things that I’ve actually seen come up is buyers not fully knowing how to assess the risk of a company rolling out proprietary AI that has been trained using their customers’ data. I have seen that throw up roadblocks; in parts we’ve been able to get around them and figure out ways through, but I have seen it pop up in assessing companies that are rapidly rolling out AI with their own proprietary models. It’s their own proprietary model, but who owns the IP that trained that model? One of the really interesting parallels to that is what’s happening in the music industry. The music industry has been very worried, or the R
Neil Patel [ 02:53:21 ] C. I. A. or RCA, whatever the recording industry uh uh has been really worried about AI music coming in because effectively Any AI music has been trained on music we’re listening to over the radio, um, and so some companies have come out and said, ‘Hey, why don’t we just partner with some of these big labels and have the exclusive rights for their music to train our AI music and we’ll just pay them?’ Meanwhile, anyone else who’s creating AI music is kind of screwed. So I think that that concept that Mark talked about in IP ownership, um, you know if you’re looking at contracts with customers or things like that if you’re building your own proprietary AI models to find efficiencies in your business, that is something to really consult an attorney on, uh.
Neil Patel [ 02:54:12 ] as to making sure that you have the rights to do that without a doubt michelle so my comment was uh back a couple minutes um but i think i think we’re probably going to be not making a lot of progress if we keep calling if we keep using the label ethics around ai because i think that that is such a subjective interpretation of what’s ethical and we’ll probably not get a level of alignment and therefore we probably need to rebrand that whole thing to around something that people can agree on which is hey we’re going to be using a label that’s ethical and we’re going to be using a let’s keep this safe like let’s keep this in service To some, some real problems out there, right that I think that everybody sees that there’s promise here with this amazing technology.
Neil Patel [ 02:55:15 ] But how do you, how do you do that? Because I think if we’re calling it ethics, we’re not going to get anywhere. You have a recommendation; you wouldn’t need laws. That’s kind of it. Can you say it one more time? Sorry, did you have a recommendation on what to call instead? Sorry, I couldn’t hear what Mark was saying. Sorry if morals I was just agreeing with you, the morals and ethics were enough. Laws wouldn’t be necessary, right? Yeah, so you have to agree; yeah, yeah, I agree. Michelle, do you have a recommendation? On what to call this instead if not ethical right, or safe I think are also uh deeply subjective. Do you have a more objective term that comes to mind?
Michelle Frye [ 02:56:02 ] I don’t, I don’t have an answer but I think that you know people really resist some of those things um but I mean I think you know we could certainly make more progress around safety great all right uh quick question from our audience uh clearly a lot of pitfalls in M&A what is one or a top issue that AI has been beneficial for you in helping you solve? Where’s AI helped, de-risking research, research definitely, research um and honestly little tasks yeah such as such. As you know, edit this paragraph or hey I have someone sent me a PDF of Excel financials or of financials can you put them into Excel form? Or you know very very little random things that are not in the Excel form or you know very very little random things that are that you know.
Michelle Frye [ 02:57:05 ] I harpen it back to like when you used to be at a at a conference or seeing a PowerPoint presentation that someone would give and you’d be writing down notes and then all of a sudden you learned that the pro move was oh I can just take a picture of this slide because I have a camera on my phone um it’s it’s I use it in that way. And I saw a product not too long ago that’s Pre-release, it hasn’t gone out yet, but uh, it’s effective for me and I think it’s going to be effective for me, and I think it’s going to be a DD tool right? And so it just consumes all the information, pattern matches it-says this doesn’t agree with this, this doesn’t agree with this. Right?
Michelle Frye [ 02:57:45 ] Looks at it against the backdrop of larger amounts of market data, contextual data right, runs a comparison and that all happens in less than 90 seconds, so I mean, that’s uh, I mean if you’re if you’re a professional due diligence provider right, I would be really concerned. Alright, last question for our panel, Julia, I know I’m sure that we could Go for at least another hour, maybe two, on a number of these topics. Um, but as you look at the next five years, uh, for fundraising, uh, in the AI age. Um, how do you summarize this? What’s either your your word or your short phrase for what you see
Mark Stouse [ 02:58:39 ] next five years, honestly? I don’t know. I think the ones that are going to do well are the platforms, the ones people believe are the stickiest and have a shot at lasting, at not being crushed by Google, Amazon, Facebook, Microsoft. I think the biggest issue really is still people, and if AI creates a situation, or a set of situations, that disenfranchises too many people, accentuates the gap between the haves and the have-nots even more, et cetera, you’re going to have social outcomes that nobody wants to have. Managing that side of the equation is not where most people are looking right now, but they better start.
Daniel Bowen [ 02:59:45 ] daniel or michelle i don’t know trust but verify i i like what mark was saying i i think that that’s a it’s a really important point people make the world go around um yeah and i would i would say that the best way to do that is to do it in a way that’s not going to to not be sort of crushed by one of the big four um that people are talking About is that there’s a lot of value and a lot of opportunity in the niches, um, so if you can find or create an AI model that you can apply very specifically to a vertical or an industry, um, that really helps solve um, a pain or a problem or find that level of efficiency, um, you’ve still got room to run, uh, in in these niche industries and so that would be what I would say: go for the niche margin versus the large tam, yeah, awesome!
Julia Nimchinski [ 03:00:46 ] All right, Julia, let me turn it back to you. What an incredible panel. I loved being a fly on the wall; I made a lot of notes. Thank you so much, and please, everyone, follow these amazing thought leaders. Have a great day. Thank you for having me. Thank you. We are transitioning to the practical side of things with the summit, and it’s my pleasure to welcome Helen. How are you doing? Thanks for having me. Good, how are you, and how is everyone? Our pleasure. Maybe some introductions? I know we have like 15 minutes, so there’s really not much time. We’re a data analytics company, and we do employ AI, but more classic machine learning, classic AI models, not as much LLMs, for practical purposes, as we were talking about go-to-market, whether it’s marketing, sales, or retention.
- Opening Remarks and Introductions
- Panelist Backgrounds
- The AI Landscape: Service as Software vs. SaaS
- Efficiency vs. Effectiveness in AI
- AI's Role in Solving Business Problems
- Investing in AI: Timing and Strategy
- Applications of AI in M&A and Deal Flow
- Ethical and Risk Concerns in AI Adoption
- AI and Intellectual Property Challenges
- Future of Fundraising in the AI Age