
    Rewatch

    AI Summit

    Missed the Summit? No worries! The recordings are now available on-demand.







    Text transcript

    Plan Forward: CXO Priorities for 2025

    Executive Roundtable held on November 12, 2024
    Disclaimer: This transcript was created using AI
    • Robin Daniels [ 01:00:11 ] But in this one, they are, and now they're positioning themselves as the underdog against the big bad Microsoft. It's a smart move, because everybody loves rooting for an underdog, and they love to see that clash of titans. So that's going to happen next. Classic smart move. And they have to do it or they'll be replaced. I mean, I think this coming wave could make Salesforce irrelevant if they don't adapt. Absolutely. Yeah. Awesome. Thank you, everyone. We are now going to transition to the next panel, CXO Priorities for 2025, led by Mark Organ, founder of Eloqua, Influitive, and Categorynauts. Randy, could you please mute yourself? I will. Sorry. Mark, how are you doing? I'm doing great. And I am very fired up for this panel.

    • Mark Organ [ 01:01:02 ] * As you know, I've been a category developer and a keen watcher of how new categories get adopted because of disruptive technology. And I grew up in the age of multi-tenant cloud, which is no longer that exciting. I think AI has replaced that for VCs; it's really all they're investing in these days. And so it's exciting that AI has the same potential to create entirely new categories of software. So that's my interest in being involved here. Of course, I've been involved in the martech SaaS category, and I've seen that develop and now, you know, become mature. So anyway, that's my keen interest in it. Our team is constantly doing a lot of research on AI adoption, because we've been working with a Japanese software manufacturer, and that research really does lead the way in terms of seeing what companies are adopting, why they are adopting it, what challenges they're having, and how they are getting over those challenges.

      Mark Organ [ 01:01:58 ] * So I can't wait to learn from my co-panelists today in terms of where they are placing bets and where they're seeing their customers place bets. Amazing. Let's have a quick round of introductions. Caitlin, let's start with you. Well, hi, everyone. I'm Caitlin Clark-Zigmond, marketing executive, most recently with Intel, leading software marketing globally around trust, AI, and performance optimization. Always great featuring you. Alexa? Hi, everyone. So great to be here. I'm Alexa. I'm the CEO and co-founder of Pocus. We're a sales AI platform, primarily helping with account research and prospecting. So excited to be here. Great to have you here. Anastasia. Hi, everyone. Anastasia Pavlova. I'm founder and CMO at Bold GTM, a revenue and go-to-market consultancy. Prior to that, I was in cybersecurity at OneLogin, and at Marketo.

      Anastasia Pavlova [ 01:03:06 ] * Mark, I've used Eloqua, and I used your Influitive platform many times. So if we have time to dive into martech evolution, that would be very interesting. So really happy to be here. Definitely will. Dale. Dale Harrison. I work as a strategic marketing consultant, largely in the biotech area but increasingly in the IT field, and I've been working in SaaS and other technology areas as well. Great having you here. And last but not least, Randy. Hi, everybody. Randy Wootton. CEO of a company called Maxio, which does billing, invoicing, and RevRec for B2B SaaS companies. It's my third gig as CEO. The first one, though, back in 2015, was a first-gen AI company called Rocket Fuel, where we had our own servers.

      Randy Wootton [ 01:03:59 ] * We were cranking our own models. It was a whole different world back then. So we've been watching the evolution of AI and have just been totally blown away by how much traction it's gotten over the last two years, and by everybody thinking about the different use cases. So excited to explore more today. Great having you here. Mark, it's your stage, your hour. Take it away. Let's rock and roll here. Before we get into what technologies we're adopting and how we're making our teams more competitive and whatnot, I'll just take a step back. As all of you have been around for a while and have seen technologies evolve, how do you think about AI as a technology for business?

    • Mark Organ [ 01:04:46 ] * Is this kind of like the way the web or the way that mobile was adopted, or is this something different? And I think that might set the stage for where and why we're placing the bets that we are. I don't know if it's going to be all the way. If you think about where we've come from, things like, from a networking perspective, you talked about mobile. Well, 4G took almost 10 years to fully adopt. 5G is definitely going faster, but it's still a length of time. And I think there's going to be this kind of slow progression, and then there's going to be that sort of rocket. And I think we had the advertising of the rocket ship of AI in 2023 and 2024.

      Caitlin Clark-Zigmond [ 01:05:30 ] * But I think it really is going to come a little bit later once, from previous conversations, businesses start really unpacking the data sets they have within their own companies and then start leveraging these tools, as well as the tools starting to integrate AI into them and actually making it easier to get value out of the different work that each tool does, et cetera. So I think it's going to have a little bit of that slow kind of rollout, and then that increase, just like we saw with the internet, for those of us who watched that happen. Right. Any other thoughts on that? I think there are multiple camps. There are certainly skeptics that have seen it all and been through previous periods of disruption.

      Anastasia Pavlova [ 01:06:29 ] * And there's definitely a lot of hype. And the skeptics think that this is just another one of those bubbles with little substance. And I think there are also plenty of younger companies which are AI-first companies. And that is very interesting to watch, because when young founders have an opportunity to build a company on AI, delivering AI services, and really taking advantage of all the opportunities AI can offer, with insane valuations for some of them, it certainly brings a different perspective. I work with both camps, and it's interesting to watch, but I'm fully supportive of the AI-first companies and newcomers to the space. For sure. Yeah. And both camps may be right. I mean, I remember when

      Mark Organ [ 01:07:34 ] * Amazon.com went down 97% in terms of market value, and Cisco, which was the darling of that age, has still not returned to its former value. That said, the people who said the internet changes everything were right. It did change everything. So there is a lot of hype, and there are a lot of companies that are not going to make it. And I think you're right about the AI-first companies being interesting. I think those are the ones that have a chance for real value to be made. And that might be the thing that's interesting to get into later: what is overhyped, and where is the real staying power? Where is the real, lasting value going to be, particularly with enterprise adoption? I think it's interesting. So yeah, sort of a dissenting opinion here.

    • Dale W. Harrison [ 01:08:23 ] I think the question is making an irrelevant comparison. Tell us. Everything we've talked about, the internet, the web, mobile, 5G, Cisco, Amazon, these are fundamentally distribution technologies. And AI is not a distribution technology. The way that it integrates into companies and businesses and operations is going to function in a deeply, fundamentally different way, with very different adoption dynamics than what you get from a 10X, 100X, 1,000X improvement in distribution. Companies are always able to absorb better distribution in ways that are not disruptive to the organization. If we go back 60 years to the invention of containerized cargo, which is a very old-school technology, it radically altered how manufacturing is done and your ability to ship across the oceans.

      Dale W. Harrison [ 01:09:30 ] But at the end of the day, the manufacturers were still building things with parts. It's just that the parts came from further and further away. And so it was not fundamentally disruptive to the organizational structure of the company. And I think the challenge with AI is that you're asking it to do something that is really extremely disruptive organizationally, but also something that is not well thought through, because, again, I think a lot of people are making these kinds of false comparisons with what are essentially distribution technologies, as opposed to the kind of decision-engine technology that comes with AI. And to give a little bit of perspective, early in my career, I was an AI developer.

      Dale W. Harrison [ 01:10:18 ] So I was very, very involved in an earlier generation of AI products, and a lot of this was around natural language processing and expert systems; those are the two main areas that I worked in. And a lot of this was frontline deployment in Fortune 100 companies. Just to give you a specific example, and I think this very directly ties in with where we are today with the current generation of AI: I had a project with a major Fortune 50 insurance company. And the goal at the time was to build an expert system that would replace the thousands of claims processors they had doing homeowners insurance claims processing.

      Dale W. Harrison [ 01:11:09 ] And I remember at the time meeting with senior executives, and someone made some comment about, you know, a lot of these people don't even have high school degrees; at most they'll have a couple of years of community college; monkeys could do this job. And it turns out monkeys couldn't do the job, and neither could AI. What you saw was that there was an extremely deep level of intuitive understanding on the part of those claims processors, even if they didn't have a lot of education, that allowed them to see subtleties that could not be captured with the AI technology.

      Dale W. Harrison [ 01:11:51 ] And so in the end, what we ended up with was a tool that was essentially an accelerator of the efficiency of the existing claims processors rather than a replacement for them. So it was like a more efficient stapler. But the upper limit of the value of that was a lot smaller than what people were imagining, because senior leadership at the time was imagining laying off thousands and thousands of people and replacing them with a bunch of software. And I think you see the same kind of delusional thinking today, and I think it's going to crash against the same set of rocks that we did in the prior generation of AI.

      Mark Organ [ 01:12:37 ] * Okay. So you're saying, if AI is not a distribution technology, what actually is it? Decision technology. I mean, decision technology. I see. And why do we have humans in companies? Why do we not have a Fortune 500 company with five employees? Because you need humans. And why do you need humans? It's not just to lift things. It's to make decisions. It's to think about the world, some tiny narrow aspect of that world, process that information, and make decisions. And the promise of AI, the promise of AI 30 years ago and the promise of AI today, is the same, which is: you can somehow replace these expensive humans with really cheap machines and get the same sort of decision-making.

      Dale W. Harrison [ 01:13:23 ] And this is the argument that I make: we are a century away from having software be able to do the level of decision-making that's going to allow massive replacement of the sort of human decision-making that you see on the factory floor and in big processing groups. And within marketing groups, there's a lot of subtle, deep understanding of what works and doesn't work that's locked up inside people's heads. Companies are very, very good at nurturing that human capital.

      Caitlin Clark-Zigmond [ 01:13:59 ] * And I think what you're going to see is that the current generation of AI technology is just going to be a shadow of what it needs to be to even begin to replace that sort of human capital. Yeah. Well, that's really interesting. That's a good segue to the next question, which is around adoption. So what is the main value that you are seeking from the adoption of AI in your companies? I think, for what I'm seeing in where CXOs have opportunities in 2025, and bringing it down to what we are doing today: it isn't asking AI to make the decision; it's asking AI to help bring the data to bear to help us make the decision.

    • Caitlin Clark-Zigmond [ 01:14:49 ] * So, you know, as Microsoft would tell you, that whole co-pilot idea, not the pilot, but your co-pilot in this world, is a big part of the decision-making process. And so one of the areas where I see it is really around predictive analytics: using AI to analyze historical data, really weaponizing the information and data sets already in existence within an organization. So yes, you have this human capital. Yes, you have tribal knowledge. But you also have tons of data, from finance to marketing, to sales, to sales plays, to partner relationships, to other things that you really could be leveraging, if you could only remember it. And each human brain can only remember so much. So I think that's one area where AI is going to be helpful.

      Caitlin Clark-Zigmond [ 01:15:33 ] Two is really around personalization from a marketing perspective. There are some really cool tools, some of them for content generation, and we've talked a lot about that, but how do you personalize not just the content but the journeys, and maximize the interaction points that we have with our customers? And I think we saw in this last panel how SDR teams are becoming more productive. So how can we take personalization to do that? I think automation of repetitive tasks is a third area that is really going to be helpful, where AI can take burden off and bring answers to people.

      Caitlin Clark-Zigmond [ 01:16:16 ] Then fourth is really data-driven insights. So how do you look at the past performance of the campaigns, the gaps in attribution or other things that we've been talking about, how do we think about the marketing funnel, the driving to delight, the prioritization of the next features on a roadmap, et cetera, and use data within the organization or within the marketplace to help us understand how to guide our businesses forward? And then on AI integration as a whole, I think we have to continue to work on the side with experiments to figure out where best to integrate AI into our businesses, within marketing or sales or product development.

      Caitlin Clark-Zigmond [ 01:17:03 ] There's lots of different places, but it's not a lift and shift; it's going to be test and learn. And not only because we want the AI to perform well, but because we have a lot of learning to do for our human companions in this journey: being good at prompts, being good at data sets, looking at whether the data is clean, how to understand hallucinations, and so on and so forth. So I think we're going to take a lot of people along for this journey. And while it might not be a distribution technology, it's still a data and information opportunity that businesses can definitely leverage to accelerate revenue and opportunity for them.

      Caitlin Clark-Zigmond [ 01:17:47 ] * And AI should help make that a little bit easier. We don't use washboards anymore. We use washing machines, and yes, you can still clean your clothes the old way, but no, you don't want to. So those are, you know, five areas where I see we can take advantage of this from a CXO perspective, that don't make the decision for us, but make the decisions easier. That's great. Thanks. I'd love to just build on that a little bit, splitting it into two. One is workflow automation. I think that even in this year, VC- and PE-backed companies are going to need to show the gain from AI in their efficiency, scale, and quality of delivery.

      Randy Wootton [ 01:18:29 ] And I've seen different slides going around amongst my fellow CEOs that next year, people are expecting something like 30%, so that you can show you can scale your organization by 30% using the available AI tools out there, like Zendesk for service, et cetera. So I think it fundamentally changes the way you think about operating and delivering and engaging. The other piece, and both Dale and Caitlin make me think of this, is AI as augmented intelligence, not artificial intelligence. And that was the big aha we had when we were at Rocket Fuel back in 2015. People were afraid of AI displacing them from their jobs, and what would they do next? It was the Jetsons-versus-the-Terminator type construct. The boom and the doom, I think, is how they describe it today.

    • Randy Wootton [ 01:19:14 ] And I do think that's the fundamental shift. And Dale, I think you put your finger on the button in terms of it's not just about workflow automation; it is about rethinking how you even approach work. The thing that is mind-boggling for me is that in every single interaction, there's a step of, oh, should I be playing with ChatGPT for this email, for this announcement, for this press release, for whatever. And Caitlin, I think that over time the interfaces become better, so you don't even need to, I mean, you need to know about prompt engineering now, but it's just going to get better and better over the next 12 to 18 months. You're not going to even know the concept of prompt engineering.

      Randy Wootton [ 01:19:51 ] * It's just going to be a more human interaction. But I do think: how do you relate to something that's really always there, in every conversation that you're having, in your work life, obviously? But does it become part of your personal life as well, constant, like the new AI in the new iPhone 16, what do they call it, Apple Intelligence or something like that? Ongoing all the time, this ever-present companion or augmented intelligence, and learning how to take advantage of it so you can move quicker, come up with better ideas, and make a bigger impact. Well, I think there's a valuable historical lesson from two centuries ago that

    • Dale W. Harrison [ 01:20:30 ] people should consider. So if we go back slightly more than two centuries, to the dawn of the industrial revolution: if you needed to dig a ditch, it took a lot of humans with shovels and not a lot of brain power. And when the early steam-powered tools, like the steam excavators, started to come into usage in the early-to-mid 1800s, you had a lot of moral panic that machines and automation would basically put all of the labor, the proletariat, out of work. In fact, if you look at the foundations of Karl Marx's work in the 1840s, it is entirely predicated on the idea that machines will disemploy 90% of the population, and that this will inevitably lead to revolution. And of course, what we see is that that's not what happened.

      Dale W. Harrison [ 01:21:29 ] And in the end, the machines still required a human to run them, and they still required other people to decide where the ditch needed to be dug and when the ditch digging was done, that higher-level decision making. I think that in many ways, the current generation of AI is essentially an automated steam shovel, and it's going to interact with the workforce and with organizations in a similar way to how the early introduction of those sorts of machines did two centuries ago. Now, at the same time, if we look over the last two centuries, it profoundly altered society, and one of the ways it profoundly altered society is that it upscaled the value of labor.

      Dale W. Harrison [ 01:22:19 ] [Transcription unclear.]

      Dale W. Harrison [ 01:22:49 ] * So for factories to figure out how to use electricity, the idea that you don't have one big giant motor, you have a lot of small motors on each individual machine, that took three generations to really work out, between the 1880s and the 1930s. And so I think the timelines on this stuff are multi-generational; they're very, very long. But hasn't it, though? In our modern age, the adoption curves are so much faster; what used to take decades now takes two or three years. That is simply a false belief. If you go back and look at the rate of adoption of new technology from basically the beginning of the age of coal, 1820 or so, all the way through the 20th century, it was a blistering pace of adoption of new technology.

      Dale W. Harrison [ 01:23:45 ] You know, again, look: the telephone is invented in 1875; by 1885 there are millions of telephones, because there was huge, rapid adoption. But it still took another 50 years for the telephone to really become what we think of today as the telephone. So I think the pace of innovation has not picked up; this is sort of the current generation being egotistical about how much better or different they are from the past. You go back and you look decade by decade over the last two centuries, and we've gone through just a blistering pace of innovation and adoption of new technology, and there's nothing that's really speeding this up.

      Dale W. Harrison [ 01:24:36 ] * Again, think about the internet. I have personally been on the internet since 1987. It took 30 years for the internet to become the ubiquitous distribution mechanism after it was available. So these are very, very long-lead-time changes, because they represent cultural and societal changes that basically require a multi-generational shift to really achieve. Right. And to your prior point, AI technologies are actually maybe even more challenging for companies to adopt than some of those. Yeah, so that's a segue to the next question, which is around, well, first, what are the barriers to adoption in your companies? We've heard a little bit on the human element

      Mark Organ [ 01:25:33 ] of that. What are the barriers, and then what are you doing to, to use Dale's words, culturally shift your companies to better adopt these technologies? Actually, I know somebody who's the AI czar at a mid-size manufacturer, and that's her job: to help people think differently about their work and to think about using AI. But I'm curious about your companies and what you're doing, and what your customers are doing as well, in order to speed adoption of these ideas and technologies. I can hop in here. Maybe before I jump into adoption: Dale, I am building an AI-first company, so I have about every opposite opinion to what you just said.

      Alexa Grabell [ 01:26:18 ] So I think we could have some good spicy debate here. But from my perspective, it's not years away; it's happening right now. And I agree with you that maybe AI can't be the bold decision maker right now. But what I am seeing it do is the first level of research and decision making, far better than a junior analyst or a junior SDR or whatever field you're in, to then streamline the decision-making process for folks later. So to me, AI is everywhere. And the beautiful thing with adoption is that with the internet, you had to learn something new, but with AI, it's embedded into existing workflows. And eventually, you won't even know you're using AI; you'll just have this co-pilot friend that is telling you what to do.

      Alexa Grabell [ 01:27:05 ] So Dale, we can go on and on about this. But I like the different opinions. For us, in terms of adoption: we are an AI account research and prospecting tool. And when I see companies adopt Pocus or other tools, someone mentioned there's an AI czar in another company; we see that a lot. We see AI tiger teams, or there's kind of a project manager of AI. And what they're doing is scoping out: what are the pains that we feel in our organization that we think can be improved with AI? And they're saying, 'Hey, we're going to take this small line-of-business team, or we're going to take 10 different users, or we're going to take this one use case, and we're going to test this out.

      Alexa Grabell [ 01:27:47 ] * And can we hit our goals? Can we make people smarter? Can we make them move faster? Can we remove the tedious work? Can we improve efficiency? If yes, scale it to the rest of the org. If no, that's not something that works for us right now; the AI is not good enough, we're not ready for it, whatever that looks like.' So I've already seen this happening a ton with our customers. I'll also acknowledge we sell to other tech companies, so I'm probably in a bubble of folks that are very forward-thinking on AI, folks that are waking up, eating, sleeping, breathing, and then going to bed thinking about AI. So there is more willingness to adopt versus more traditional companies.

      Caitlin Clark-Zigmond [ 01:28:25 ] I could imagine it would be a bit slower. I would add that I completely agree with you, Alexa, and I think we see a lot of the experimentation. And if we back up even before the pandemic, I think this was already starting with what you've seen in sales and marketing and how the whole go-to-market engine is coming together. It's really about the democratization of data. So your CDPs, your databases: how do you democratize the information that's available? Because even before AI, it was clear that in order for product to know what features to build, for marketing to know what campaigns to run, for sales to know who to go after and talk to, and for customer success to understand how to onboard and keep and retain a customer,

      Caitlin Clark-Zigmond [ 01:29:12 ] we had to start bringing all the different data points that each of the teams was collecting into a place where we could all leverage and utilize them. Now, AI becomes this great opportunity to accelerate those answers and integrate them and bring them to each of the teams, and collectively to the CEO, on how to then move the business forward. So I would say, I think what we see even in large enterprises, which are still slow to adopt, is that they had already started to make the motions on chief data officers and other things that were precursors to the explosion of AI. They were already starting to move because the amount of data coming into the business that needed to be processed had grown exponentially.

      Caitlin Clark-Zigmond [ 01:30:00 ] * And in order to make any use of it, besides storing it, we needed to start to do something. So that's just one area, but I think data democratization, generally speaking, is a really key component that will facilitate a lot of AI transformation going forward. Yeah, to your point, I think that before you have an AI strategy, you have to have a data strategy, especially for companies that aren't AI-first. And I think we found this in our own journey at Maxio, bringing two companies together; the way that they structured data, we couldn't put an intelligence layer on top. And I remember we were talking about Salesforce; you guys were chatting about that before I showed up.

      Randy Wootton [ 01:30:40 ] And they basically had to rip everything back down to bare metal and start over to be able to layer in Einstein as a decision layer. And so I do think companies have to get real about their data and understand where it is, how it's structured, how they're going to aggregate it, how they're going to put a data pipeline in, and how they can land it in a data lake. And that's a multi-year journey, again, for people that aren't AI-first. The companies that are getting born today are thinking in terms of vector databases; that's radically different from what we were using before. And I do think people underestimate how hard it is to get the data in shape. Now, the AI may advance to a level where it doesn't need as much structure in the databases or the data lakes, but currently it's really, really hard.
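    For context on the vector-database point above, here is a minimal, illustrative sketch of the kind of similarity search such systems perform. The tiny hand-written vectors and the cosine_similarity helper are hypothetical stand-ins for a real embedding model and vector store, not anything the panelists' companies actually use.

```python
# Minimal sketch: similarity search over toy "embeddings" with NumPy.
# The vectors and records below are invented for illustration only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these were produced by an embedding model over customer records.
records = {
    "invoice dispute, late payment": np.array([0.9, 0.1, 0.0]),
    "upgrade request, add seats":    np.array([0.1, 0.8, 0.3]),
    "revenue recognition question":  np.array([0.7, 0.2, 0.4]),
}

query = np.array([0.8, 0.15, 0.2])  # e.g., an embedded "billing problem" query

# Rank stored records by similarity to the query vector.
ranked = sorted(records.items(),
                key=lambda kv: cosine_similarity(query, kv[1]),
                reverse=True)
for text, _vec in ranked:
    print(text)
```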

      Dale W. Harrison [ 01:31:21 ] So one thing to throw out: the last period with an explosion of AI-first companies was 30 years ago. And 30 years is exactly the right amount of time for everyone to completely forget why things failed in the past. What I'm seeing is that the current generation of AI-first companies is making precisely the same mistake that doomed the prior generation 30 years ago, and they're completely oblivious to it. And the problem is this. First of all, it's a decision engine, and the thing with decisions is they're not always right. So across a variety of different contexts, you're going to have false positive and false negative signals coming out of the decision engine. If you do not validate and measure and understand what your rates of false positive and false negative signals are out of the decision engine, it will fail.

      Dale W. Harrison [ 01:32:19 ] Because what will happen is you may feel like it's doing great, but when you put this in the hands of 10,000 customers, they're going to intuitively know that there's a lot of BS here, that it is just not doing what was promised, and the products will be abandoned. And this is what happened 30 years ago. This is why you had what was referred to, especially in academia, as the AI winter, where you went through a period of time in which thousands of companies collapsed and nothing that was AI-related could get funded. And I see the exact same mistakes being made again, because people do not understand how to evaluate a decision engine, and so it feels good.
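    To make the validation point above concrete, here is a minimal sketch (not from the panel) of how a team might measure the false positive and false negative rates of a yes/no decision engine against labeled outcomes; the toy prediction and ground-truth lists are invented for illustration.

```python
# Minimal sketch: measuring false positive / false negative rates
# of a yes/no "decision engine" against known outcomes (toy data).

predictions = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]  # the engine's decisions
actuals     = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]  # what really happened

tp = sum(p == 1 and a == 1 for p, a in zip(predictions, actuals))
fp = sum(p == 1 and a == 0 for p, a in zip(predictions, actuals))
fn = sum(p == 0 and a == 1 for p, a in zip(predictions, actuals))
tn = sum(p == 0 and a == 0 for p, a in zip(predictions, actuals))

false_positive_rate = fp / (fp + tn)  # how often it flags things it shouldn't
false_negative_rate = fn / (fn + tp)  # how often it misses things it should flag

print(f"FP rate: {false_positive_rate:.2f}, FN rate: {false_negative_rate:.2f}")
```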

      Dale W. Harrison [ 01:33:07 ] Yeah, and I'll give you a very specific example. I'm a pretty heavy user of ChatGPT, and it's probably 80% incorrect. You literally have to know the right answer to be able to evaluate the answer it gives you. I use it in three or four key areas: I do market research; I use it as a reference tool for writing articles; I use it as an assistant to help me do complex statistical analysis; and I use it to write short to medium blocks of Python code. So if I need to do, say, a Monte Carlo simulation, I'll describe the simulation. Easily 80% of what it gives me is wrong. Now, it's close enough that I can fix it, so I can fix the Python code, I can fix the errors it's giving me in the statistical analysis and then do it correctly myself, but that's because I have a lot of expertise.
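    For readers unfamiliar with the kind of Python task Dale describes, here is a minimal Monte Carlo sketch of the sort one might ask an LLM to draft and then verify by hand; the pipeline of deals, the win probabilities, and the quota figure are all invented for illustration.

```python
# Minimal sketch: Monte Carlo estimate of the probability that a toy
# sales pipeline hits quota, given per-deal win probabilities (invented data).
import random

deals = [  # (deal value in $, estimated win probability)
    (120_000, 0.35),
    (80_000, 0.50),
    (200_000, 0.20),
    (60_000, 0.70),
    (150_000, 0.40),
]
quota = 250_000
trials = 100_000

random.seed(42)
hits = 0
for _ in range(trials):
    booked = sum(value for value, p in deals if random.random() < p)
    if booked >= quota:
        hits += 1

print(f"Estimated P(hit quota): {hits / trials:.3f}")
```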

      Dale W. Harrison [ 01:34:06 ] But one of the things that people don't understand, and don't realize, is that ChatGPT is just one incarnation, and a lot of this applies to a lot of the machine learning stuff out there as well: it is not the Oracle of Delphi channeling the wisdom of the gods; it is simply a summarization engine. It takes its inputs from, and again if you look at ChatGPT specifically or the LLMs, it ingests data from the internet. The problem is most of the data on the internet is garbage, and so what's going to come out is a lot of garbage. And the explanation I give is: it's like if I'm making sausages and I have 20 chops, but one of the chops is bad and I throw it into the mix anyway; are the sausages that come out five percent bad or a hundred percent bad?

      Dale W. Harrison [ 01:34:56 ] * The answer is they're a hundred percent bad. And this is the core problem that I'm seeing right now in my own usage of LLMs: you have to know the answer, because no one is checking it. Again, this goes back to the false positive, false negative checking on the quality of the decisions, and whether or not this represents an improvement over the human decision-making. And I think we're going to enter a second AI winter, because the existing AI-first companies completely fail to understand what's going to kill them. Well, the thing is, we will still have jobs and can't rely on AI only, and AI will hopefully just

      Anastasia Pavlova [ 01:35:44 ] augment what we do; that's why we are the experts. But I wanted to actually go back to Mark's original question about AI adoption and the barriers to adoption. Clearly, many of you have referenced the data issues: garbage in, garbage out. We all know that, so that's one of the biggest areas many companies are struggling with. And I think on the adoption curve, again, the bigger the company, the more difficult it is for them to integrate AI into their existing processes, because those are so complex, whereas AI natives, like Alexa's company, can just build some of these processes from scratch and use the latest and greatest tools.

      Anastasia Pavlova [ 01:36:35 ] As for some of the best practices for AI adoption: Mark mentioned an AI czar; I've seen companies form cross-functional committees, actually. So whether there is a czar at the top, or whether they report to the CMO or the CEO or a chief AI officer at some companies, it really depends on the size and complexity of the organization. But the first step in the process, I think, is really to define AI use cases within the organization. And that's a cross-functional effort, because there are so many. We all know the marketing use cases. There are plenty of sales or BDR/SDR-related use cases, research and development, coding. Those are all very important ones, but there are plenty in HR or legal as well, as we know.

      Anastasia Pavlova [ 01:37:32 ] So there are so many processes that can be automated for productivity gains and improvements, and very often those functions are not at the table for the discussion. So I think the progressive companies are really thinking about this deeply, forming these cross-functional committees, and defining and prioritizing use cases. And I think that's a very important part of the process for the different internal functions. I also see a disparity in the focus of many technology companies: they're really focused on how they can integrate AI into their products and platforms and services, and they think less about how they can upskill their internal workforce and how they can really execute on the internal use cases.

      Mark Organ [ 01:38:23 ] Yeah, actually, that's an awesome segue into what I really wanted to get into next, which is, you know, I think we've been using these technologies to improve productivity, in some cases to reduce costs, and whatnot. What I get excited about is when we can use technologies to increase our competitiveness and especially improve the experience and value for customers. And to your point about being cross-functional, you can't do that with just one siloed function; that requires a bit of a rethink. And I'd love to hear from you guys: what are you doing? What are your customers doing? What are you hearing about in terms of using these technologies to fundamentally improve the value proposition for our customers, the experience that they get with us, and the value they get from us?

    • Anastasia Pavlova [ 01:39:17 ] * Well, I can start by saying that with some of my clients, we do customer research with the help of AI. It's a basic use case, but we can do it so much better and faster, from creating surveys to analyzing the data that comes in, or transcription of conversations and then analysis of customer conversations, or sentiment analysis. These are the kinds of fundamental use cases where AI brings enormous benefits to the customer. And then we can go deeper, use our human touch, follow up on those conversations, and really incorporate the fundamental insights from the research into our roadmaps or service offerings.
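    As a minimal illustration of the sentiment-analysis use case mentioned above, here is a deliberately tiny, keyword-based sketch; a real workflow would use an LLM or a trained sentiment model, and the word lists and feedback snippets below are invented for illustration.

```python
# Toy sketch: scoring customer feedback as positive/negative by keyword counts.
# A stand-in for a real sentiment model; data and word lists are invented.
POSITIVE = {"love", "great", "easy", "fast", "helpful"}
NEGATIVE = {"slow", "confusing", "bug", "broken", "frustrating"}

feedback = [
    "Love the new dashboard, setup was easy and fast",
    "The export feature is broken and support was slow",
    "Reports are helpful but the pricing page is confusing",
]

def score(text: str) -> int:
    """Positive minus negative keyword count; > 0 leans positive."""
    words = {w.strip(",.").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

for comment in feedback:
    label = "positive" if score(comment) > 0 else "negative or mixed"
    print(f"{label:17} | {comment}")
```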

      Dale W. Harrison [ 01:40:19 ] * So, Anastasia, I think that's an absolutely perfect use case for the current generation, especially with the LLMs. Agreed. You have a lot of direct control over the input data that the model ingests, which then gives you some certainty over the kinds of responses that are going to come out of the model. I don't think you get a better use case. But there's a slightly adjacent use case that I think is one of the most absurd things happening in marketing right now, which is the concept of the synthetic audience. There's a very high-profile company that is pushing this idea, and it's utterly absurd. And here's why it's absurd: these engines are able to summarize data that they ingested, nothing else.

      Dale W. Harrison [ 01:41:10 ] And where does the data come from? It either comes from your own proprietary data that you ingested into it, or, in most cases, it's coming from the internet. And this idea of synthetic audiences is: we're going to take a generalized engine and suck all the stuff from the internet. Well, the question is, what's in the ingested data? And the answer is, for every sentence that's written by an actual customer of a given product, there are probably 10,000 sentences written by marketers trying to sell the product. And so what you get is an echo chamber of what marketers think customers want, not a reflection of what customers want. And yet what comes out of it feels very authentic.

      Dale W. Harrison [ 01:41:52 ] But what you really are listening to is essentially just an echo chamber of what other people, just like you, think the customer is thinking, not what the customer is actually thinking. And it's this inability to understand where the data came from and what's actually being processed that leads people to this kind of magical thinking about what comes out. One of the phrases that I use for LLMs is that, in their current state, they are essentially algorithmic mansplaining engines. They have this kind of superficial knowledge, they're very confident in their answers, they always sound smooth, but about 50% of it is bullshit.

      Dale W. Harrison [ 01:42:40 ] * And so it's like we've taken mansplaining and we've been able to fully automate it with several hundred billion dollars' worth of investment. I mean, I don't disagree with you at certain levels. I think you're seeing it show up in a number of different environments where, to your point, it's very distinct data sets and it's also user-driven. So the example I would give is consumer and retail: if I'm buying this blue jacket and I like pinstripes and I'm wearing this, okay, and I purchased it.

      Caitlin Clark-Zigmond [ 01:43:25 ] Well, then what would go well with it? What else is like this? So some of the consumer expectation is personalized content, the next best action, some of those things where it's really driven off the behaviors of the user: what are they searching for, what did they purchase, what did they like, where did they go next, and so on and so forth. I think there's going to be a lot there, because with the personalization you're already seeing from a marketing perspective, people who are doing personalized journeys and other things are obviously accelerating much faster from a sales and revenue perspective.

      Caitlin Clark-Zigmond [ 01:44:05 ] I think the other one, though, is that it'll also allow, practically, for businesses, and you're talking about efficiency, the business benefit of really reducing customer acquisition costs by being able to target specific personas. Now, you're right, if we're ingesting everything the internet and every marketer is saying, that's a problem; but if we're really focused on who our buyers are, what our buyer journey looks like, and who our personas are, then one of the things we're already seeing, especially in the space of selling trust and security services and AI performance optimization, is that we can go right after our specific personas with the things we can truly target, the ways we can truly help people, and have our sales teams run plays that really focus on those.

      Caitlin Clark-Zigmond [ 01:44:54 ] And it's not about boiling the ocean. It's about being very targeted and looking at the things that we can focus on to get to results, and then keep moving. I think that's an opportunity from a business perspective. And then from a personal perspective, whether using Claude or Gemini or ChatGPT, they all have somewhat different skill sets and things that they're better at, from images to video to content and so on and so forth. And I think that personally, as we use these, we're going to start to find, based on our focus areas, whether we're in science or pharma or business or whatever, that there are going to be certain ones that probably lend themselves to better outcomes.

      Caitlin Clark-Zigmond [ 01:45:45 ] And I think that the co-pilot component of this is really around, I mean, things that we're talking about at Intel around the AI PC: the whole idea isn't to ingest every single thing from the internet; it's taking every single thing on your laptop, all the work that you do, the views and the searches and the content that you're driving, and making it easier. So you can say, what was that PowerPoint with the blue grid that I did in January, and have it go find that, instead of me going through file folders looking for all of that.

      Caitlin Clark-Zigmond [ 01:46:25 ] * So with a lot of the businesses I talk to, no one's trying to have it make all the decisions and replace the humans. They're trying to figure out: how do I democratize and weaponize the data that I have, how do I more quickly connect with my customers and what they need, and what can these tools do to accelerate that for me? So one thing to throw in real quick: the AI debate, I think, as I said, is a subset of a larger debate around the use of models versus data. So, for instance, you talked about relying on user personas. But the problem with most companies is, those user personas, how do they come into existence?

    • Dale W. Harrison [ 01:47:12 ] What data was ingested to produce the personas? The answer is a bunch of marketers sitting by themselves in a closed conference room, none of whom have ever laid eyes on an actual real-life, breathing customer, imagining what they think those customers want. And that is simply not what those customers actually want, and yet there you get the personas. Now, that's not true for every company, but it's broadly true within marketing, and so you basically have a model that's going to tell you stuff that's either irrelevant or wrong, because it is disconnected from the original source, the primary data source that contains the truth.

      Dale W. Harrison [ 01:47:52 ] And again, I think we have the same issue, and I'm speaking primarily about the LLMs, in that the data that gets ingested is largely disconnected from what we really want to know about the operational details of the real world. Now, I think there are exceptions. For instance, in certain consumer product categories where there is an extensive amount of, say, product reviews, you've got the chance to have an abundance of input data that represents the actual voice of real users, of real buyers. But in most cases, almost all the data is the equivalent of a bunch of marketers sitting in a room trying to fantasize about what they imagine customers want and look like, and producing persona documents that are typically absurd.

      Dale W. Harrison [ 01:48:42 ] And the thing I always say is: take your persona document and show it to the ten best sales reps in your company, and count what percentage of them laugh out loud, because they actually spend all day long talking to these people. And I've seldom seen a persona document that survives the test of showing it to the sales team, because they are the ones that have that firsthand, ground-level knowledge. And this is the problem with LLMs: we have to understand where the data is coming from. Again, to Anastasia's point, if you have complete control over the input data that's being ingested into the model, and you understand the provenance of that data, and you're comfortable that it represents ground-level data from the real world, you're going to get good results.

      Dale W. Harrison [ 01:49:29 ] * But outside that use case, you are far more likely to get results that are really disconnected from the data, from that ground-level truth that we need to know to be able to do our jobs. So we need transparency. There's a fundamental difference between revenue marketing and awareness marketing; anyone who's in revenue marketing actually does talk to customers, but we can leave that for another debate. Well, but that's exactly right. I mean, we need, first of all, transparency around what goes into the model. Right. And to control for some of these biases, we need to really validate whatever that synthetic research gives us, talk to real customers, and use that. But some companies, or their investors, don't have the patience to let marketers do their jobs.

      Anastasia Pavlova [ 01:50:30 ] * So we're all under pressure to deliver results tomorrow, if not yesterday. But what comes out of the LLMs is irrelevant; the only thing that matters is what goes into them. And this is what no one's looking at. Everyone's gushing over the outputs, and no one's having serious conversations about the inputs. Yeah. No, it's important. Yeah. A couple of points here. The area that I'm really excited about is how AI might be used for improving customer success, the customer experience, and value. We now may have the ability to ingest more customer data, like inviting customers to provide more information to us, which may also allow us to drive more value in terms of the insights that we can deliver.

    • Mark Organ [ 01:51:25 ] * So I'm excited about that. I don't think you could do this before this technology unless you had an army of people. In fact, I know of a company that hired a whole bunch of people in China 20 years ago to slice and dice customer data and provide that to frontline customer success people so they could drive more insights for customers. That company was WebEx, and they did this in 2004. I was blown away by that. Now, I think you maybe have the ability for AI to do more things like that and to enhance the value that we can create by using customer data. But I'm curious if any of you are looking at customer success, customer value, customer experience.

      Mark Organ [ 01:52:07 ] * I mean, is there anything that we're doing there to make AI customer-facing? I think a good recent use case that I saw around AI really helping improve customer experience and deliver value is Kroger. Kroger is processing tons of data at all of their stores across the country: every purchaser, every digital shopping cart, every single purchase that is being made. And what they're doing, whether it's them or Whole Foods or whoever, is looking at all the analytics of what's going into carts, what's being scanned as people are purchasing, looking at buying behaviors, and looking at the personas of those purchasers.

      Caitlin Clark-Zigmond [ 01:53:02 ] Because we sign up for our points or coupons or guest discounts, et cetera, they can see that we're 20-somethings, 30-somethings, 40-somethings. And what that really does is allow them to do cool things. Like when you fill your cart and you want to go pick it up, or you're going to have it delivered, you get cool stuff like, did you forget something? It can remind you that you buy guacamole every week, or that you do certain things. And it can get that personal and make the discounts personal too. So instead of just offering a discount for Maxwell House coffee, if you're a Folgers drinker, or you like the Starbucks espresso pods, the discount that it's going to give you is specific to the things you buy.

      Caitlin Clark-Zigmond [ 01:53:54 ] * And so it can actually take the dollars it wants to spend on marketing and give you the discount in a way that really makes a difference to you. And that improves customer retention and brand loyalty. So I think that's a really great example of where a very large enterprise is leveraging AI to really help connect and stay connected with its customers and bring value to them on a daily basis. There are so many use cases on the customer success side, and companies initially really focus on, okay, how can we reduce the churn? And that's just not the right focus, because in order to reduce churn, you need to understand the reasons for your churn.

      Anastasia Pavlova [ 01:54:40 ] * And that often is a result of poor adoption of your platform. So you really need to get your customers to understand how to use the features of your platform or your service. So there is a lot of education, and I think AI is just a perfect tool that can, again, be tailored to your specific platform and to your customers. You can use it to micro-segment your customer base, point your expensive customer success resources at your tier-one enterprise, biggest customers, and then use the technology to make it easier for your other customers to get the most benefit and value out of your platform. That's an awesome use case. Room for a couple more people to chime in before we sign off. Yeah.
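    As a minimal sketch of the micro-segmentation idea described above, here is a toy example that tiers customers by ARR and product usage and routes them to either a human CSM or automated education; the thresholds, customer records, and routing labels are all invented for illustration.

```python
# Toy sketch: tiering customers and routing CS effort (invented data/thresholds).
customers = [
    {"name": "Acme Corp", "arr": 250_000, "weekly_active_users": 180},
    {"name": "Beta LLC",  "arr": 18_000,  "weekly_active_users": 4},
    {"name": "Gamma Inc", "arr": 90_000,  "weekly_active_users": 35},
]

def tier(customer: dict) -> str:
    """Tier 1: high ARR gets a dedicated CSM; others get automated education."""
    if customer["arr"] >= 100_000:
        return "tier-1: dedicated CSM"
    if customer["weekly_active_users"] < 10:
        return "at-risk: automated onboarding campaign"
    return "tier-2: digital-led education"

for c in customers:
    print(f"{c['name']:10} -> {tier(c)}")
```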

      Alexa Grabell [ 01:55:37 ] I'm happy to chime in. I think there are a lot of different use cases for AI in CS that I've seen. I can talk about our tool and then others. The one where, Caitlin, I think you were talking about it, where you can look at past conversions. I mean, we sell to B2B, so look at past conversion rates, what has helped you either book a meeting or close a deal or prevent churn or drive expansion, and use all that data to then predict what other potential future customers or buyers could be in that bucket. That's a great use case. I actually think that's been around for a while; the data science element and predictive analytics is now, quote unquote, older AI.

      Alexa Grabell [ 01:56:12 ] But the newer AI, what I've seen being really interesting, is the summarizing of information to help CS teams prepare for a meeting, prepare for their deal, prepare for their upcoming executive business review. And I actually think that once you have all of your external data, anything on the internet, looking at 10-Ks or news articles or podcasts or LinkedIn posts, everything's available online now, and you combine that with the internal data, so ingesting your emails and your call transcripts and all this information, you can really prep a CS person or a CS team or a support team, whatever that is, to do their job really, really well. And so again, not to always reject your opinions, Dale, but for us, what we see is that all of these insights are already super powerful for go-to-market teams.

      Alexa Grabell [ 01:57:07 ] * Because even if the data isn't perfect, look at what data they're looking at today anyway; no data is perfect, but you're getting it faster and you're getting it smarter. And I remember Sam Altman said something like, if you're building an AI product and you can see a future where the AI gets smarter week over week, then that is a good product to be building and investing in. And you can already see how much stronger the AI has gotten in a year, and that's just going to pick up the pace of getting smarter. So I'm very optimistic about that. But, Alexa, I think you raise a good example of where, if you're starting with the wrong model, all it will do is give you wrong answers faster. And, oh, sorry.

      Alexa Grabell [ 01:57:57 ] * Did your Alexa go off? So specifically, there is a fundamental problem with trying to work backwards from success to come up with any valid predictions. And this is a major flaw that I'm seeing in the machine learning world. You cannot look at closed-wons and produce a causal analysis to determine what you need to do in order to have more closed-wons; that is fundamentally not how causal analysis works. This goes much, much deeper than the kind of surface-level stuff that programmers who learn some machine learning are doing. And what you're seeing is that these same failures to understand these sorts of deep, fundamental data principles, which have been infesting the machine learning world for 15 or 20 years now, are being carried over into the AI world.

      Dale W. Harrison [ 01:58:55 ] * This is a fundamentally failed model for doing predictive analytics. And yet with AI, you can now run this fundamentally failed model even faster, with even more data than you could before with just machine learning. And so if you don't have that deep understanding of what is and is not a valid model structure, especially if you're trying to do any sort of causal analysis, and any form of predictive analysis is causal analysis, all AI will do is make the bad stuff happen faster, and probably cheaper, too, which will make it even more widespread. But, you know, there are ways; we can chat offline. I know we're at time, but there's also: bad prompt engineering equals bad outputs.
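    To illustrate the "working backwards from closed-wons" trap described above, here is a small, hedged simulation (not from the panel): a deal attribute with zero causal effect on winning still looks over-represented among won deals, because a hidden confounder drives both. All numbers are invented.

```python
# Toy simulation: why looking only at closed-won deals misleads causal claims.
# "exec_sponsor" has NO causal effect on winning; buyer intent drives both.
import random

random.seed(0)
deals = []
for _ in range(100_000):
    intent = random.random()                 # hidden confounder
    exec_sponsor = random.random() < intent  # more likely when intent is high
    won = random.random() < intent           # winning depends only on intent
    deals.append((exec_sponsor, won))

overall = sum(s for s, _ in deals) / len(deals)
won_deals = [s for s, w in deals if w]
among_won = sum(won_deals) / len(won_deals)

print(f"P(exec sponsor) overall:      {overall:.2f}")
print(f"P(exec sponsor | closed-won): {among_won:.2f}")
# The second number is much higher, yet adding exec sponsors to deals would not
# change win rates here: correlation driven by a confounder, not causation.
```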

    • Alexa Grabell [ 01:59:42 ] * And so there are ways to do good prompt engineering to get good output. So I can show you how our product does all the things that you just said. Yeah, yeah, yeah. We'll continue this panel afterwards. This is fascinating. Well, thank you. Thanks, everybody, for participating, and I'm sorry we didn't get to all the great questions from the audience. Although I think we had the question of where the chief of staff function is going in this age of AI, and I think we actually answered a bunch of that in terms of adoption. But thanks, everyone, and I look forward to continuing this conversation offline. Thank you. Thank you so much, everyone. Thank you, Mark. Last question from the audience: if you were to build another startup this year, what people would you hire, if any? And what would be your tech stack?
