Text transcript

When Agents Fail: Why Outcomes Must Come First

Event held on October 28, 2025
Disclaimer: This transcript was created using AI
  • Julia Nimchinski:

    Awesome, thank you so much. And we are transitioning to our next session.

Welcome to the show, Joe Klinn, Head of Sales at Hook, and Sam C, founding GTM. How are you doing?

    Joe K:

Hey, Julia, good. Hi, everybody, good to see you.

    Julia Nimchinski:

Hey, Sam. Welcome back!

    Sam C:

    Thanks, good to see you again, Julia. How are you doing?

    Julia Nimchinski:

Great to see you, excited about this topic, and you know what I’m gonna ask: hard-hitting results, ROI, case studies, internal, external, anything tangible about this futuristic concept.

    Sam C:

Yeah, I actually have a good one that we recently shipped. If you look at our LinkedIn, we shipped a feature, I think at the end of last week, where you can effectively tell an AI agent to look out for certain pieces of intelligence about an account, both internal and external. One of our customers was quite innovative, and they programmed theirs to be looking for intel that could indicate an upsell. So they jumped into Hook and were immediately able to see something that (obviously it’s demo data) would have looked a bit like: tell me all of the upsell intel for Meta Industries, all of the strong internal signals (we’ll go into later how we crunch all of those internal signals using lots of AI agents), but then also all of the external signals.

    And they saw, like, this company had actually recently acquired someone.

    And internally, they were using the platform really strongly and actually asking for some functionality beyond their current subscription tier.

No one was actioning it, and so they reached out off the back of this and managed to close a seven-figure upsell really quickly, because obviously it’s an existing customer, so you don’t have to go through quite as much process on paperwork, etc. So that’s one that springs to mind for me.

    Julia Nimchinski:

    Super cool. Joe. What are you allowed to share?

    Joe K:

Sam’s done a good job there. I think it ties nicely into what we’re about to talk about, but the approach that we’ve taken is that we need to focus on outcomes more than we need to focus on agents that do specific tasks.

    And so, all of our customers are achieving outcomes that Sam’s talked about.

    I think Sam’s maybe just referenced an upsell opportunity.

    We do the same thing for churn. So, most of our customers, if they take the actions that our agents are suggesting, see a material 41% reduction in churn across the board.

If they take an action that our agent has found, then suggested, and then taken for them, and that action is completed end-to-end, we see really material impact for our customers on churn as well.

    Julia Nimchinski:

    Cool. Well, let’s… let’s dive into this.

    Joe K:

Yeah, awesome, great, let me just share my screen. Let me know, can you see that okay?

    Julia Nimchinski:

    Yep.

  • Joe K:

Perfect. So, the title of our talk is around multi-agent approaches, and when agent swarms fail, outcomes need to come first. I think it’s a really interesting period of time at Hook that we’ve been going through.

As we all know, the multi-agent approach, agent swarms, is sort of transforming a ton of industries at the moment, and our hypothesis is that that’s not necessarily a great thing, or the best use of agents. The reason being, if you look across loads of industries now, tons of products have hundreds of AI agents that are completing tasks. So you think about the note-taking agent, the task creation agent, the email drafting agent, the meeting summary agent.

What all of these things have in common is that they complete specific tasks, but they often don’t talk to each other, and there’s often quite a lot of unnecessary complexity that comes with that, because as a user, which part do we use when, and which agent gets involved when? And they don’t have context across agents to understand what to go and do.

That sort of approach we almost fell into as a leadership team at Hook, maybe six, seven months ago. We were thinking through how do we take what we do today, which Sam’s going to talk about in a second, and how do we start layering in a lot more of an agentic approach?

    How do we start introducing agents into the workflows that we have for the customers that we work with? And… we were thinking along the same lines, like, how do we… how do we come up with agents that do certain tasks that we see our customers do, right?

Like, an email writer agent was one of the first things we were thinking about building, or an org chart building agent: how do we piece together who the people in the account are, and who we need to go and talk to? But when we sat down and started putting this all together, we sort of thought, and our CEO tends to charge forward on this before all of us, but he’s like:

if other people are zigging, we need to be zagging. So we took a step back to understand the bigger picture of what we’re trying to achieve here.

And so what that then led to is us, as a business, defining agents in a very specific way: agents are built not just to do something, but to achieve something, to achieve a business outcome.

    So take all of these smaller agents that do specific tasks, put them into one thing, and one specific agent that owns a business outcome.

    So as a business, we went on that journey of, okay, instead of it just being task-orientated, we need to move to an outcome-orientated approach.

And what that has led us to as a business is this: Hook’s North Star is all about automating the role of the CSM to give the CSM time back to go and spend with customers, right?

And because we define agents as needing to drive a specific business outcome,

our platform is now being built in a way that each agent does a multitude of different tasks, tasks that might individually be considered agents, but we think a true agent is one that delivers a business outcome and does something end-to-end. So, if you start on the left-hand side, our ECHO tool: think of that as a data scientist that identifies risk, pulls in a ton of data, and understands the business’s goals, the customer’s goals.

It understands what we need to go and do and who we need to go and speak to, and identifies all of those opportunities and risks. We then have our Activator tool. Tom’s going to talk about all of this when he takes you through a demo. Think about an onboarding manager or implementation manager: tracking everything that needs to be done, understanding a customer’s goals, executing and tracking all the tasks, but then also doing them, right?

    Doing them during that onboarding process. And then finally, a sidekick.

Think about an assistant to a CSM, doing all of the admin and busy work that tends to take CSMs a huge amount of time in their day. How can we automate that? How can we take all of that time out of the day-to-day so they can spend it with customers, driving value?

    So, agent swarms are a really exciting thing, but I think the way that we’ve tried to think about this is, instead of it just being specific tasks that we can automate out of the day, and specific roles, how can we, as a business, create agents that completely own the outcomes of certain things?

    And that’s really what we’ve… we’ve focused on building at Hook.

    So, with that.

    I will hand over to Sam, who… I think you’re going to share your own slide, Sam?

  • Sam C:

Yep, I can jump in. So, when we were building this year’s roadmap, we really felt this was key, because a lot of people have co-pilots, which is kind of what we were hinting at, but we feel like AI agents should really own an outcome.

They should do what a human could do. So if you really break it down, the first step when you’re building these things is to define what a human actually does. And what we realized is that the CSM role is infinitely complex, but there are actually three key things that they do day-to-day.

First of all, we need to start by gathering intel. Like, if you’ve got thousands of customers, you need to know who’s at risk, who’s ready for upsell, who do I need to focus on today? That’s followed by some way to create a plan off the back of it.

    So if this customer’s at risk, what can fix this particular thing?

    And then finally, actually take action off the back of any of these insights.

And these things sound really simple, but the real life of a CSM is that gathering intel is very, very tough, because the signal that indicates a customer’s at risk could live in usage, CRM, emails, tickets, or the product. Because of that, it takes so long to research an account, and even longer to generate a plan.

    And so finally, when it’s time to actually take action, it’s really tough to test that what you’re doing is actually effective, to test what’s working.

    And so if you’re trying to build an automation, or to send stuff in a more scalable manner, it’s really tough to do that confidently, because you can’t test what’s working.

And so we decided our goal should be automating each of these key steps using specific agents: one that owns the outcome of gathering intelligence, one that owns the outcome of actually creating a plan, kind of like an EA or someone assisting you would, and one that owns taking action to try and renew or upsell a customer.

    And the way that we do that.

is we take a few different types of data. First of all, what we call structured data, which is anything that would fit in a spreadsheet: usage, adoption, ticket volumes, customer spend.

    And then we match it with unstructured data, which is, what you think of as, like, the voice of the customer.

So, what are people saying in emails, tickets, and calls, as well as what can we see externally about your customers, maybe from the news or from stakeholder information about people changing companies.

And as many of you know, LLMs tend to really struggle with numerical, structured data, and so we run lots of different types of AI on this data to get the maximum possible insight from it.

We actually started off as just a machine learning company, so with more traditional AI we search the past to work out what best correlates with churn or renewal, in order to predict which customers are likely to churn or renew in the future. The outputs of these analyses are what we give to the LLM to give it context about the structured data, because if you’ve ever tried uploading a spreadsheet into an LLM, it’s not very friendly.

And then we use more traditional AI on the unstructured data and the outputs of the LLMs to detect risk and opportunity, to basically be listening like a CSM would.

    And to do this, we feed it all into ECHO, which is our risk and opportunity detection agent.
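To make that two-track idea concrete, here is a minimal sketch in Python. Everything in it (the function names, the risk phrases, the weights, the 0.5 threshold) is invented for illustration; it is not Hook’s actual model, just the shape of the pipeline Sam describes: a statistical score over structured metrics, a listener over unstructured text, and one merged risk decision feeding an ECHO-style agent.

```python
# Hypothetical sketch of the two-track pipeline: structured metrics go
# through a churn model, unstructured text goes through a "listener",
# and both merge into one risk summary (a stand-in for an ECHO-style agent).

def churn_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of normalized usage metrics; higher = more churn risk."""
    return sum(weights[k] * metrics.get(k, 0.0) for k in weights)

RISK_PHRASES = ["competitor", "cancel", "not seeing value"]  # illustrative only

def flag_signals(texts: list[str]) -> list[str]:
    """Stand-in for LLM listening: flag texts containing risk phrases."""
    return [t for t in texts if any(p in t.lower() for p in RISK_PHRASES)]

def detect_risk(metrics, weights, texts, threshold=0.5):
    """Merge both tracks into a single risk decision."""
    score = churn_score(metrics, weights)
    signals = flag_signals(texts)
    return {"score": score, "signals": signals,
            "at_risk": score > threshold or bool(signals)}

result = detect_risk(
    metrics={"logins_drop": 0.8, "ticket_volume": 0.3},
    weights={"logins_drop": 0.6, "ticket_volume": 0.4},
    texts=["We are evaluating a competitor next quarter."],
)
print(result["at_risk"])  # True: both the score and a text signal fire
```

In a real system the keyword list would be an LLM call and the weighted sum would be a trained model, but the merge step is the same idea.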

    And so, if you think about the role of a CSM from earlier.

    That’s the first step, which is understanding who to focus on, who’s at risk, who’s ready for upsell. And then the next step, if you’re thinking about owning an outcome, is we need to understand the plan for next steps based on any of these insights.

    So, what content could help this customer get healthy again?

    What intervention could maybe book them in for an upsell conversation?

We reverse engineer, from Echo’s insights, a set of next steps to fix any customer risks, much like a CSM would do themselves, but obviously our AI has full context of all the customer conversations and their usage. And then finally, we deploy this through Sidekick,

which is basically an agent with a set of tools, such as meeting preps or sending emails at scale, so that it can actually send out these Activator responses. So if you’re a CSM, you can be the orchestration layer, the person managing three distinct agents: one owning the outcome of detecting risk for you,

one owning the outcome of creating plans for you, and one actually executing at scale, so that all these things can help you reduce churn or hit upsell targets. And obviously, this is where AI really starts to come in, because we’ve now got a full 360 picture.

    We can actually have the AI own this outcome by conducting each of these agents underneath it. And so, this is really what we’ll focus on today, is how can we take data, feed it into AI, and use lots of different ways to get insight out of that.

But then, crucially, roll all of these separate AI agents up into one real conductor agent that actually owns the outcome of reducing churn, improving upsell, or improving efficiency by managing some scaled customers without a human in the loop.
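A rough sketch of that conductor pattern, with invented names and toy logic (not Hook’s implementation): one outcome-owning function that chains detect, plan, and act sub-agents.

```python
# Hypothetical conductor: one outcome-owning agent that calls
# detect -> plan -> act sub-agents in sequence. All names are illustrative.

def detect(account: dict) -> list[str]:
    """Sub-agent 1: surface risks/opportunities for an account."""
    signals = []
    if account["usage"] < 0.3:
        signals.append("low_usage")
    if account.get("asked_for_upgrade"):
        signals.append("upsell_ready")
    return signals

def plan(signal: str) -> str:
    """Sub-agent 2: turn a signal into a next step via a playbook lookup."""
    playbook = {"low_usage": "send re-engagement email",
                "upsell_ready": "book upsell call"}
    return playbook.get(signal, "escalate to CSM")

def act(step: str) -> str:
    """Sub-agent 3: execute the step (here, just record it)."""
    return f"done: {step}"

def conductor(account: dict) -> list[str]:
    """Owns the outcome end-to-end by orchestrating the sub-agents."""
    return [act(plan(s)) for s in detect(account)]

print(conductor({"usage": 0.2, "asked_for_upgrade": True}))
# ['done: send re-engagement email', 'done: book upsell call']
```

The point of the pattern is that callers only ever talk to `conductor`; the sub-agents stay an internal detail.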

    And if I jump into Hook.

I’ll start by running through a single account to show how we really think about using agent swarms to give as much context as humanly possible about how a customer is behaving. And then I’ll finish off by jumping into something like the new Hook homepage that we’re about to release, which shows how we can use all of that agent information to know really clearly what the agents are doing in the background to actually improve churn or upsell.

The key thing we want to start with here, at Reddit, is that we pull in all of the data about this account to give the AI context about what’s going on. We pull in all of the structured data about how they’re using the product, how they’re meeting CSMs, how they’re engaging with certain features, so we’ve given it all the context on the numerical side. We then also pull in unstructured data: meeting summaries, support ticket summaries, emails back and forth, external signals.

We put all of this in one place. And like I mentioned, we then analyze the numerical data historically to work out what best correlates with churn or renewal, to build a picture of leading indicators: which metrics best correlate with the outcomes you care about.

We also build a picture of benchmarks: what should customers be hitting, per metric, in order to be likely to renew? And we add all of these things up together.

We turn all of this structured data into, and abstract it away behind, a really simple churn prediction score, so we know how the customer’s tracking in terms of their structured data. This is one of the AI agents, the churn prediction score, that then feeds into the conductor layer I’m about to show in a minute.

To get insight out of the unstructured data, we then basically listen to all of it and work out: when customers tend to churn, or when their sentiment turns negative, what do they tend to do, say, or act like before those things happen?

So, for example, do they tend to mention competitors in support tickets, or complain about this or that in email? Or do your users, who we also track, tend to go inactive before you have issues?

And instead of you having to then read through all of these different summaries,

we’ll flag automatically, with AI, when certain things need your attention. This is another risk and opportunity detection agent that feeds into Echo. And so, if we think back to agents needing to own outcomes, these are the two ways that we detect risk and opportunity in your customer base.

We detect it through key metrics, and we detect it through unstructured data. And what this looks like is we’ll give an exact summary to tell you: okay, the AI agent here has been crunching the numbers in the background, much like a data scientist would.

    So this is doing something that a human would otherwise do.

We’ve detected that they’re having issues with templates, they’ve mentioned a competitor, and recently your main point of contact has actually left the business. Just to help you sense-check, if we hit View Findings, we’ll tell you why we’ve flagged this as important.

We’ll cite the sources where we found certain risks, such as an email or ticket, just so you can actually sense-check. And in much the same way, we’ll be detecting upsell for you: in the same way that a BDR might be constantly going through signals and working out what to act on, this is owning that upsell generation number by detecting that this customer is undergoing M&A activity, and that they’ve asked for some functionality beyond their subscription tier. And so again, if we jump in, we’ll see the sources cited and why the AI has flagged this as important. This is really that first step of understanding who’s at risk and who’s ready for upsell, and it’s what informs the conductor agent that actually owns the outcome.

The next step is that we then want to understand what actions we can take, because this is what humans do, right? We detect risk, and then we actually take action on it. This is kind of what defines agentic.

We then reverse engineer each of these signals into what we call playbooks, which are basically sets of next steps that the AI can either send out automatically, or that humans can decide to send themselves. So in this instance, to solve this problem, we’ll scan our MCP (Model Context Protocol) server, which basically has all of the details about this account: recent meetings they’ve had, emails back and forth, product usage.

    And we’ll say, okay, when customers have had similar issues in the past, what solves this problem?

    Or if it’s a net new issue, what could solve this problem, based on what we can see in your help docs? And we’ll suggest a set of next steps based on it.

    So in this case, one of the things we need to do is follow up on this new expansion.

the AI will actually generate the email for you with full context about this customer: why they bought, what the next steps are, and maybe this Template Control 101 enablement, something that solves this particular problem. In much the same way, for the upsell, the playbook generates a set of next actions that’s much more appropriate for a potential upsell opportunity.
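The playbook lookup Sam describes ("when customers have had similar issues in the past, what solved this problem?") can be sketched like this; the issue names, resolutions, and history records below are all made up for illustration, not real product data.

```python
# Illustrative-only sketch of "what solved this in the past": look up prior
# accounts with the same issue and reuse resolutions that preceded a renewal.

def suggest_steps(issue: str, history: list[dict]) -> list[str]:
    """history: [{"issue": ..., "resolution": ..., "renewed": bool}, ...]"""
    wins = [h["resolution"] for h in history
            if h["issue"] == issue and h["renewed"]]
    # Fall back to escalation when there is no proven precedent.
    return wins or ["escalate: no proven playbook for this issue"]

history = [
    {"issue": "template_confusion", "resolution": "send Templates 101 guide", "renewed": True},
    {"issue": "template_confusion", "resolution": "ignore", "renewed": False},
    {"issue": "sso_setup", "resolution": "book setup call", "renewed": True},
]
print(suggest_steps("template_confusion", history))
# ['send Templates 101 guide']
```

A production version would presumably rank by outcome statistics and fall back to help-doc retrieval for net-new issues, as described above, but the lookup-then-escalate shape is the same.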

These are some of the things that then inform the action-taking part of the platform, alongside the Briefs module that I think we covered at the start with Julia, which is basically where you can program Hook to automatically flag certain things as they happen, such as upsell intel, or maybe to automatically flag value delivered and get the AI to calculate that in the backend. What we probably won’t have time for today is going through the chat interface, where you can get ad hoc answers to all your questions.

The reason I’m going through this quite quickly is that, in a single account here, we’ve got so many different places where AI is detecting risk for you and generating action for you. You can almost think of this a bit like an agent swarm: we’ve got the Briefs module, we’ve got a chat interface that’s crunching numbers, we’ve got intelligent risk detection, we’ve got action detection. We’ve even got machine-learned churn prediction scoring and machine-learned upsell scoring.

We abstract all of that away with the final step, which is Activator and Sidekick, so that you don’t have to look at a hundred million different things, or manually go in and use this whole agent swarm every time to work out what’s going on.

What we’ve recently released is the ability, if I jump in here, to start automating this end-to-end.

So if we jump to create an automation, for example, we might want to say: using all of these agents that are crunching the numbers in the background, when certain criteria are met, such as, I don’t know, a customer enters onboarding, or they haven’t activated SSO yet,

    We might want to start adding an action. In the old world, this would be send a customer an email, or create an alert for a CSM to go do something. In the new world, we can actually start letting the agent take the relevant action.

This agent will have access to all the tools, the whole agent swarm that we spoke about earlier. You can give it a goal, such as help this customer renew, or get them to sign a DPA, or whatever, something that’s important and tied to revenue.

This is the outcome that the agent is owning. You can give it resources to follow, such as templates, onboarding docs, and knowledge base links. You can give it escalation instructions for when to get in touch with you, as well as some rules to follow.

And what this will effectively do is give the agent the ability to call on all of those tools we spoke about earlier, such as briefings, risk detections, and playbooks.

    And what this will end up doing is the agent will do what it thinks is best to help you achieve whatever your goal is here at the top.
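The trigger/goal/resources/escalation shape of such an automation might look roughly like this in code; every field name here is a hypothetical stand-in, not Hook’s actual schema.

```python
# Hypothetical automation record and a tiny decision loop over account
# events. Field names and event strings are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Automation:
    trigger: str                                  # e.g. "entered_onboarding"
    goal: str                                     # the outcome the agent owns
    resources: list[str] = field(default_factory=list)
    escalate_on: list[str] = field(default_factory=list)
    rules: list[str] = field(default_factory=list)

def run(automation: Automation, events: list[str]) -> str:
    """Decide the agent's next move for a stream of account events."""
    if automation.trigger not in events:
        return "idle"                             # criteria not met yet
    for e in events:
        if e in automation.escalate_on:
            return f"escalate to human ({e})"     # hand off, don't improvise
    return f"pursue goal: {automation.goal}"

auto = Automation(
    trigger="entered_onboarding",
    goal="get DPA signed",
    resources=["dpa_template", "onboarding_guide"],
    escalate_on=["churn_threat"],
)
print(run(auto, ["entered_onboarding", "opened_email"]))  # pursue goal: get DPA signed
print(run(auto, ["entered_onboarding", "churn_threat"]))  # escalate to human (churn_threat)
```

The useful property of this shape is that the guardrails (trigger, escalation list, rules) live in data, so a human can review them without reading agent code.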

And so if we jump into an account, like Slack, and we jump into goals,

    You can see that this activator sequence has been activated, and it’s currently working through a few steps. If we jump in and view details, we can see that this agent’s goal is to get some sort of contract signed.

It’s about to send the first step, which is some sort of Gmail integration enablement campaign, and you’ll see that it’s drafted a few next steps. In the old world, this would be static, but because this agent’s got all of the tools that we’ve walked through today at its disposal, if a customer connects Gmail and this step is successful, obviously it doesn’t need to send this follow-up, and it will adapt course automatically.

    If the customer replies to this email and says, hey, we’re actually ready to sign the contract, the agent will adapt course to send the contract, because you’ve given it as a resource.

    If, for example, they reply with a massive objection, like, hey, we’ve not seen any value in this particular contract, and actually we’re going to churn.

It can either escalate this to a CSM, or use one of those briefing modules to draft the next step about delivering value. This can be completely automated end-to-end, or you can jump in and manually approve tasks in a single account, such as hitting send, if you’re not quite comfortable letting the AI do its thing yet. And obviously, this is in a single account.
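The adapt-course behaviour just described (skip steps that are no longer needed, jump ahead on a positive reply, escalate on an objection) reduces to a small decision function. The step and event names below are invented for illustration.

```python
# Toy sketch of adaptive sequencing: pick the next move from the planned
# steps and the account's current state. Names are illustrative only.

def next_step(planned: list[str], state: dict) -> str:
    if state.get("objection"):
        return "escalate to CSM"          # hand risk to a human
    if state.get("ready_to_sign"):
        return "send contract"            # jump ahead: the goal is in reach
    # Otherwise run the next planned step that isn't already satisfied.
    for step in planned:
        if step not in state.get("completed", []):
            return step
    return "goal reached"

steps = ["gmail_enablement_email", "follow_up_email"]
print(next_step(steps, {"completed": []}))                          # gmail_enablement_email
print(next_step(steps, {"completed": ["gmail_enablement_email"]}))  # follow_up_email
print(next_step(steps, {"ready_to_sign": True}))                    # send contract
print(next_step(steps, {"objection": True}))                        # escalate to CSM
```

In practice the `state` flags would come from reply classification rather than hand-set booleans, but the branching order (objection first, opportunity second, plan last) is the interesting part.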

But where this starts to become powerful is seeing, at scale, all of the tasks that are being flagged by the CSM agents, such as scheduling QBRs, having automatically generated messages sent off, and getting details about why these tasks have been flagged.

Eventually, you can select multiple at a time, sending off all of the ones for a certain account, and then, crucially, look back in time at tasks that have been completed to see how effective they were. Are people reading these messages?

Are people logging in more? Are people renewing? And this is really how we think about agents: have tons of agents working in the back end to do individual tasks, but don’t force people to come in and use all of these millions of things if, for example, you’re looking after a scaled book of business.

Instead, be able to jump into settings, automate this, and give the agent that owns the outcome at the top the ability to use whatever it needs to hit certain outcomes, which for our customers is usually reducing churn, improving upsell, or just making the CSM’s day-to-day more efficient by getting rid of all the grunt work. And, yeah, that’s sort of the story of how we’re thinking about agent swarms.

We’ve got about five minutes left at the end. Me and Joe are happy to field any questions that people would love to go through.

    Julia Nimchinski:

That was so comprehensive and futuristic. And on this note, Joe and Sam, obviously an amazing presentation. What would be your advice to all of the folks watching and listening? If we are talking about mid-market and enterprise, what’s the first step to even think about this architecture, since obviously security is a big risk?

    And, I know that you’re… I mean, obviously, the topic of today’s presentation is failures and, you know, the unsuccessful deployments, so… What are your lessons and advice here?

    Sam C:

Yeah, this is actually something we think about a lot. If you remember, in the back end, one of the things that we think is really important is being able to first set criteria before an AI agent is allowed to do something automatically. So a really common way that we see customers mitigating risk in their own customer base is, if we go back in here: for SMB customers, you might feel quite comfortable sending stuff off automatically, but for enterprise customers, for example,

you might want to flip this to consistently create drafts that a human has to go in and approve. And so this is how we think about it: give the human the ability to judge, for important customers or for enterprise and mid-market customers, whether what the AI is outputting is appropriate to send.

Or, when you feel comfortable and you’ve approved enough of these drafts, come in and maybe flip this to auto-send, so you don’t have to go back in and manually approve each time. That’s kind of how we think about it: give the human the autonomy to choose how much free rein you want the AI to have. Does that answer your question, or was it something else?
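That segment-based guardrail (auto-send for SMB, draft-for-approval for enterprise) is essentially a small policy table. This sketch uses made-up segment names and defaults to the safe path when a segment is unknown.

```python
# Illustrative per-segment send policy: auto-send only where the team has
# opted in; everything else becomes a draft for human approval.

POLICY = {"smb": "auto_send", "mid_market": "draft", "enterprise": "draft"}

def dispatch(segment: str, message: str) -> str:
    mode = POLICY.get(segment, "draft")   # unknown segments take the safe path
    if mode == "auto_send":
        return f"sent: {message}"
    return f"queued for human approval: {message}"

print(dispatch("smb", "renewal nudge"))         # sent: renewal nudge
print(dispatch("enterprise", "renewal nudge"))  # queued for human approval: renewal nudge
```

Defaulting to "draft" mirrors the approach Sam describes: autonomy is something you grant per segment after you trust the output, not the starting state.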

    Julia Nimchinski:

    Absolutely, and I’m curious, Sam, on this note, what’s the main reason why, you know, this architecture can fail?

    Sam C:

This is a good question. We like to have as much data as possible, and obviously that isn’t always possible, because not everyone has the approvals to give you all the right sorts of data, and so a lot of the time we’ll walk away from a sale, for example.

Or we’ll go into a relationship with somewhat of a caveat: hey, eventually we’re going to need some more approvals. So I’d say the main thing is just making sure you can give the agents the data that they need to own the outcome that they’re going for.

For example, if you’re not comfortable sharing your product usage data with an agent, then it won’t have the ability to see risk surrounding product usage. Or if you don’t give the agent the ability to see, say, email content, that’s going to be a bit of a blind spot for it. And so, really, we have the ability to categorize all of your data into really key areas.

We turn, say, email content into an account data endpoint that the agent can call on when it’s trying to get more info.

But if you start getting rid of some of these, obviously it’s just going to be way less effective, and you’re going to be much more prone to the AI making mistakes or missing risks.

    And so, really, I’d say it all comes down to, like, what context you’re actually giving the agent.

    Julia Nimchinski:

    Curious your thoughts on the evolution of CSM as a function, and as a role, per se. How do you see, I don’t know, the CSM of the future? Are we firing people, hiring new criteria, evolving?

    What is it?

    Sam C:

Yeah, this is something we also think about a lot. I think Joe can obviously speak to this too, but we don’t necessarily see this as: you buy Hook, then you fire all your CSMs.

It’s really about making each CSM more effective, and taking the busy work away from them, because at the end of the day, your enterprise customer is always going to want to speak to a person. It may be that you can’t afford to put a CSM onto your scaled segment, so you can use Hook to fully manage those accounts, but you’re always going to want a CSM managing those enterprise accounts.

What you don’t want them doing is writing emails all day, or trying to find signals, and so we’re helping get rid of that busy work, so they can focus on doing what humans are good at, which is actually, you know, building relationships and understanding strategy, etc.

    Julia Nimchinski:

    And transitioning into 2026 planning, what is your prediction?

Where are we going in terms of, I don’t know, deployments this year?

    Sam C:

I think that’s definitely a question for Joe, in terms of the forward-looking roadmap.

    Joe K:

    There’s loads. There’s loads we can share, there’s loads we can’t. I think it’s all going to be guided by the ethos that we need to give CSMs more time back to spend time in front of people.

    I think our… our belief with AI, our belief with agents, is that it should free up humans to spend more time with humans, in person, on calls, versus, you know, looking through emails and support tickets, things like that.

    So we’re just going to be continuously building towards that.

    We will probably pop up here again soon and talk through some of that stuff as well, but yeah, that’s, that’s how we’re thinking about it.

    Julia Nimchinski:

Thanks, guys. And what’s the best way to test-drive Hook?

    Joe K:

    Just reach out to me, Sam, any of the team on LinkedIn, and go find our website. We’ve had a whole new website launch as well, so go check that out at hook.co.

    Julia Nimchinski:

    Awesome. Thanks again.
