Text transcript

Agent-Led Customer Experience 2026 — Fireside Chat with Pol Peiffer & Jonathan Kvarfordt

AI Summit held on Dec 9–11
Disclaimer: This transcript was created using AI
  • Julia Nimchinski:
    And we are… transitioning to our next session, welcome to the show. Pol Peiffer, Head of Agent Development at Sierra, and Jonathan Kvarfordt, VP of GTM Strategy and Marketing at Momentum. How are you doing?
    Pol Peiffer:
    Doing great.
    Jonathan Kvarfordt:
    Pol first.
    Julia Nimchinski:
    Yeah.
    Pol Peiffer:
    Great, good to be here. I love the background, I feel like I’m in an actual singularity event.
    Julia Nimchinski:
    singularity.
    Jonathan Kvarfordt:
    And I’m like, whoosh! I like it. I feel like I’m in… what’s that movie with, Oh, shoot, Interstellar, that’s what I feel like I’m in, is Interstellar. Yeah.
    Julia Nimchinski:
    as a design. Before we get started, we start every session with a prediction, one prediction, top prediction for AI and GTM 2026. Both of you.
    Pol Peiffer:
    I’m happy to start. I think my prediction is a lot will change, but probably a lot less will change than some people think. Which means that in all the little things, things will get a lot easier, but some of the predictions that are, like, you know, “some of these jobs will completely go away by next year,” I don’t think that will happen.
    Jonathan Kvarfordt:
    We’re starting to see it already.
    Julia Nimchinski:
    it.
    Jonathan Kvarfordt:
    Yeah, I gotta balance this out. I see two things happening. One is, I see the rise of a more, what’s the word? More formal, either AI Ops, AI strategy, AI something, kind of like, again, titles are great evidence of that, but something more focused on AI across the board, not reporting to IT and not reporting to revenue, or maybe it is, let’s just focus on AI. And second is, I think there’s going to be this great divide that’s going to happen really quickly between the teams who have done their work in ’25 to get there. They’re going to accelerate like crazy next year, and it’s going to be difficult for anyone who has not done the hard work this year to catch up.
    Julia Nimchinski:
    On this note, agent-led customer experience, 2026. Jonathan, take it away.
    Jonathan Kvarfordt:
    Take it away. Okay, you want me to lead this, Julia?
    Julia Nimchinski:
    Yep. Yes?
    Jonathan Kvarfordt:
    Okay, you got it, I’ll do it. Alright, so, Pol, I am super excited to talk to you. So, I would love to know, what is in your purview, for people who don’t know what you do, tell us more, like, what do you do on a day-to-day basis, what’s your responsibility?
    Pol Peiffer:
    Yeah, yeah, happy to. So, I lead our agent development team at Sierra. Sierra, as a company, builds conversational AI agents for enterprises that focus on customer experience, so that is everything from, you know, helping a customer find the right products, and engaging with them on various touchpoints to maybe convert a lead, all the way to, you know, traditional customer support use cases that you might expect, across all the different industries.
    Jonathan Kvarfordt:
    And…
    Pol Peiffer:
    I lead what I call our agent development team, so that’s the team that actually builds these agents with our customers. We work very closely with enterprises, largely focused on large enterprises, to get these things into production safely and, you know, handle lots of awesome different use cases. So I get to spend my entire day thinking about agents and how to make them useful and how to get them into production.
    Jonathan Kvarfordt:
    I love it. I do something… mine’s not voice agents, but at Momentum, we do the same type of thing, but for revenue teams, so all I do all day long is think about agents for revenue teams, from data orchestration points, so it’s kind of fun. Not… my title’s not AI agents, but part of what I do is that, so I just find it fascinating, and I know you’re way smarter than I am, so I’m, like, really ready to geek out with you and tap into your genius, which will be fun. So tell me this, you guys have a lot of good brands on the website of who you’re working with, from Ramp, to Weight Watchers, to Sonos, to DirecTV, a ton of different use cases. Where do you feel like… because the concept is customer experience for us, right? So… I find it fascinating that Klarna… I don’t know if you followed this with Klarna, a year ago or so, they got rid of a bunch of CS people and implemented agents instead. Fast forward 6 months, which is about 6 months ago now, and they had to hire back people because the customers did not like the experience of working with AI agents, for whatever reason, right? So it seems like we’re trying to find this balance between where AI can sit, where humans sit, and the value of what the human-to-human interaction is. Because this is all you do, like, what are you finding with that balance? Do you feel like people are over-rotating too much, or how do you decide that?
    Pol Peiffer:
    Yeah, I think with any change management process, you know, you want to ease into things and kind of naturally find the types of things that AI is good at. I would say, with our platform, what we see today is that there are actually quite rarely bottlenecks in, you know, the complexity of a workflow that even today’s technology can handle. But there’s work to get it there, right? You need to have the agent access all the tools that your humans use today. If you look at the average agent desktop of a human CX employee today, they tend to have lots of different tools, and so, you know, if you launch your agent and expect it to outperform that human without giving it the same tools and capabilities, obviously you will hit a wall. What I also always focus on is, you know, what are the transactional things where, as a customer, I just want to get this done, I don’t want to wait on hold, and I want my problem resolved, versus what are those things that warrant a human touch and a white-glove experience. And so, you know, the way I look at it and talk to our customers on applying AI is, like, how about we take those kind of more tedious things that are repetitive and get those automated? That doesn’t necessarily mean, you know, getting rid of all the humans. You might reduce some OPEX there, but then you use your very best people on those cases where it warrants a white-glove experience. And so I’m not too familiar, of course, with, like, exactly the Klarna situation, I’ve read about the news like everyone else, but, you know, I think it’s important to not over-swing there and kind of, you know, get these things into production, actually measure how they’re doing, and then decide on, you know, where do I rebalance my team.
    Jonathan Kvarfordt:
    I think part of it, too, comes down to the experience, because, like, we don’t know as consumers and buyers a lot of the time if we’re even gonna like the experience. It’s kind of like, you don’t know if you’re gonna like a Waymo until you take a Waymo and think, oh, maybe I like Uber better. You know, you just don’t know. So, do you feel like it takes companies willing to experiment and take that risk, or do you feel like people are trying to be more safe and, again, do more tedious behind-the-scenes type stuff, or what do you see with that?
    Pol Peiffer:
    Yeah, I think you need to take measured risk, right? And I think… you know, we’ll probably talk a little bit about, like, where should you build and spend engineering resources, and where should you buy? You know, of course, from our perspective, we’ve seen a lot by virtue of being out there with our customers, and so we’ve built robust systems to simulate these conversations, to have guardrails around them, and so on, and then to ramp them up slowly. And, to your point at the introduction, I think the companies that take the risk now and experiment and learn are the ones that later on will be well set up to, you know, profit from that. But it should be done in a way that is measured and, you know, thoughtful.

  • Jonathan Kvarfordt:
    Yeah. Well, let’s build on it, because I’m a big fan of the Pareto principle of, like, applying the 20% that makes 80% of the difference, you know? Through the, I’m sure, thousands of AI agents you guys have been building with all these different companies, what is the 20% that’s making the biggest difference for people? What use cases, or what’s being applied?
    Pol Peiffer:
    It will be vastly different across industries, but if you think about, you know, the retail industry, it’s not hard to imagine, and we just came off of Black Friday, Cyber Monday, that, you know, order management and returns, and where is my order, and it got lost, and so on, is clearly, you know, 80% of the volume for, like, 20%, let’s say, of the integrations, right? If you have a very strong integration into your order management system, you can actually solve a lot of problems for customers in a fairly efficient way. And the same is true across aspects of healthcare, where there’s a lot of questions around, you know, benefits and how you might use them, or parts of financial services. The important bit, though, is that all these industries are different and nuanced, and they have their own regulatory concerns and so on. The other thing that I would think about is, I always think about the ingredients. What are the inputs that need to go into the agent to make it great? And one example of that is, you know, grounded, factual knowledge. If you have a fairly, you know, outdated or spotty knowledge base, you can have the best AI agent that can look at that knowledge, but if there’s just a gap in the knowledge itself, it can’t provide a good answer. And so there’s other simple things, like, well, do we actually have good FAQs in place for those common questions? That can be very high leverage, because once they’re there, the agent can reference them at scale.
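The “ingredients” point, that the agent can only be as good as the knowledge it grounds on, can be sketched as a toy retrieval step that answers only from a knowledge base and escalates when there is a gap rather than guessing. The FAQ entries and the escalation message below are invented for illustration, not Sierra’s actual system:

```python
# Toy grounded-answer sketch: respond only from the knowledge base,
# and escalate when the knowledge itself has a gap.
# FAQ content here is hypothetical.

FAQ = {
    "return window": "Items can be returned within 30 days of delivery.",
    "order status": "Track your order from the link in your confirmation email.",
}

def grounded_answer(question: str) -> str:
    for topic, answer in FAQ.items():
        if topic in question.lower():
            return answer  # grounded in the knowledge base
    # Gap in the knowledge itself: no good answer is possible, so escalate.
    return "I don't have that information yet; let me connect you to a teammate."

print(grounded_answer("What is your return window?"))
print(grounded_answer("Do you ship to Iceland?"))
```

The point of the sketch is the second call: a spotty knowledge base produces an honest escalation, not a confident wrong answer.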
    Jonathan Kvarfordt:
    I find it fascinating, because one thing I think people get confused on is what is an agent in general, because I’m sure you’ve seen this, but it seems like agents are kind of on a spectrum, because every time I hear someone talk about agents from a marketing perspective, they always kind of talk about it differently, like, some people call ChatGPT an agent. I’m like, that’s kind of an agent, but not really. And I kind of lean on the Google white paper where they talk about agents, where it says it has to have access to an LLM, has to have access to tools, and has to have orchestration of some kind. Do you… I know you came from Google as a product manager. Do you kind of steer in that same direction, or what do you think? How would you define an agent?
    Pol Peiffer:
    Yeah, I think it’s loosely defined, and I think that’s the same with a lot of these terms in AI. AI is probably the most famous one of all of them. But at its core, right, agent comes from agency, and so, you know, it’s software that can decide autonomously in some way, and I think that is quite important, right? An if-else decision tree is not an agent, right? It is a complex system. It can make, you know, various decisions, but there is not a lot of agency. It is pre-programmed in the sense that, you know, there’s a lot of nodes and there’s decisions, but…
    Jonathan Kvarfordt:
    Yeah.
    Pol Peiffer:
    it is not a system that reasons, and so I think that LLM component, where you’re using the reasoning capabilities of these models to act and decide, is an important one. And then the second one is the agency part, right? Like, are you taking actions? So the systems access, as you mentioned, is important, too. So we define an agent as, yes, it is software that reasons and then autonomously makes, you know, system calls, whether that reads from or writes to a system.
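That definition, software that reasons and then autonomously reads or writes to systems, can be sketched as a decide-act loop. Everything here is hypothetical: the tool names are invented, and the `reason` function is a hard-coded stub standing in for a real LLM call:

```python
# Minimal sketch of an agent loop: a reasoning step decides which tool
# to call, the observation feeds back in, and the loop ends when the
# policy decides it is done. `reason` is a stub in place of an LLM;
# tool names are invented for illustration.

def lookup_order(order_id: str) -> str:
    # Hypothetical read-only system call (e.g. an order-management API).
    return f"Order {order_id} shipped yesterday."

def issue_refund(order_id: str) -> str:
    # Hypothetical write system call, gated behind the agent's decision.
    return f"Refund issued for order {order_id}."

TOOLS = {"lookup_order": lookup_order, "issue_refund": issue_refund}

def reason(history: list) -> tuple:
    # Stub policy: a real agent would ask an LLM to pick the next action.
    if not any(step[0] == "lookup_order" for step in history):
        return ("lookup_order", "A123")
    return ("done", "Your order shipped yesterday.")

def run_agent(user_message: str) -> str:
    history = [("user", user_message)]
    while True:
        action, arg = reason(history)
        if action == "done":
            return arg                    # final answer to the customer
        result = TOOLS[action](arg)       # autonomous system call
        history.append((action, result))  # feed the observation back in

print(run_agent("Where is my order A123?"))
```

The contrast with an if-else tree is that the branch taken at each step is chosen by the reasoning policy over the whole conversation history, not wired in up front.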
    Jonathan Kvarfordt:
    I have this vision, I’ve always been fascinated… ever since my journey in AI, which is probably not as long as yours, but for the last 5 years I’ve been geeking out on AI stuff. I’ve always loved the movie Iron Man to show the difference between Jarvis and Tony, you know? So, like, you have this cool voice agent, he’s not looking at an avatar, it’s a voice agent, and then he’s doing these holographic things and doing all sorts of cool building and blah blah blah, right? So I always kind of leverage that as an example of, like, that’s where things are going. So my first question is, do you see that same future, where you have a voice AI agent, maybe not necessarily an avatar, that has access to different systems, and you do work together, kind of like Tony and Jarvis? Is that a good example of where we can hopefully think we’ll go? What do you think?
    Pol Peiffer:
    Yeah, I think on the consumer side, I can certainly see us heading there, and, you know, voice becoming more of an important interface. Voice has this great property that we all know how to talk, and we all know how to interact with each other over voice. You know, it can be in different languages, and so on, and so it feels natural. Jarvis has the benefit of voice plus visualization, and I think that is the important bit here. You know, if you think about all the things that you do either as a consumer or as a professional on your computer, there’s lots of places where visualizing content is important, and so I’m sure we will figure out, like, new interfaces where maybe voice and visuals interplay. What we are focused on for enterprises is, of course, you know, phone today is a very large channel, and I do think 5 years from now, when you call up a business and it’s not a natural-sounding AI, but, you know, what picks up the phone is, like, press 1 for billing and press 2 for orders, I think you’ll be disappointed. That is the future we’re very rapidly heading toward. But the same opportunities exist there, you know, a landing page of a business, in my opinion, 5 years from now will look quite different. You know, why do I need to go and filter down for trail running shoes in black to see the right product? Like, why can’t I just say that’s what I’m looking for, and so on. I think it’s quite obvious that, like, the interfaces will change and the modalities will change. Maybe we’ll talk to an avatar, maybe we’ll just talk over the phone and the website changes, maybe we type it in. I think there’s a lot of room to play there to figure out neat ideas for, you know, new user interfaces.
    Jonathan Kvarfordt:
    With that world coming, because I… to me personally, and I’d love to hear your thoughts on this, because the technology wasn’t there, for years we’ve leaned on humans to do a lot of different things that AI is now encroaching upon, which I think humans shouldn’t be doing anyway, because now the AI can. But it’s going to come to this world where you have this balance between AI can literally do anything a human can, but you may not want it to because of the experience of whatever product you might be selling, right? How do you help companies, like, enterprises, bigger companies, because I know a lot of people listening are from organizations, either for their own internal teams or for customers? How do you balance that? How do you think, both in the current state and going into the future, about what we give to humans versus where we put the AI agents to make sure it’s a good experience? How do you do that?
    Pol Peiffer:
    Yeah, I think the trade-off is often not as simple. Like, what we see is that the agent can often do as good of a job or better, in particular compared to your average or median human. I think anyone that’s operated a large-scale customer experience operation knows that, you know, the average tenure tends to be quite low, there’s a lot of churn, and so you’re kind of on this, like, training treadmill of humans, and so the fact that you can have a system that is 24-7 and consistently as good as your better humans… is actually vastly outperforming the average human. And so, you know, where do you use humans? I think the question will come down to: AI will handle a lot, and you will want to use your very best humans to handle the trickiest cases, the ones that really need a human touch, and I think those roles will really expand. Where, you know, today, even your very best humans in the call center can only pick up one call at a time and delight one customer at a time, later on, they will handle the hardest cases that, you know, ultimately really drive NPS, and they will be the ones building and teaching the AI, and so the leverage of those roles will go way, way up. And, you know, that frees up capacity to use the brains of those folks in other areas of your business. And so, I’m actually quite excited by, like, the additional leverage, and we’re seeing this where, you know, customers where we have large-scale deployments of Sierra have meaningfully reduced their OPEX, but it’s also created new roles. We have companies that call their team AI architects, and, you know, what that team used to do was write SOPs and train humans, and now they own the agent, and they’re building new journeys, and they’re running experiments, and the role is much more vast. I think it’s a lot more interesting.
There is a quantitative aspect to it, because now you can, like, run experiments, and you can see what is statistically significant, and so on. And there’s constant innovation around, like, how should this agent engage with customers? And so, I think that will be true across all these job functions. They will change, and they will evolve, but I think, ultimately, the experience will be better, because customers don’t need to wait on hold. Your average, you know, consistency will be higher, just because this system is 24-7, and it doesn’t, you know, randomly make mistakes the same way humans do when they’re onboarding, for example. And your very best humans will have, you know, a much more interesting, much more diverse job, and therefore retention will go up, and that will, you know, continue compounding in the business. So… there is change, but I think it’s actually net positive change.
    Jonathan Kvarfordt:
    What do you feel like teams get wrong when they try to go into this world of using a technology like yours? Are they, like, ill-prepared? Is it the data? Is it the connections? Like, where do they get… I don’t know if “wrong” is the right word, so maybe, what are some things they can prepare with to make sure they can really take advantage of the power that AI brings?
    Pol Peiffer:
    Yeah, I wouldn’t say companies get it wrong, because it is hard and it’s new. And I think what we found when we started the company was that having a very accountable team that can work with our customers, which is the team that I lead, has been very important, because, you know, everyone is going through this for the first time in lots of cases, and we aren’t. We have the benefit of having seen that movie before. And so the things that I see are a mix of, you know, technical and organizational challenges, right? There’s the data, and is your knowledge base clean, and, you know, do we actually have APIs to connect to? The technical challenges are all solvable, you know, ultimately it’s just code to be written, or an integration to be figured out. And AI is helping a lot with that, right? It’s easier than ever to get an integration written quickly. And then there’s organizational challenges, which is where we started, you know, well, can we actually put this in front of our customers? We’re a bank, we’re a healthcare company.
    Jonathan Kvarfordt:
    Right.
    Pol Peiffer:
    regulation. And that is just where, you know, you need to work with either a team internally or a company like Sierra that you can trust, that has done this before, that has the right guardrails in place, and go on that journey together, you know, and ramp slowly, and so on. But those are some of the things that I’m seeing. And then there’s the whole spectrum of, you know, maybe not going ambitious enough, and therefore not seeing enough impact, right? If you scope down this agent to only do the flowchart outcomes, well, it can only do the flowchart outcomes, and you can’t have the magical responses. But if you expect it to be, you know, Jarvis on day one, I think you will also get disappointed, and you will not have a good experience. And so, I think, like, operating at the frontier, but not beyond it, is also important.
    Jonathan Kvarfordt:
    How are you guys thinking about the concept of… you could use swarms, you could use AI agent teams, you could use the deep… like, I heard about this startup who made this swarm of deep research agents, this ultra-robust set, you know? Because I think a lot of times people are still getting used to an AI agent that does a very small use case, but then when you look at the whole of what a team of AI agents can do together, like, when you think about the context an AI agent could have around the website and how that could be communicated to another customer success agent that has the same interactions, like, they’re in the same context, right? First off, do you guys do that kind of thing? And then secondly, where do you see that world going when it comes to, like, this web of AI agents interacting with the customer all over the place and having the context of the entire experience?
    Pol Peiffer:
    Yeah, I think that is a lot of what makes building an agent platform or a framework hard: how do you share that context? I like to joke it’s turtles all the way down, and the analogy I always give is, if you go to a website, you land on a landing page, and at the top, you might see, you know, some bar where you have the different subpages, and then you can go, oh, I want to see the products, and then I click into the products, and then I want to see the pricing, and I click into the pricing. I think of these agents similarly, in that, you know, there’s no need to show the customer up front all the pages that are on your web app or on your website. The same way a user navigates in the browser and clicks, you know, you navigate your conversation, and you disclose to the customer more and more. And you can do that with these little sub-agents, right? So you can have a swarm of agents where one is only focused on billing. The way we usually think about it is, like, dynamically loading in and out different contexts and different things that the agent needs to know about, and then having, kind of, supervisors and meta-agents that control that experience. But that’s why I say turtles all the way down. It’s ultimately the same, whether you think of that as a sub-agent that you share context with, or you have a maybe slightly more sophisticated dynamic system where it’s the same agent. The most important thing that we worry about is you want it all to feel, ultimately, human, and what humans are very good at is we can focus on one task without forgetting, you know, why we’re here in the first place.
So, if we’re having a support conversation, you tell me, you know, your Sonos speaker is not working, I can go and, you know, draw on my knowledge to, like, help you debug that, but I still know why you called in in the first place, and I might even remember that you called in last week as well, where you had a similar problem, and I can draw on that, because maybe it’s the same root cause, for example. That is the challenging part, and, you know, what we ultimately try to model. I think anthropomorphizing these problems a little bit is often a good starting point, like, how does a human approach this? Oh, they probably ask first, you know, roughly, have you tried these things? And maybe we should do that as well, and then get a little bit more.
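The supervisor-and-sub-agents pattern described here, where each sub-agent sees only its own context while shared memory keeps the original intent in view, could be sketched roughly like this. The sub-agent names, routing keywords, and memory fields are all invented for illustration:

```python
# Illustrative supervisor/sub-agent sketch: a supervisor routes each
# message to a focused sub-agent, loading in shared memory (the original
# intent, prior contacts) so focus never loses why the customer called.
# All names and routing rules here are hypothetical; a real system would
# use an LLM for routing rather than keyword matching.

SUB_AGENTS = {
    "billing": lambda msg, memory: (
        f"[billing] Handling '{msg}' (original intent: {memory['intent']})"
    ),
    "troubleshooting": lambda msg, memory: (
        f"[troubleshooting] Handling '{msg}' (original intent: {memory['intent']})"
    ),
}

def supervisor(message: str, memory: dict) -> str:
    # Toy routing rule standing in for an LLM-driven decision.
    name = "billing" if "charge" in message or "bill" in message else "troubleshooting"
    # Only the relevant sub-agent runs, but shared memory travels with it.
    return SUB_AGENTS[name](message, memory)

shared_memory = {"intent": "speaker will not connect", "prior_contacts": 1}
print(supervisor("my speaker keeps disconnecting", shared_memory))
print(supervisor("why was I charged twice", shared_memory))
```

Whether this is framed as separate sub-agents or one agent with dynamically swapped context, the shape is the same, which is the “turtles all the way down” point.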
    Jonathan Kvarfordt:
    Right.
    Pol Peiffer:
    And so on, you know.
    Jonathan Kvarfordt:
    Yeah, it seems to me that some of the training data being worthwhile is taking your top customer success conversations and just saying, hey, what are they doing? You know, and just mimicking the same process, making sure it has a database or knowledge base, you know?
    Pol Peiffer:
    That is correct, although I’ll give you one funny anecdote. We ran lots of experiments, and it’s correct, there is very valuable data there. But in one of the first of those experiments that we ran, we were testing the agent that was produced, and in some cases, the agent would just randomly tell the customer it was its birthday, and, you know, it was particularly when the conversation was kind of tense. This was a financial services customer, so it was like, where’s my.
    Jonathan Kvarfordt:
    Yeah.
    Pol Peiffer:
    and so on. And it turns out, when we spoke… we were like, why is it doing this? When you look through the data, well, it turns out there were some CX team members that had just learned that, you know, when there’s these, like, hard conversations, telling the customer calling in that it’s actually your birthday, and, you know, you’re kind of having a hard day anyway, that helps, it makes the customer less angry.
    Jonathan Kvarfordt:
    That’s funny.
    Pol Peiffer:
    So that’s always my anecdote to say, like, yes, but humans are very human in their own ways, and so, like, blindly scaling those things up can also be dangerous.
    Jonathan Kvarfordt:
    Yeah, when we do that kind of thing… I have another use case, I won’t take time to tell it, but it’s very, very similar to that. Which is why I always tell people, like, yes, train on the data, but make sure you analyze the data first, so you know what’s going in there, you know? Well, for those listening, I know, and I can appreciate, they probably would love some practical use cases. So, you as an agent builder in the voice world working with companies, if you could tell people 3 to 5 use cases of very practical places to put AI that have improved the customer experience across all the people you’ve worked with, what would those be? What would you suggest to people?
    Pol Peiffer:
    Yeah, I mentioned some of the retail ones, right? Order management, troubleshooting, general policies. For technical products, these agents are very good at, you know, asking you some questions, and then drawing upon a lot of different guides and product guides to help you troubleshoot. So that’s, for example, what we do with Sonos, if your speaker doesn’t connect, or your ADT home security system, you know, is down, or your SiriusXM radio doesn’t work in your car. In financial services, for example, we do everything from simple, you know, reshipments of cards and disputes and fraud to, you know, more complex journeys where you’re actually having fairly empathetic conversations with the customer about, you know, loan collections and so on. Or, we recently talked a little bit about Rocket Mortgage, where we basically power everything from the home search on Redfin, down to an agent that helps guide you through the mortgage process, which is, you know, very emotional, it’s a high-stakes decision, to pulling credit and, you know, getting you pre-approved for a mortgage. In healthcare, we’re having conversations about benefits, and, you know, how to use them, and how much of your deductible have you used, and explaining what can be complicated terms in simple terms. Then you can go down, you know, the industry list. In telecommunications, we’re negotiating with customers on churn: let’s say I no longer use my international plan, what is the right plan for me to actually be on? Where these agents are not just, you know, saving OPEX, but actually, like, making dynamic offers based on what the customer is telling them, to get them on the best plan, which also can have a revenue impact, which is pretty exciting, because you’re not just saving costs, but you’re actually adding top line. So, very, very vast, I think. Any touchpoint that you have with your customer base today.
I would think about, like, can an AI do this? And I would even go further, and I would say, could an AI do better by having, like, infinite memory and being very consistent, and so on? And the answer is likely yes, and then it becomes an okay-where-do-we-start kind of problem.
    Jonathan Kvarfordt:
    Yeah, I think a lot of times… I realize more and more that a lot of people don’t know what they don’t know, so they don’t know, number one, where AI could be put, and they don’t know if AI would be better, because they’ve never experienced it. You know, because again, for a lot of people who experience AI, sometimes it’s awesome, and sometimes it’s horrible. So, like, I don’t know if this is a great experience. So, hearing from you things that actually work, I think, hopefully gives people ideas of, like, what they could do with different possibilities, you know? Yeah, and I think,
    Pol Peiffer:
    going back to the start of the conversation, I think it’s a time to, you know, think big and experiment, because.
    Jonathan Kvarfordt:
    Yeah. There will certainly be use cases, I think.
    Pol Peiffer:
    this technology, unlike lots of others, is actually one that is pretty horizontal. It’s useful in lots of different places. So, really audit what your customer experience is, what those touchpoints are, from, you know, first engagement, to being a loyal customer, to when they might be having a problem, and how might I help them along that journey? The answer is likely that there are lots of places, and so picking one to start is usually as good a starting point as any.
    Jonathan Kvarfordt:
    I love it. Well, I just want to tell you, thank you for joining, and I want to make sure I’m a time-aware person, I want to make sure we’re good. But I just want to tell you thank you for your thoughts. I might have to have you come join my podcast, I have all these questions I want to ask you now. But overall, that was awesome.
    Pol Peiffer:
    Very happy to do it, and yeah, thanks for having me. I get to spend all day on this stuff, so I love talking about it.
    Jonathan Kvarfordt:
    I love it. Is that okay, Julia? Did we miss anything?

  • Julia Nimchinski:
    Thank you so much, Jonathan. Thank you, Pol. Pol, before we wrap this up, just curious, what’s in the Sierra roadmap? What are you allowed to.
    Pol Peiffer:
    Oh, sure. Yes, absolutely. Well, we build conversational AI agents. You heard me talk a lot about, you know, how we’re trying to do that across the entire customer experience. I think the main thing that we’re focused on is really moving from something that is a conversation, right, I reach out once and I get help, to something that is a relationship. So, how do I have a durable relationship with this customer? That is what we all strive for as a business. We want to have great, durable relationships through the products and services that we offer. And so, really, everything that’s on our roadmap is: we’re building a data platform that will store those memories and can reference other parts of your business so that the AI agent can access it. We’re building tools to make these agents easier to build, so you can have lots more of them in different parts of your business on that shared memory. And we’re, of course, excited by, like, all the advancements in just the reasoning capabilities, the naturalness of, you know, AI voice models, and so on. I think next year is going to be really big. Like, by the end of next year, you’ll call these agents, they will sound human, and you’ll probably need to ask whether they’re an AI or not if they don’t tell you. And I’m just excited by the world where, like, you pick up the phone and somebody greets you by name, and, you know, knows that you called last week, and knows why, and has already looked at, you know, the telemetry data and can help you out immediately. I think it will save a lot of time and create good experiences for customers.
    Jonathan Kvarfordt:
    Love it.
    Julia Nimchinski:
    And how about you, Jonathan? What’s next for Momentum?
    Jonathan Kvarfordt:
    Oh, man. It’s a lot of the same thing. It’s obviously not from a voice perspective, but it’s very much a context-aggregated view of data, informing the individual, whether it’s a CRO or a sales rep, to make sure they know the context, from an aggregate view down to a call-by-call view for us. So, like, it’s kind of the same concept, just a different use case, you know? And then we’re also gonna be diving more into the world of authority with AI agents, like, how much authority we’re gonna give an AI in the process, based on whatever the sales process or exit criteria might be, just experimenting, like, how much does a team trust AI with authority to get things done after the fact without relying on a human, and where’s that boundary? You know, it’s gonna be fun to kind of tap into that.
    Julia Nimchinski:
    Love it. Two companies, truly AI native. Super excited for what you’re building, and that’s a wrap for day one! Join us tomorrow for Day 2, and yeah, we have more phenomenal speakers: CRO predictions with Kyle Boyer, a fireside chat with the CMO of Zoom, and a full agent tech stack panel hosted by Gartner. See you tomorrow. Thanks again. Bye-bye.
