Julia Nimchinski:
Second… And we are live! Welcome back to the AgenticOS Summit, Day 3. We have another amazing lineup of top GTM executives, VCs, and analysts, from every walk of life in GTM, really.
Before we kick things off, just make sure to share your questions, comments, and any feedback on the HSC Slack, and please do check out our sponsors in the right corner of your screen. We’re quite intentional about those amazing GTM agentic offerings.
And I couldn’t be more excited to kick things off today with Philippe Lacour, CRO at Personio. He led GTM systems from $30 million to $3 billion across Envoy, Airbase, and now one of Europe’s fastest-growing platforms. So, Philippe, incredibly, incredibly excited again. How have you been? And we always start things with one question.
What’s on your agenda? What tools are you using? Which tools are you a fan of? We need to know.
Philippe Lacour:
Yeah, hey Julia. First of all, great to be here. I love talking about AI and go-to-market, so I’m very excited to kick off the third day. We use a bunch of tools, some of the regulars, like Salesforce and Gong. But we’ve also started using Clay, Profound, HockeyStack, and a bunch of others.
And we keep looking at new tools, and I’ll talk about one or two more tools that we’re using.
Julia Nimchinski:
Amazing. Let’s get into it.
Philippe Lacour:
Okay, great. So, we’ll talk about the OS for an AI-native go-to-market. As said, my name is Philippe Lacour, and I’m the CRO at Personio. Personio is one of the leading HR and payroll SaaS companies, and we’re all in on becoming AI-first as a company. First of all, our journey at Personio with AI started about a year ago.
We had, in May last year, a big, what we called, AI Search Week. Our founder and CEO, Hanno, kicked it off. We had speakers from OpenAI, Mistral, and many others. Everybody was getting access, all 1,500 people, to LLMs, and so on and so forth. There was huge excitement, and we’re going to have our second Search Week coming up soon.
But then, 4 to 6 weeks later, I noticed that, okay, people were using LLMs. But I was asking myself the question: okay, is this really going to lead to a big transformation?
And this is when we started that Slack channel, mid-June last year, the hashtag AI-powered go-to-market, which is really where our journey in go-to-market started. And what we saw very quickly is that 90% of our go-to-market team started using AI within the first few months. By the way, one year later, we’re still at 90%.
Which is, on one hand, great. However, this is the thing that we realized: adoption does not equal transformation. Adoption is about individuals working faster, reps using AI to craft emails, doing some research.
But that’s not the same as true transformation, where you have an AI-powered go-to-market with an operating system that continues to scale in an efficient way. That’s the big question we’re working on. For real transformation, you really need this go-to-market operating system, and that is the core: okay, how do you build that? What are the lessons learned? We certainly made errors, had lessons learned, and I’ll share a bunch of those with you today. First of all, what is an operating system? Let’s start there.
We think an operating system has three big components. There’s a data layer, there’s a process layer, and there’s an execution layer, as a definition of an OS. First, the data layer, what’s in your data layer? That’s your customer data in Salesforce.
There’s your definition of your ideal customer profile, there’s intent, there’s product usage, there’s also your market, which prospects do you want to go after. There’s a lot of stuff in that data layer, and there’s even other things that I haven’t shown here. Then the process layer.
Those are your sales and post-sales processes, your playbooks, your discount approval matrices, your routing logic for leads, how fast your reps should follow up, and so on. All of that is in your process layer. Mind you, a lot of your processes, or our processes, have been documented.
There’s also processes that are living in the minds of people that have not been documented, and that has implications. Then there’s your execution layer, where you have all the different roles in the go-to-market team. In our case, from marketeers to SDRs, AEs, AMs, CSMs, so there’s a lot of different functions that play in this execution layer.
There are also a lot of functions that are not sitting in direct go-to-market, but are sitting in this layer. For example, Deal Desk. Or even your product team could be sitting in other parts of the organization, but is, in a way, part of this execution layer.
Now, one big consequence is that this system is run by humans, and humans are able to make bridges between the three layers, and also to bridge even when there are gaps in the system, in one of those layers. Humans have so much context that they are able to work through this and get the system to work as a whole.
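The lead-routing logic that lives in this process layer can be sketched roughly like the following; the segments, thresholds, and queue names here are purely illustrative, not Personio’s actual rules:

```python
# Hypothetical sketch of process-layer routing logic for inbound leads.
# All thresholds, regions, and queue names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Lead:
    company_size: int
    country: str
    score: int  # e.g. output of a lead-scoring model, 0-100

def route_lead(lead: Lead) -> str:
    """Return the queue a newly scored lead should land in."""
    if lead.score < 40:
        return "nurture"            # too cold for a rep today
    if lead.company_size >= 1000:
        return "enterprise_ae"      # straight to an enterprise AE
    if lead.country in {"DE", "AT", "CH"}:
        return "dach_sdr"           # regional SDR queue
    return "global_sdr"
```

In a human-run OS, a rep can quietly compensate when a lead falls through these rules; an agent cannot, which is why the routing has to be explicit.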
Now, however, we’re going to shift to the agentic go-to-market operating system. Let’s talk about the data layer. You have some of the usual systems, Salesforce, Snowflake, and now you see some of these new tools popping up, like Clay, and I’ll talk more about how we use that.
Some of those systems are also showing up in the process layer, so Salesforce is not only your data, but there’s also logic in it. Gong the same, Clay the same. And then, on your execution layer, that’s where you see now the agents coming up, so we use Qualified for inbound SDRs.
We use Fin for customer support, and we’ll start talking about how this works. But one consequence is, first of all, in the execution layer, next to humans, you now have agents. The other big consequence that we learned is that these agents work differently from humans.
They cannot handle the things where processes are broken, or where processes are missing. They also struggle where data is missing, and that has tremendous impact on how well these agents work and how efficient they are, and we’re going to talk about that more.
And so, when you make the shift from a traditional go-to-market OS to an agentic OS, you go first from, like, your traditional go-to-market operating system. If you’re advanced, you have all these signals, you have, like, health scores that can identify churn risks or downsell risks. You have, like, advanced lead scoring models.
But you’re not taking action. The humans are taking action. When you start shifting to an agentic go-to-market OS, the agents start acting on these signals, and these signals become like triggers. So the agents start acting on buying signals, they start generating responses to risks that you might have in your customer base.
And they move deals forward in your pipeline when the lead scoring is there. So the layer of actions starts to move partially, or in some functions totally, to the agentic layer. So let’s start sharing some of those lessons on what it then takes to make that happen. The theory is easy, practice is a little bit harder.
Philippe Lacour:
And the number one lesson is back to the data layer: data is the ceiling. You can also say your AI is only as good as your data, but we have really seen the effort that goes into the data layer. I think probably one-third of becoming an agentic go-to-market organization is all about your data.
One-third of all your resources, all your time, all your effort is actually spent on the data part. And this is our journey. Our starting point was that we had a full Salesforce instance, but our data quality was not very good, and we’re talking about a year, a year and a half ago.
We figured out that one-third of our data were duplicates, which is a lot. When you think about the impact that has on how you run the organization, it’s tremendous. So we deduped all our data. But we could have done it once; to become an operating system, we actually bought software that does continuous deduping, where you do continuous cleaning with every new data point that comes in.
So it goes from a one-off, human-generated, high-effort action to a continuous operating system. The second thing is, we invested quite a lot in our third-party databases.
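Continuous deduping of this kind amounts to normalizing each incoming record and merging it against an existing key at ingest time, instead of running a one-off batch cleanup. A minimal sketch, with invented field names:

```python
# Sketch of continuous deduping: every incoming account record is keyed on a
# normalized domain and merged into an existing record if one is found.
def normalize_domain(email_or_domain: str) -> str:
    """Reduce 'billing@Acme.com' or 'www.Acme.com' to 'acme.com'."""
    d = email_or_domain.split("@")[-1].strip().lower()
    return d.removeprefix("www.")

class AccountStore:
    def __init__(self):
        self._by_domain: dict[str, dict] = {}

    def upsert(self, record: dict) -> str:
        key = normalize_domain(record["domain"])
        if key in self._by_domain:
            # merge new fields into the surviving record instead of duplicating
            self._by_domain[key].update(record)
            return "merged"
        self._by_domain[key] = dict(record)
        return "created"
```

Purchased dedupe software does far more (fuzzy name matching, survivorship rules), but the shape is the same: a check on every write, not a periodic purge.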
We also canceled a couple, like Dun & Bradstreet, but we expanded our investment in good data, in Lucia, and a bunch of others, and we keep acquiring databases that we think are great and that complement our existing data. We have everything in Snowflake, so Snowflake is also very important to us.
Then we said, okay, we now defined and sharpened our ICP. We also used LLMs for that. And then we scored all our prospects and all our accounts A, B, C, D, depending on how great a fit they are. That in itself was a lot of work. The first model we released was not very good; then we released a second model that’s already better.
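An A-to-D fit grade like this boils down to summing a few weighted fit features and bucketing the total. The features and weights below are invented for illustration, not Personio’s actual model:

```python
# Hedged sketch of an A/B/C/D account-fit grade. Features, weights, and the
# size band are hypothetical illustrations of how such a model could bucket
# accounts by ICP fit.
def fit_grade(employees: int, has_hr_team: bool, region_fit: bool) -> str:
    score = 0
    score += 2 if 50 <= employees <= 2000 else 0   # sweet-spot size band
    score += 1 if has_hr_team else 0                # buying center exists
    score += 1 if region_fit else 0                 # serviceable region
    return {4: "A", 3: "B", 2: "C"}.get(score, "D")
```

Iterating from a first to a second model then mostly means tuning which features go in and where the grade boundaries sit, based on rep feedback.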
Now we got to the point where we did two POCs with Clay. We’re rolling Clay out now, and we’re using it to do more data enrichment, and also to go from an account layer to a contact layer. Ultimately, for our accounts, we want to have 4 or 5 personas.
We want to have an HR director, an HR manager, a CFO, a payroll manager, and ideally you have perfect information on all these personas to run effective campaigns. And then intent scores, of course, which we actually built internally, and this is where I very much believe in going after signal to generate pipeline.
So, do you see a trigger where a former user of your product goes to a new company? We, for example, know that whenever a payroll manager leaves, that could be a signal. So, what are the unique signals for your company? And again, we went through multiple iterations where the first model was not very good.
The salespeople didn’t like it, and then you immediately have an adoption problem, so you’ve got to make your models much better. Now we’re on the third model, and it starts to work really, really well. The point here is that you’ve got to treat your data layer as infrastructure for your agentic go-to-market.
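The job-change signal described here, a known user surfacing at a new company, can be sketched as a diff between your known-user records and fresh profile data. Everything below, including the suggested play, is hypothetical:

```python
# Illustrative job-change signal: compare the last known company of former
# users (e.g. payroll managers) against freshly enriched profile data and
# emit a pipeline trigger when someone has moved.
def job_change_signals(known_users: dict[str, str],
                       fresh_profiles: dict[str, str]) -> list[dict]:
    """known_users: person -> last known company.
    fresh_profiles: person -> current company from enrichment."""
    signals = []
    for person, old_company in known_users.items():
        new_company = fresh_profiles.get(person)
        if new_company and new_company != old_company:
            signals.append({"person": person,
                            "moved_to": new_company,
                            "play": "warm-intro outreach"})
    return signals
```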
It’s not a one-off project; it’s ongoing effort that needs to go in there. The other big data part is customer conversations, obviously an important data source for go-to-market. We went big on Gong, and we chose Gong because of their conversational intelligence.
We built a GPT on top of it, and what it does is run analysis continuously on the thousands of calls that are happening, and we keep adding calls every day. We’ve been able to go really deep on win reasons, loss reasons, competitive insights. It also changed something else: whenever we wanted to build a customer feedback loop with our product teams, we would use data from Salesforce. But those were very often manually entered by reps, so you miss data and the information is incomplete. There’s more judgment in there. Now you can go back to your product organization and say: here are 5,000 customer calls, and we ran the data on those.
Being very data-driven about where we’re missing features, or where we’re winning because of a feature, is very powerful. It gives you much more credibility with your product organization, and it really can feed your product roadmap. So, I think this is a very valuable exercise to do.
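The aggregation behind a win/loss report like this is simple once each call is tagged. In practice the tagging would be an LLM pass per call, as with the Gong-based GPT described above; here a keyword stub stands in so the aggregation logic is runnable. All categories and phrases are invented:

```python
# Sketch of aggregating loss reasons across call snippets. tag_call() is a
# keyword stub standing in for a per-call LLM classification step.
from collections import Counter

LOSS_KEYWORDS = {
    "pricing": ["too expensive", "budget"],
    "missing_feature": ["doesn't support", "missing"],
    "competitor": ["went with", "competitor"],
}

def tag_call(snippet: str) -> list[str]:
    """Return the loss-reason tags found in one call snippet."""
    text = snippet.lower()
    return [reason for reason, kws in LOSS_KEYWORDS.items()
            if any(kw in text for kw in kws)]

def loss_report(snippets: list[str]) -> Counter:
    """Count loss reasons across many calls for a product-team report."""
    report = Counter()
    for s in snippets:
        report.update(tag_call(s))
    return report
```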
We can still get better at recording all the calls, but this has given us very rich information. Okay, then, shifting gears to the next bit. One learning, one year in, is that buying tools is really easy. Building systems is harder.
And the really hard part, ironically, is not the AI or the tech, it’s how you operate. That’s much, much harder. So, one lesson: if you want to go from experimentation to transformation, you need experimentation, you need bottom-up input. The people who work with these processes day-to-day understand best where the friction is, and they can help solve issues like "I cannot get access to data" or "I need to hand this over." But for big transformation, you also need a top-down approach. And why is that?
It needs prioritization of resources, it needs budget; sometimes you’ve got to buy tools, sometimes you need to invest in more tokens, you need to invest in headcount. Or decisions like: hey, I’m not going to hire the next rep or account manager, I’m going to hire a go-to-market engineer instead.
For those things, you also need top-down guidance and alignment with your strategy. It should also not be only top-down, because then you miss all the ideas and improvements that ICs and local managers can drive. You really have to have both. The other thing is that, from day one, we took a very cross-functional approach. We’re blessed to have a data and systems team that runs Snowflake, that runs the big instances, Amazon Bedrock, and so on. They also help with setting up good data structures and do data engineering. And then we have our RevOps team.
The RevOps team builds the bridge between the data and systems team and the business. The business is then sales, customer success, account management, all your go-to-market functions. And within RevOps, we now have these go-to-market engineers.
We have a couple, and they’re really helpful in building this bridge, and they own the big change initiatives that drive that transformation. We have seen projects where we didn’t have great coverage from sales or customer success. Then you start building the wrong thing. You build things that don’t work. You build models that don’t work.
So we know now, for every agent, for every bigger project, we need a dedicated owner in account management or in customer success, who builds that continuous feedback loop and says, hey, this is what we need, this is how it’s working, let’s do V2, V3, and you get to that fast experimentation.
We also know that without go-to-market engineers, it’s harder to bolt everything together and solve the system issues that are real, and where go-to-market was often blocked in the past because it did not have that capability. So, I really advocate having this combination in every big initiative that you drive.
We also have this AI-powered working group, it’s become very popular. There’s now about 20 people in there. I meet with them every week or every other week, where we go through the projects.
We had a meeting earlier today, and it really drives things; it’s almost like a product roadmap for go-to-market, where you drive releases in your AI-powered go-to-market, and it’s really fun to do. Then we have two frameworks for prioritization. The first is jobs to be done.
So we have go-to-market engineers shadowing SDRs or account managers for two weeks, and then they figure out: hey, you need to go into these seven systems to get your job done, you lose a couple of hours per day here. Okay, why don’t we solve that completely, or really reimagine the process?
We then started having so many projects that we lost track. We said, okay, we need some context. So we started plotting everything onto the customer journey map, which is very helpful for an overview of where you are and where you want to go, and then the final stage will be to get to a full flywheel.
And with the flywheel, you also need to map, okay, where the gaps still are from customer journey to flywheel to get you to a full agentic OS, but we are on our way there. That’s number two.
Philippe Lacour:
Lesson number three, the moment you’ve been waiting for, let’s talk about agents. You see, it’s only the third part.
And I also want to talk about assistants and what the difference is. But the lessons learned are: you need a ton of oversight to get them to work well, and you need a ton of context. Those are really, really big things. So, let me talk first about assistants and agents.
I think when I read all the LinkedIn posts, people sometimes use them interchangeably. For me, the difference is that you have batch processes, and you have continuous processes. A batch process is started by a human, and that’s where you use an assistant, that the trigger comes from a human.
And an agent is running continuously; the trigger is coming from a system, or they’re always on. You probably need both of these in your go-to-market. I don’t think everything is agents. There are many human-led processes where you still need these assistants. And so, let’s start there. We built a number of those.
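The batch-versus-continuous distinction can be shown in a few lines: an assistant runs once when a human invokes it, while an agent subscribes to system events and reacts as they arrive. The names and event shapes below are illustrative:

```python
# Minimal sketch of the assistant/agent distinction. An assistant is a
# batch function a human calls; an agent is always on, reacting to events
# pushed by a system.
def research_assistant(account: str) -> str:
    # batch: a human asks for one account brief, gets one result back
    return f"brief for {account}"

class InboundAgent:
    # continuous: the system pushes events, the agent decides and acts
    def __init__(self):
        self.handled: list[str] = []

    def on_event(self, event: dict) -> str:
        action = "book_demo" if event["intent"] == "demo" else "answer_question"
        self.handled.append(action)
        return action
```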
On the left, you see two of our research assistants. One is for expansion BDRs that identify cross-sell opportunities. They were literally spending, per person, 2 hours a day doing account research, digging into: is the account healthy? Is there an opportunity? Where are they in the contract? Which modules are they using?
And we brought that down with an assistant to 15 minutes a day. So that’s real, hardcore ROI. Same for account management. You could ask, okay, why is it not the same assistant? They’re all tailor-made. They’re tailor-made for the job to be done.
So the account managers need to do renewals, they also do, like, cross-sell quite often at the point of renewal. So you need different data, you use different propensity models, different signals, and it’s tailor-made to that job to be done and to that function.
But again, for both of those roles, we went from 2 hours to 15 minutes a day. Then it also extends to customer experience, and a famous one is handoffs. Every go-to-market team loves handoffs: new logos to implementation, implementation back to customer success, or to account management.
You have all these different functions, and I think this is a difference when you have a 10-person AI native startup, or a 500-person go-to-market team, where, yeah, the complexity is just a lot higher, there’s multiple segments, multiple countries, so you have more handoffs.
Here, we also built a system to bring all this information together, tailor-made to what, for example, an implementation team needs. This drives customer experience, this drives productivity, and I think you can make a lot of improvements here. And we actually have more than 40 assistants now.
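A handoff packet like the one described, pulling only what the receiving team needs and flagging gaps instead of passing them along silently, might look like this; the field names are invented for illustration:

```python
# Sketch of a tailor-made handoff packet for an implementation team: select
# only the fields that team needs, and list which ones are missing so the
# gap is visible rather than discovered mid-implementation.
IMPLEMENTATION_FIELDS = ["modules_sold", "go_live_date",
                         "admin_contact", "payroll_country"]

def handoff_packet(opportunity: dict) -> dict:
    packet = {f: opportunity.get(f) for f in IMPLEMENTATION_FIELDS}
    packet["missing"] = [f for f in IMPLEMENTATION_FIELDS
                         if opportunity.get(f) is None]
    return packet
```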
The best ones have been used more than 50,000 times, so a lot. Next to that, we now have two agents: our inbound AI SDR and our customer support agent. The inbound AI SDR is from Qualified. We call her Nia; she’s on our website.
And initially, we wanted her to do only meeting booking for demos, but now she’s doing true qualification. And we’re booking hundreds of meetings per month that are pre-qualified by Nia, so it’s become a huge, huge point for us. For customer support, we use Fin.
You can also use Sierra, or Decagon, or Zendesk now, but we use Fin. Fin now solves a lot of our tickets, so a high proportion of our simpler tickets are being resolved by Fin. Now, one insight that we had: we thought that one would be for inbound pipeline, and the other for customer support requests.
I think it’s now different. The Qualified agent actually covers our website, so it’s a service: we also get customer support requests via our website, not only demo requests. Similarly, Fin is in our platform, and it doesn’t only get customer support requests, it also gets pricing requests.
So, it’s more like: which service are we covering with which agent? And when you think about how many agents you need over time, I don’t think you need 30, 50, 100. I think they start converging over time, and you start covering big services for your customers, but also within your go-to-market organization.
So this convergence, and also the coordination and handoffs between these agents, is becoming more and more important. And how do you optimize between having many specialized agents versus a few that can do multiple things? We’re definitely on a learning journey there.
A few other things on agents. You might have seen this article about the context graph. It was written by Jaya Gupta at Foundation Capital.
They said, yeah, agents need context, and context is not only your Salesforce data. It’s also the decisions in Slack about the discount that you gave to this customer for that exception, or this thing that you did on enablement. The more context you can give to these agents, the better they work.
And I’d say this is where the fun begins, because we definitely learned a bunch of lessons there. When we launched our inbound AI SDR, I’ll start with what didn’t work. We launched it, it was quick, it worked well, we trained it on our website, okay, all great.
Then we saw very quickly that the performance degraded, because the training was too light. We thought, hey, let’s throw 100 slide decks, or our entire Highspot instance, at the agent, and the outputs actually got worse. That’s what happens whenever you use materials that are not fit for purpose. You can throw your entire pricing strategy at the agent, but when a customer asks whether your price is higher or lower than some point, your agent cannot answer, because the materials are not fit for purpose. So we had to rewrite a whole bunch of these materials, and this was a whole journey where we had to really rebuild things to train our agent.
Also, some of the agent functionality is really immature. And so, one of the big things that we learned is that we needed a dedicated person accountable for the performance of the agent. So we have one person that looks at the agent every day, checks every answer, and goes through that iteration every day.
We don’t do any bigger AI project anymore without a dedicated business person training it and looking at the output every day. You also need guardrails, to say: this is what you can and cannot do. So, there are a lot of lessons learned on training the agents.
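Guardrails of the "this is what you can and cannot do" kind often take the shape of a check on every drafted reply before it goes out, with anything outside bounds escalated to the human who reviews the agent daily. A hedged sketch; the blocked topics and length limit are invented:

```python
# Illustrative guardrail wrapper: inspect a drafted agent reply against
# simple do/don't rules and either send it or escalate to a human reviewer.
BLOCKED_TOPICS = ["legal advice", "discount above"]

def guarded_reply(draft: str) -> tuple[str, str]:
    """Return ('send', reply) or ('escalate', reason)."""
    lowered = draft.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return ("escalate", f"blocked topic: {topic}")
    if len(draft) > 1200:  # keep answers short and reviewable
        return ("escalate", "reply too long")
    return ("send", draft)
```

Real guardrail layers are richer (policy models, PII filters), but the pattern of a gate between the agent's draft and the customer is the same.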
I’ve got to say, on our inbound agent, we are 6 months in now, and it’s pretty good, but that’s 6 months of almost day-to-day work. You also see that the models and the agents are getting better. But before you go, hey, here are 10 agents: it’s going to take much more effort, at least it did for us, to get to very high performance.
Once you do it, it’s really worthwhile. We did see 100% productivity gains in some areas because we do this now. But it needs the effort. Then, taking a step back on the overall operating system: what didn’t work is, as I said, the under-trained agents. What didn’t work is bottom-up experimentation only.
We also originally built a lot of the agents ourselves. Now, the common wisdom says it’s 90-10: 90% buy, 10% build. I think it’s maybe 70-30, because with tools like Claude Cowork, or with enough context, you can start solving a lot of your internal processes, so I think there will be a lot of build work ourselves.
I also think people assume the ROI is a quick fix. If your board asks, hey, are you taking out all your sales reps now? I don’t think that’s the case in a large-scale go-to-market organization. Look at the recent announcement from OpenAI: they are now massively scaling up their hiring of AEs. Okay, if even OpenAI is doing that.
I think at large scale, you still need a lot of humans. The question for us, for example, is: can we go from half a billion to a billion in ARR with the same headcount? That’s the big question. But what worked is bringing your people along on the journey. Have a cross-functional team from day one. Yeah, your data, I spoke about that.
You’ve got to have KPIs; all those things are important, and without them it really doesn’t work. Do we have a full flywheel that is fully autonomous now? No, not yet. AI natives talk about it, maybe they’re there, I don’t know. But among companies at scale, I don’t know any who’s fully there. Are we getting there?
Yeah, we’re definitely on our path. We have built a lot of things, and the flywheel is starting to work. Ask me again in half a year, in a year; I’ll come back in a year.
But I think the nirvana is to get to the flywheel where every action, every trigger, be it from an agent or be it from a human, strengthens the intelligence and makes the engine more efficient. That’s the ultimate flywheel. We’re on our way, but it takes time to get there, but we’re striving to ultimately get there. So, closing thoughts.
Still, most go-to-market organizations are running on human-powered infrastructure, dressed up with AI features. I think humans will continue to play a big role. The approach is definitely changing. The ones that will win are not sprinkling AI on top. That’s, by the way, the same for products. You gotta overhaul your products.
It’s the same for go-to-market. You’ve got to overhaul your go-to-market and really strive for transformation. Humans will have a big role in judgment, training, and the decisions. ROI isn’t instant, but it does show up. Sometimes you see 100% productivity gains, sometimes you see retention go up 2 or 3 points, or pipeline quality improving.
And it makes people’s lives easier and more productive. I wake up every day and think I’m behind, I’ve got to learn faster, we need to learn faster. The learnings definitely compound over time, so lean in, spend time on it, and you’ll build a great AI-powered, agentic go-to-market.
Julia Nimchinski:
What an incredible way to start the day. Thank you so much, Philippe. I have millions of questions, but unfortunately, we are out of time. The one that I would really love to address: you mentioned you have one person who is overseeing all of the agents. What’s their role? Is it an AI architect, or…
Philippe Lacour:
Yeah, we call them a go-to-market engineer. But we also have a person in the business, one of the SDR leaders, who is now accountable for doing that. So it’s always a go-to-market engineer plus a business person.
Julia Nimchinski:
This is amazing. Thank you so much again. And what’s the best way for our community to support you? What’s the best next step? How can they engage with Personio?
Philippe Lacour:
Yeah, just hit me up on LinkedIn, and I’d be happy to help and see what we can do for you. Thank you. Excited to be part of the community. Thank you for having me.
Julia Nimchinski:
We’re excited to have you.