

    AI Summit








    Text transcript

    Compensation Structure

    Executive Roundtable held on November 13, 2024
    Disclaimer: This transcript was created using AI
    • Julia Nimchinski [ 02:02:43 ] And I think we found our new slogan, Regeneration AI. I like it. Cool. We are transitioning to our next panel. We have a unique subject, but it’s an ultra-relevant one when it comes to AI application: compensation structure. Welcome, Erik Charles. Thank you very much. Oops. I love when you click a button and you almost think you’ve lost one of your screens, you know, you’re closing out a screen. So I’m trying to make sure I can see everything. Audio working? Perfect. All right. Fantastic. So, thank you for bringing me on. Yeah, I can see the panel slowly coming in here. I’ll just do a quick introduction. My name is Erik Charles. I’m the former chief marketing officer at Xactly Corporation, which people know for managing incentive compensation.

    • Erik Charles [ 02:03:37 ] Nowadays, I’m working as the fractional CMO at DispatchTrack, but I also still do a lot of advising work with startups and big companies through SalesGlobe and Ohana Operators. And it’s been a fun path here. We all know the AI game, or we think we know it, or we pretend to know it, or we’re trying to figure out how to do it. But I think it’s that rubber-hits-the-road thing. Earlier I saw a LinkedIn post, a wrap-up from yesterday’s sessions here, with this comment: ‘we still don’t have that self-driving Uber.’ I’m like, hold it, we do. I take Waymo in San Francisco. I love it. First time I did it, I was a little paranoid.

      Erik Charles [ 02:04:15 ] This car pulls up, there’s nobody in it. It has my initials on this little rotating disco ball at the top. I get in the car, and it did a better job of driving me through the back streets of San Francisco to get out to the Sunset District than anytime I’m behind the wheel, I’ll freely admit it. I mean, it was a smooth start, and I’ve started using a few other things. So we are there. AI is more involved than anyone wants to admit. And so for today’s panel, let’s talk about another application. Cars is easy; that’s kind of the default right now. It’s this whole world of compensation, because there are still humans at the company, there are still people you need to pay to do things.

      Erik Charles [ 02:05:00 ] I have seen, in my career, some of the absolute worst compensation plans ever written, and a real lack of understanding around equity. And by equity, I mean both fair pay practices and the use of equity in Silicon Valley, let’s face it. So before we start, or as a way of starting, I’m going to call out each panelist: one sentence about who you are, and then one sentence about the weirdest way you have used artificial intelligence so far, since this became available. And I’ll even share mine: I had it write a poem for my wife for Valentine’s Day. Her response was, ‘This isn’t your work. This is garbage.’ She’s like, ‘You do a better job.’ So I felt really good and also really bad for being lazy. So Jonathan, introduce yourself, please.

      Jonathan Kvarfordt [ 02:05:47 ] And your weird AI story. Oh, goodness gracious. Well, good to be here with everyone else. Excited to be here. So my name is Jonathan Kvarfordt. Some people call me coach. I’m the founder of GTM AI Academy and advisor to five or six startups, and I love this whole conversation. My past history was in operations and enablement, so this topic of compensation is near and dear to my heart, both as an ex-salesperson and as an ops-and-enablement guy. Weirdest thing? Probably trying to see what I’d look like with different shades of hair. That, or making chicken Alfredo with it, which actually did a really good job. Wasn’t weird, but it was really good. So how about that? I love it. Jenn. Yeah, thanks.

      Jenn Steele [ 02:06:30 ] Hi, I’m Jenn Steele. I am CEO and co-founder of Sound GTM. We’re a referrals marketplace. I grew up through being a CMO, so I have a lot in common with y’all. And the weirdest AI? See, I’m not that creative. I have used it to make lazy posts for people in my marketplace so that they could help us get the word out. And those got weird, but it wasn’t a weird use so much as... I kind of lost it momentarily. All right, Jason. Hi, Jason Napieralski. I am ex-Amazon, ex-Google, ex-Oracle. Now working with smaller companies that are more sane, as a consultant and a chief revenue officer, and go-to-market overall, I guess.

      Jason Napieralski [ 02:07:20 ] And I guess the weirdest thing that I did was I had AI generate a picture for a regular customer, and it was a regular-size elephant riding a regular-size monkey. So I did that. I love it. All right, Blake. Hey, I’m Blake Williams, CEO of Demandy, and we help people scale with their existing networks. I think the weirdest thing that I’ve done with AI so far is have it write a song, or sing a song, in my singing voice, which is fun. All right, Karen. Yeah, I’m Karen Sage, CMO at Locally, which does retail. Probably the funniest thing I did: I got tired of answering questions.

      Karen Sage [ 02:08:10 ] So I created a CMO bot for myself, and it was too nice. It was, you know, totally unlike me. So I created a snarky CMO profile so that they could really get the real Karen when they approached it. Okay. I think that teaching the AI to speak in our voice is one of the key things. A company I advise, Change Engine, automates HR messages like email, and that’s one of the biggest things they have to teach it. So if you want the CEO to say ‘happy birthday’, it has to say it the way your CEO would say ‘happy birthday’. And the funny thing about it is, with a couple of CEOs I’ve worked for, or even myself, I’m like, ‘I don’t know if I really want to teach the AI that voice.’ But that’s okay.

    • Erik Charles [ 02:08:54 ] I like the idea. All right. Well, Jonathan, I’m actually gonna come back to you to start, but I want everybody to jump in, only because you mentioned your ops background. Yeah, one of the fun things in compensation is, and it’s funny, I’ve done projects where people said, can you design the sales plans for a company of this size in this industry? And here are the titles we have. So I’ll do some work for them. They’ll come back and say, well, except this person is really SMB-ish with a side order of customer success. And all of a sudden I’m like, look, I’ll do it, but you realize how much you’re spending on my billable rate, my hourly rate, to keep on doing this. Which brings me to the question.

      Erik Charles [ 02:09:34 ] All right, Jonathan, would you trust an AI to write commission plans as individualized as a company wants? Do you think it’s ready? Hmm. This is a good question, because it kind of goes along with one I asked a CFO: would you trust an AI to help you write a 10-K and publish it? The answer’s usually no. So it’s not quite there yet, but I do think it’d give you some interesting insights into what you could do. I come from a heavier enablement-ops background, where I view compensation or bonus structure as an enabler of behavior. I’m trying to look for how I help a team collaborate together, and so I want to compensate for that.

      Jonathan Kvarfordt [ 02:10:17 ] So, you know, I’ve been through it too many times, where I’ve had to redo compensation because it enabled the wrong behavior. So if I can have an AI give me insights on how to avoid that, then bring it on: whatever I can do to make that happen more often, or bring in the insights I wouldn’t have had before. Because one thing I love about AI is it gives me insights into things.

      Jonathan Kvarfordt [ 02:10:35 ] I have a very narrow point of view consciously, and AI opens up my field of vision so I can see other points of view or other possibilities I may not have had. I can even run different scenarios, where I could go in and say, here’s the personality or culture of my team, give me three or four scenarios I can go through hypothetically with specific numbers, and really figure out: what does that look like profit-wise or behavior-wise or on any other factors? Raise your hand if you want. By the way, I should have said this at the beginning: hit the raise-hand button, or if not, I’ll call people out. You know, I do teach sometimes, and I can be really obnoxious that way. Yeah. How about AI testing the plans?

      Erik Charles [ 02:11:16 ] I mean, at Xactly, which I mentioned, the CEO, Chris Cabrera, wrote a book years ago called ‘Game the Plan’. It was all about how reps will get an incentive plan and try to manipulate it. And one of the things I love to do whenever I design a plan is think, okay, if I just want to bank cash and not care about anything else, what could I do to make the most money just for myself? All right, Blake. Yeah. So one cool way that I played with AI is ultimately around go-to-market forecasting. And what I love about it is that you can feed it the ceilings and the floors on a wide variety of variables, almost anything that could be, you know, imagined.

    • Blake Williams [ 02:12:02 ] And then you can allow it to randomize what might occur based on the ceilings and floors of benchmarks that you know to be true just from operating, right? That you wouldn’t perform any lower than this, and it would probably be irrational, or amazing, if you performed better than that, and not allow it to go beyond that. And then run that scenario 5,000 times, see where that actually ends up, print the regression line, and then see what those next additional tasks would be to shift that regression line further north, up the line. So I think the scenario-planning instance is fantastic, if you know where you’re at for sure, and then you can make some better assumptions about what those floors and ceilings are. I like that.
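The floors-and-ceilings scenario run Blake describes can be sketched as a small Monte Carlo loop. This is an illustrative sketch only, not anything the panel built: the driver names, the bounds, and the bookings formula are all assumptions for the example.

```python
import random
from statistics import mean

# Illustrative floors and ceilings per driver; real values would come
# from your own operating benchmarks.
BOUNDS = {
    "win_rate": (0.15, 0.35),         # share of qualified opps won
    "avg_deal_usd": (20_000, 60_000),
    "qualified_opps": (80, 140),      # per quarter
}

def simulate_quarter(rng):
    """Draw each driver uniformly between its floor and ceiling."""
    draw = {k: rng.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}
    bookings = draw["win_rate"] * draw["avg_deal_usd"] * draw["qualified_opps"]
    return draw["win_rate"], bookings

def regression_line(points):
    """Ordinary least-squares fit of bookings against win rate."""
    xs = [x for x, _ in points]
    mx, my = mean(xs), mean(y for _, y in points)
    slope = sum((x - mx) * (y - my) for x, y in points) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

rng = random.Random(7)
points = [simulate_quarter(rng) for _ in range(5_000)]
slope, intercept = regression_line(points)
```

Each of the 5,000 runs stays inside the stated floors and ceilings, and the fitted line shows how sensitive bookings are to one driver across the scenarios, which is the "shift the regression line north" exercise.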

      Erik Charles [ 02:12:48 ] We’ll come back to the data side. I saw Karen and Jenn both had their hands up. So, Karen, how about you? So one of the things that we don’t talk a lot about is, you know, when you talked about compensation and sort of incenting things and teamwork: the ability to use AI, and basically incenting the use of AI, in everything we do. So I think, like in SDR, for example, it’s kind of built in, because with the number of opps, et cetera, you’re going to use whatever’s to your advantage. But for some of the teams, it’s not so value-based a compensation. I think we need to encourage more of that, to say, hey, go ahead, it’s not cheating if you use ChatGPT to come up with your ideal profile or if your conversion rates are higher.

      Karen Sage [ 02:13:36 ] So actually building that into our comp, I think, is a really important aspect. All right. So, Karen, I want to ask, how would you do that? So I’ve got an SDR, you know, and I’ve told them to use it. I’m like, guys, why do you keep coming back asking me how to message this? Did you ask anyone? You don’t even have to pay for ChatGPT or Gemini or some of these others; even entry-level Perplexity will do it for you a bit, you know. So how would you incentivize an SDR to actually use a free tool, if they should be? So are you asking me? Yeah. Yeah, yeah. I tried to go a little deeper on that. Sorry. Yeah. Absolutely.

      Karen Sage [ 02:14:19 ] I would take the value-based approach, and I would say, if you engaged AI to basically make this happen, brownie points. And, you know, if your deal, your opportunity, was created with AI and I can put attribution on it, it wrote the copy for some of your cold emails or basically made the phone call for you, then no matter how it was used, I’d actually use an accelerator in the formula for how I comped. Okay. No, I love it. I love it. Jenn? Okay. I’m not sure I would comp on using AI, the reason being that AI is great until it loses its mind. AI is horrible at following rules, and that’s one thing to remember. Unlike somebody that you just tell, this is the structure of our email.

      Jenn Steele [ 02:15:14 ] Five emails in, AI is going to forget that you ever did that. Then you say, what did you do wrong? And then it tells you. And asking an SDR to spend that amount of time, like, figuring it out... I think I would more comp on: when we see people use AI, they do this and that faster and they get better results. I would much rather comp on results that you can basically only get if you’re using AI than comp on just using it. Yeah. I mean, that’s what I’ve seen. It’s like the smarter, not smarter, perhaps more empowered, I don’t know the best word, because the SDRs that have suddenly started building out their own cadences in that way are the ones who’ve just got a bunch of ‘go for it’.
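Karen’s accelerator idea above, the one Jenn cautions against tying to mere usage, could be expressed as a tiny comp formula. The base rate and the 20% accelerator here are made-up placeholder numbers, not figures anyone on the panel quoted:

```python
def commission(deal_value, base_rate=0.10, ai_assisted=False, accelerator=1.2):
    """Commission on a deal; opportunities with recorded AI attribution
    earn an accelerated rate (all rates here are illustrative)."""
    rate = base_rate * (accelerator if ai_assisted else 1.0)
    return deal_value * rate
```

Under these placeholder numbers, a $50,000 deal would pay roughly $5,000 normally and $6,000 with AI attribution recorded. Jenn’s alternative, comping on results only achievable with AI, would drop the `ai_assisted` flag and key the accelerator off outcome metrics instead.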

      Blake Williams [ 02:16:00 ] But Blake, you had your hand up. Yeah. So before this, I went and talked to Yuri, the CEO of AiSDR. And one of the interesting things that we talked about is the fact that SDRs are not the only people using AiSDR. Marketing teams are using it to generate those open digital conversations, for which the handoff can happen to a human, especially as we see signal adoption start to become very, very popular and make its way in as teams are rationalizing that. And when we think about compensation, it’s not just the seller who’s going to be compensated. Maybe it’s marketing, right? We need to think about how we compensate them differently.

    • Blake Williams [ 02:16:45 ] If 60% of the teams using the SDR are marketers. I’ve experimented with it. It’s funny, my first interaction with an AI SDR was from the team over at 6sense, where it was an email conversation to book a meeting with me at a Forrester summit. And it was brilliantly done. I had no idea it was an AI until I sat down for the lunch and they passed that on. So I’ve seen that work really, really well. Jason, you had your hand up, though. Yeah. So I just wanted to zoom out a bit, because we’re getting awfully tactical awfully fast here, and I just wanted to kind of zoom out and say a couple of things.

      Jason Napieralski [ 02:17:25 ] One thing I think is unbelievably broken from the top to the bottom is compensation, especially CEO compensation, which is now a thousand to one against the average person, and then all the way down. And I come from an enterprise background, so sellers, like quarterbacks, get way too much credit and way too much blame for how they perform. And Blake triggered me, because he said other people need to be compensated. I think that we should be paying for work from a compensation perspective, not necessarily end results. That way you’ve got a much cleaner line, and all of the people involved in an enterprise sale should be compensated and should have a large proportion of their compensation set on that. All of that said, compensation is way too high.

      Jason Napieralski [ 02:18:12 ] Our margins are way out of whack, especially in tech. And AI is coming, and it is going to come hard and it’s going to come fast. And AI gives us an opportunity to react to that significantly. What we should be using AI for is not trying to refine, you know, who gets a little bit more, who gets a little bit less, but looking at how we change the compensation model altogether. How do we use AI to analyze stuff that we didn’t have time to analyze before? How do we pay for the top-of-the-funnel activity rather than the back-of-the-funnel activity? All of that stuff can be done faster and easier with AI.

      Jason Napieralski [ 02:18:50 ] All of that said, this is over. Last week’s election told us that we’re in a reality stream here now, and that we need to align our compensation, our programs, and everything to what the reality is. And, very simply, stop misaligning our customers’ goals with our sellers’ and our people’s goals. Blake, yeah. So I think that brings up a massive topic of attribution, right? When we think about attribution, the evil that drives it is that, you know, it comes down to who’s performing and doing what, right, and who ultimately gets compensated. I think it creates that negative behavior and siloed functionality. And if we don’t have to worry about tracking the final outcome, like Jason, you were just saying, but we can worry about all the different steps along the way, we can start to learn all the right behaviors and get directionally accurate.

      Blake Williams [ 02:19:48 ] We can forget about solving, at the end of the curve, to optimize to the nth degree who gets credit, and have it be binary or a zero-sum game, and instead everybody gets to contribute. Now we can capture those microtransactions of effort across all teams and let AI think about how to solve for the compensation around that. So I think that’s a beautiful opportunity that would drastically impact the way that we work, just because the compensation model shifted. That’s an interesting idea. I love the idea of microtransactions, or really micro-incentives. My only concern is, there’s also the question of line of sight. If I suddenly just get a quarterly check, or even a weekly check, and it adds up from a hundred different things I did, it’s not necessarily going to change my behavior.

    • Erik Charles [ 02:20:38 ] It’s going to recognize my behavior. There’s a path there, but there’s plenty of research I’ve been engaged in that has shown that if you put too many measures on people, they just kind of keep on doing what they were doing, because the whole idea of a good incentive should be to tell you what to focus on. But I think there’s a bridge there. I’m going to keep thinking about that one. Jenn. One hesitation I have when we say things like, well, everybody who’s involved should get credit, is that not everybody gets incentivized by money. So for example, way, way early days at HubSpot, I was employee number 90 there.

      Jenn Steele [ 02:21:18 ] And I was one of the managers on the consulting team. The consultants were more CS, they were implementation, they loved customers. And when we decided to put on a bonus based on performance, no, we weren’t decreasing their pay, they freaked out, because it’s not how they were accustomed to being compensated. It’s not how they were motivated in the slightest. And so we can actually kind of hand-wave some attribution away if we’re talking about compensation, just because not everyone is a coin-operated machine, to use a dangerous term with sales. Yes. There’s a term I’ve used before: are you coin-operated or coin-recognized? You know, and the coin can properly recognize. Jason, you did bring us back up high, and I didn’t get a chance to say thank you for that.

      Erik Charles [ 02:22:09 ] That was perfect, because it’s so easy to go down any given rabbit hole when we only have, what, 38 minutes left. So go. Yeah, I agree with you, Jenn, that not everybody is money motivated. But everybody is score motivated. If they’re not score motivated, they’re not winners. Winners have to be score motivated. That’s how you win. So I think talent goes where the money is, and I think smarter people understand the value of compensation. And I think everybody is more incentivized if they know that if they do X work, they will get X reward, whatever that reward is. And if you ask somebody that isn’t motivated by that, they’re not a winner. Preach. Preach.

      Erik Charles [ 02:22:52 ] I think that’s back to, it depends on the stage of where they are in the game, too. But let’s get to one of the challenges of AI. And I know in some of the models that I’ve built, there’s the garbage-in, garbage-out problem, but it’s not so much that; it’s how much data you need to feed it to be able to get those prediction lines. I mean, yes, I can go back to grad-school statistics and say, hey, you need 35 data points to properly plot a line; 32 was the one my professor used to say. You know, at least at that point you can be somewhere there. Nowadays, we can have 10 million.

      Erik Charles [ 02:23:31 ] I remember some of the early days of being able to run Monte Carlo simulations, and the fact that you could run 10,000 simulations. You know, at the time, this is the early two-thousands, on my ThinkPad, I would just go have dinner, come back, and I would have a beautiful graph. And he was like, who’s running 10,000 simulations? Yeah, I did. Why? Because I could, why not? And he’s like, well, you don’t get any additional value. But that was a mathematical model, a prediction model with different performance curves with data. The data is another issue. I was working with data from somebody that collected a lot of performance data, you know, across all our customers.

      Erik Charles [ 02:24:11 ] And I was trying to run some lines, and there were some ridiculous outliers, and I realized that the dataset had all the test cases in it. Every time somebody logged on, they were like, hold it, let’s just see what happens if this goes on. And that got buried in the dataset. And yes, the statisticians will say, you know, there’s numerical smoothing, there’s elimination of outliers, but sales is all about outliers: the 80/20 rule. There are reps that bring in 400% of quota, but there are also fake numbers in there. So how do we get the data we need to feed the AI? Anyone want to jump on that one? Come on, Jonathan, you’re the ops guy. Where are we going to feed this AI engine? That’s going to be tough.
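The problem Erik describes, junk test rows hiding among legitimate outliers, is usually handled by filtering on provenance rather than trimming extreme values, since the 400%-of-quota rep is real data you want to keep. A hypothetical sketch; the record shape and the heuristics are assumptions for illustration:

```python
# Hypothetical attainment records (attainment is a fraction of quota).
records = [
    {"rep": "a.lee", "attainment": 1.10},
    {"rep": "b.kim", "attainment": 4.00},      # real 400%-of-quota outlier: keep it
    {"rep": "test_user", "attainment": 99.0},  # QA record that snuck in: drop it
    {"rep": "c.diaz", "attainment": 0.85},
]

def looks_like_test(rec):
    # Filter on provenance and impossibility, not on mere extremeness,
    # so legitimate outliers survive the cleaning pass.
    return rec["rep"].startswith("test") or rec["attainment"] > 10

clean = [r for r in records if not looks_like_test(r)]
```

A blind z-score or percentile trim over the same records would have discarded the 400% rep along with the test rows, which is exactly the failure mode Erik warns about.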

      Erik Charles [ 02:24:51 ] Look at all this data that Jason’s talking about, that Jenn’s talking about, that Blake’s talking about with the microtransactions. Do we trust Salesforce, Marketo, HubSpot, Apollo, and all of the different data sets? Are we going to bring in outside data sets? How are we going to do that? It’s a good question, because it depends on the capabilities of the model you’re using. Not all models, a Claude or a ChatGPT for that matter, not that they can’t do it, but for advanced stuff, they’re not trained on that particular data set up front. And the same goes for Salesforce, for that matter. So one thing would be the context of the information you can give it, because AI thrives on context.

      Jonathan Kvarfordt [ 02:25:29 ] You’ve got to make sure you have the context needed, and then the training model to be able to execute, because you’re asking a word-based, text-based logic model to perform some advanced statistical analysis. It’s not going to be able to handle it, because it doesn’t know how to do it, you know? So one thing is making sure you know the capabilities of whatever model you might be using. And two is the output you’ll be getting. As an example, if I wanted a secure place, that’s one thing; everyone’s used to chucking things into a cloud model. I don’t know if I’d give it all the information. Like, I’m an AI nut, but there are some things I wouldn’t give AI to do for me, because it’s privacy data and other things I need to figure out that I can’t give to an open model.

    • Jonathan Kvarfordt [ 02:26:11 ] So it’s making sure you have something you can work with, whether that’s your own model or an outsourced one, and there are more options now than there were before, but you’ve just got to be really careful with that kind of stuff. And then lastly, the thing that came to mind was the whole transparency-and-fairness thing with compensation. If you’re not able to really, truly identify and communicate clearly what that is, AI is a black box. If you don’t understand what’s coming back out of it, and you can’t explain it, you have a lot of dicey situations as a result. So I don’t know if that answers your question, but that’s kind of what I was thinking. Yeah.

      Erik Charles [ 02:26:42 ] Well, I think that’s the thing, because, you know, you can certainly take a license for one of the AI models and then allow it to only see your internal data. There’s a way of walling that off. But then that’s just your internal data, right? So then it’s like, well, whose data am I going to license to put into this? And actually, Jason, I was going to call on you, because you were talking about some of this overpay and underpay, and I’d be like, overpay or underpay based on what? What are you comparing to? We pay the head of the Department of Water and Power in Los Angeles $750,000 a year.

      Erik Charles [ 02:27:14 ] You know, we can go down some of these. I mean, there are certain types of information that are very public, and then every now and then, you know, I find out what something’s going to cost. And so comp data is one of those things; yeah, there are lots of comp data sets, and I know some of the people that run them, but oftentimes, you know, not all companies are uploading that information. But go ahead, Jason, you have your hand up? So, yeah. I think AI right now, in its current state, is definitely a trust-but-verify kind of situation, right? You can’t a hundred percent accept that it’s going to be good.

      Jason Napieralski [ 02:27:51 ] Right. And I’ve had, you know, tussles with AI myself where I’ve just been angry, and I’ve screamed at Siri. I’m not proud of that, but I have, many times. But I think one of the big factors, the blind spot of AI, is your own data and how it mixes. And again, there’s a significant risk, as Jonathan was saying, of allowing that to participate in the whole. You can get specific information from specific places if you ask the questions. The great thing that AI does, from a compensation-model perspective, and again, I’m looking at it from like, I have an opportunity to change it completely.

      Jason Napieralski [ 02:28:29 ] I’m throwing out the baby with the bathwater, and I’m doing a whole new model. So AI gives you a starting point in which you can find those benchmarks, right? What your roles are, what you’re doing, what does a successful one do from a top-of-the-funnel perspective? How many transactions per day can you expect? How many does a world-class one do, how many does a low performer do, and you kind of triangulate. So I think there’s that information, but the whole point of AI right now is training it on your internal data, using that internal data securely, and using it to give you results that are applicable to you.

      Jason Napieralski [ 02:29:08 ] Because one of the problems, and this is the problem with adopting Salesforce and adopting their processes and those kinds of things, is that if you follow a leader, you follow a leader, and you’re always behind. Yeah. And so your own data is the key to getting out of that. Yeah. But that’s assuming your own data is at market. Huge assumption. Everybody’s data is garbage. Like, really, especially if you use Salesforce data where you’ve had salespeople enter it; salespeople are the worst possible people in the world to enter data. And if you’re using that as something to judge your models on, you’re already failing as it is. Go ahead, Blake. Yeah, I was just thinking back to my days in college in econ, and in my MBA, and things like that.

      Blake Williams [ 02:29:54 ] And I think the thing that’s really useful is being able to arrive at a framework that has kind of abstracted away, or assumed away, some of the nuances, so that you can get to a model that allows you to be more effective. Once you do start adding real-time data into it, then leverage AI to tell you when you’re outside of the tolerances of that model, or just to validate what your model’s assumptions are. I think that’s a better use than just letting it calculate everything. Because, to Jenn’s point and to what Jason’s saying, your data can be off, and it can be crazy. So if you’re dealing with, like, a level-10 crazy, then you probably want to...

      Erik Charles [ 02:30:36 ] I always love a fellow student of the dismal science. And actually, this reminds me of my favorite line about economics: the greatest line in economics is ‘all other things being equal’. Right? And that’s the problem with these data models. It’s like, how much do we have to bring in here to actually make sure we have the right, timely stuff? Karen? So, you know, it’s funny. We talk about this data. We rely on crappy data today. So, I mean, I think we over-rotate on how disastrous that’s going to be, in the sense that I would say today we have lots of issues, especially in terms of attribution.

      Karen Sage [ 02:31:17 ] In terms of all of these things that we’ve been talking about, there are key issues with the data that we so profoundly embraced to solve those problems today. Yeah, you’re absolutely right. Because I think of a lot of comp survey data in the old days, when you used to just, you know, buy the annual report from somebody. I always said that was 18-month-old data by the time you got it. When I worked for Sun Microsystems, and I don’t think I’m violating an NDA from 25 years ago, we targeted the 65th percentile of total cash compensation; that was our game, the 65th. But it was always based on year-old data. And this was just as the dot-com boom was kicking off.

      Erik Charles [ 02:32:01 ] We were selling a lot of servers to all the dot-coms that needed a server. And every time an employee got mad at their manager, they didn’t leave because of pay; they left because of the manager. But the second they went on the market, they found out they could be making 30% more, because there was such an acceleration of hiring. But actually, on that crappy data, just to have a fun little switch-around: one of the biggest challenges, or one of the many challenges, in sales is the quota. You know, what’s the quota? Well, the executives said they want to grow 20%, so they gave 25% growth to the head of sales, and he gave 30% growth to the sales team, hired more sales reps.

      Erik Charles [ 02:32:46 ] And threw them into random territories. And I always remember Adam Ecker, a buddy of mine over at Simon-Kucher, would talk about how companies like to put enterprise sales reps in any city that has an NFL team. That was their data analysis for a lot of companies, because you figure if they’ve got an NFL team, they must have enough business. And I used to laugh. I said, do you know how long Los Angeles didn’t have any NFL team, and now they have two, and San Diego has none? Does that count San Diego out? He’s like, yeah, it was a bad model. So with that bad model: how can we get the AIs to start building some better sales territories, and then better sales quotas?

      Erik Charles [ 02:33:24 ] Just because we’re talking about measuring performance, what can we expect? All right, Jen, take the lead. Almost anybody would be better than a first-time founder drawing a straight line from friends-and-family sales, right? I mean, the AI is not going to do a whole lot worse than the humans, in a lot of cases. Now, when you’re talking about more mature organizations, then at least you should have some knowledge and experience that you can test, right? So it’s almost like running two in parallel: designing yours, having the AI design it, and maybe using the AI to make yours better. Yeah.

    • Jenn Steele [ 02:34:14 ] Should the AI then say, great, so we’ve designed the territories, but, you know, Blake is a brand-new rep, and we know brand-new reps can only bring in about two-thirds of what a rep with two years of experience can, and hopefully you know that data, by the way. Do we adjust it at that point too? How can we let the AI actually create achievable, measurable quotas based on that territory as well? Because territory has to equal quota. If it doesn’t, then they’re both made up. Okay. Go ahead, Blake. Yeah. I think you hit the nail on the head, and that’s right where I was going to go. Once it establishes the quota, or we establish the quota, I think it’s more important that we understand, because it won’t be binary.
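The ramp adjustment Jenn describes, where a brand-new rep carries roughly two-thirds of a fully ramped rep's number, can be sketched as a simple tenure lookup. All names, tenures, and dollar figures below are hypothetical illustrations, not data from the panel:

```python
# Hypothetical sketch of ramp-adjusted quotas: scale a territory's base
# quota by the rep's expected productivity for their tenure.
# The ramp factors and dollar amounts are illustrative assumptions.

RAMP_FACTORS = {0: 0.66, 1: 0.85}  # years of tenure -> expected productivity
FULL_RAMP = 1.0                    # reps with 2+ years carry a full quota

def ramped_quota(base_quota: float, years_tenure: int) -> float:
    """Scale a territory's base quota by the rep's ramp factor."""
    factor = RAMP_FACTORS.get(years_tenure, FULL_RAMP)
    return round(base_quota * factor, 2)

reps = {"new_hire": 0, "second_year": 1, "veteran": 5}
quotas = {name: ramped_quota(1_200_000, tenure) for name, tenure in reps.items()}
# The new hire carries about two-thirds of the veteran's number.
```

The point of making the factors explicit, as Jenn suggests, is that they become testable assumptions rather than a straight line drawn across every rep.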

      Blake Williams [ 02:34:58 ] It’ll be: what are the contributing factors on why this may not be hit, and then understanding those insights, right? Why wouldn’t that rep hit that quota? Diving into those insights so that you can prepare, or at least actively seek out how to solve for that faster, and make that number more likely. Mm-hmm. Jonathan. I was just thinking, there are these other factors involved. There’s the regional perspective, whether it’s worldwide or US-only or whatever. Then you have the personality set of the salesperson, and the skill level of the salesperson, which may match West Coast versus East Coast, or the product-market fit, or the industry they’re going after. There are all these different factors you have to take into consideration that I think AI would be phenomenal at, if it had the context to help you make those decisions and say, ‘Hey, based on what you’re doing, based on the personality profile,

      Jonathan Kvarfordt [ 02:35:47 ] the industry, and what you’re going after revenue-wise and quota, this person would be the best match, because of these reasons.’ And that’s something that, as a human, I can do, but it takes time. So having something to help me get those insights quickly would be best. The one thing you have to be aware of, though, is the downside: the assumptions of the model. You have to make sure it’s not marking something as more important than it should be, so that it stays a feedback mechanism rather than the decision maker. Yeah. Jason? Yeah. That just triggered something for me: the assumptions. So few companies test their assumptions on a regular basis, or ever.

      Jason Napieralski [ 02:36:27 ] And you know, AI is a real easy path to hell if you trust your assumptions, because you write the questions based on your assumptions, and the AI will lead you down to a bad, dark place if your assumptions are wrong. Because AI cannot think. And so if you want it to come up with its own assumptions, it won’t work. Yeah. And I’m going to pull this in, and I’ll get to you, Jen, because I love it. Some input from Julia here. I’ll read her comment: can AI A/B test incentives? And I’m going to go slightly different: is it time to start?

      Erik Charles [ 02:37:10 ] Because the AI will let us do A/B testing. Say we run this plan in this territory and a different one in another, because Blake and I got into an argument about which one would work better. And I said, fine, I’m going to run this in California, you run it in New York, and let the best plan win. Now, Jonathan, that ignores your comment that each market is a micro-market, so we might miss it. But do we do something like that? Potentially. It’s a thought to noodle on, but Jen had her hand up. Although I did just get cold chills at the thought of A/B testing full comp plans.

      Jenn Steele [ 02:37:47 ] If anybody talks to each other. But I was going to say, kind of building on what Jason said: AI can be self-critical, and there was something Jason said about working with it, right? It’s not just a ‘here’s the data, come up with this.’ It’s, wait, what can go wrong with this? What about this assumption? As we work with it, it’s not a one-and-done, it’s more of a collaboration. Because it can’t think. But I love that you can ask, what’s wrong with the model you built, or how can people game the model you built? And it’ll just tell you, which is better than any sales ops person I’ve ever worked alongside.
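The territory A/B test Erik proposes, one comp plan in California and another in New York, amounts to a two-sample comparison. A minimal sketch with made-up per-rep bookings; a real analysis would also need a p-value, matched territories, and guards against the micro-market differences Jonathan raises:

```python
# Hypothetical sketch: compare per-rep bookings under two comp plans
# using Welch's t-statistic (stdlib only; all figures are invented).
from statistics import mean, stdev

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t-statistic for two independent samples."""
    var_a = stdev(a) ** 2 / len(a)
    var_b = stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (var_a + var_b) ** 0.5

plan_a = [105.0, 98.0, 112.0, 101.0, 99.0]    # $k bookings per rep, plan A
plan_b = [118.0, 122.0, 109.0, 125.0, 116.0]  # $k bookings per rep, plan B

t = welch_t(plan_b, plan_a)
# A large |t| suggests the gap between plans is unlikely to be noise.
```

The "cold chills" caveat still applies: reps talk to each other, so a clean split between territories is an assumption, not a given.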

      Erik Charles [ 02:38:26 ] Well, and that’s this question of optimizing the compensation structure: how much pay should be at risk, how many factors can we have in there, how can we optimize all these little comp structures? And I agree, by the way, Jason: from the CEO down to someone in the mailroom, if anybody still has mailrooms. Someday I won’t even be able to make that reference. Then I realized, hold it, Amazon packages still come in. That’s true. So now it’s package delivery. My undergraduate alma mater now has a package delivery center; they don’t allow packages to the dorms. They force it all to a central location, because there were just too many Amazon trucks

      Erik Charles [ 02:39:13 ] all over campus, double parking, running up and down to dorm rooms that are all under lockdown. But, you know, can we get the AI to sit there? First it’s going to take humans thinking about what matters in each one of those roles. What do we want out of the person in the mailroom, for example? Can we measure it, and then can we build it into the plan? And now I’m just going to go in order: Jason first. Yeah. So, I think Jen said something brilliant that a lot of people don’t consider: AI not only can’t think, it doesn’t have an ego.

      Erik Charles [ 02:39:52 ] So it’s not offended by the questions. Everybody’s been in a room with their sales ops guy and asked, ‘Why is this like that?’ And they’ve gotten defensive, and it’s gone the wrong way. Or you ask a sales manager, ‘What’s happening in your market?’ and they want to color it. Sales managers are great at this, coloring all the different things, telling you about the delivery when you just want to see the baby. AI doesn’t do that stuff. So when you have a controversial question, AI is a real good resource for it. Let’s see, Jonathan. Yeah. I have a couple of thoughts.

      Erik Charles [ 02:40:29 ] One was, to your point, Jen, about going back and forth with it. I think it’s interesting to think of your unconscious incompetencies, the things you don’t know you don’t know. If you have assumptions you’re not aware of, you don’t know to even prompt it to mitigate or correct them. You have to be careful, but what’s cool about AI is you can ask, what am I not asking? And it can tell you, which is nice. The other thought was on your earlier question, Erik, about compensation: there are so many things around correlation and causation that it becomes a data scientist’s

      Erik Charles [ 02:41:01 ] fun time and nightmare at the same time. You have a hypothesis that if you do one thing, it produces outcome X, but you may not be looking at all the data sets going into it. Yes, it performed better, but in California this other thing also happened in the marketplace that was a factor, not just the compensation. So make sure you have that data accessible, and that you’re not just looking at one narrow set and assuming that was the cause, and that that incentive will work everywhere else. Because it may not. No, that’s a great point on historical data. I was dealing with some construction materials companies, and Hurricane Sandy had a huge impact in that space, both

      Erik Charles [ 02:41:41 ] killing business in some territories and then, soon after, driving a ridiculous amount of business, back and forth. So, Blake. Just taking that further, because it is this infinite game: what about being able to trigger different behaviors in the individual sellers, those micro-behaviors? Say, ‘Hey, what if you tried this,’ or, ‘Hey, do 10% more of this, and let’s see how that adjusts your odds of hitting that quota.’ And then being able to see if those things actually have the effect; again, testing our assumptions in real time to drive real behavior, to see if that does have an impact, to learn what to measure and what to drive. So you’re going down almost the AI coaching model, then.

      Erik Charles [ 02:42:34 ] Because I was going to touch on that, and I’ll come back to it, but I know Jason had his hand up. Yeah. So, what Jonathan was saying is that there are often other factors at work in the success or failure of a sales rep, and the quality of the rep. The story is often not told by their sales alone. In enterprise, it’s very common that you get some big hitters, rainmakers, and they just get fed all the good deals, and everybody else that’s coming up is suffering and working at it. And that’s why I think compensation models moving up the funnel, and less on the final funnel, can identify your real potential. Okay.

      Erik Charles [ 02:43:17 ] We’ve heard dramatic stories out of the past. [inaudible] Which is completely useless and unhelpful. And then you end up with the rich getting richer. Everybody knows the sales guy that never does a darn thing, but he just gets the deals fed to him.

      Erik Charles [ 02:43:55 ] And those are going to close regardless. The other thing about compensation for salespeople is that now, particularly in SaaS and tech, 80% of the sale is done independent of a sales rep. It’s done through the website, through YouTube videos, through testimonials and references and other things. So the credit given to them should be equivalent to the 20% they actually did, not the 100% they would have had to do in the old days. Well, on a different panel, there was this question of: we won’t need sales reps when it’s an AI running procurement. We’ll just have our license engine go talk to your license engine to negotiate the final terms and the contract.

      Erik Charles [ 02:44:43 ] But as long as procurement is still human on the buy side, you’ll still need a human on the sell side. And we’ll use ‘sales’ generically there. I wanted to touch a little bit on the end of the month and the forecast, because that’s an interesting one. I mean, Salesforce ties in Einstein to try to do the forecast. Of course, Clari came out and did it. My former employer Xactly purchased TopOPPS and now sells that as Xactly Forecasting. They’re all looking at the data: past performance within the company, past performance at the rep level. All of these do this to say, oh, you know, Jonathan, we know he’s a sandbagger.

    • Erik Charles [ 02:45:28 ] So we’re going to look at his forecast and we’re going to raise it. But Jason’s kind of negative, so we know that as well. And Jen is always happy but doesn’t quite actually hit it, so we’re going to decrease that. The AI is trying to look at all that information. Anybody have some good forecasting stories? Have you seen AI really help with the forecast of what’s going to happen? Well, that’s the wrong problem to solve. First of all, trying to smooth out bad data via AI is not good. You need to get the data better. And the reason forecasting sucks is that we focus on the wrong data.

      [ 02:46:11 ] We focus on the bottom of the funnel and not the top of the funnel. It’s that simple. If you align the compensation with top-of-funnel activity and you’re confident you know what your cookbook is to get a sale, then forecasting happens automatically. You’re confident you know the cookbook to get a sale, though? That’s one of those all-other-things-being-equal assumptions, I feel. That’s why AI can run those models and figure out what your cookbook is. But that’s where we need the AI to read our email, our calendar, our Salesforce instance, and everything else to get anywhere close to it. I think years ago there was a way of actually scraping everything and at least updating Salesforce,

      Erik Charles [ 02:46:52 ] so you actually knew all the people they talked to. One of my personal measures, from a very practical standpoint, when I was running some sales teams: someone would put something up in the forecast, and if they hadn’t talked to legal yet, I was like, not a chance. We are three months from the end of the quarter and you haven’t engaged legal; you’re not going to close that, because legal takes time. I’m sorry, I love them, but that’s what they do. Karen. So, we talk about AI correcting the assumptions that sales makes, et cetera. What happens when AI goes up to an executive and says, that hockey stick you drew, based on, like Jen said, the friends and family, it’s ridiculous?

      Karen Sage [ 02:47:39 ] Your sales team can’t meet their quotas. The model that you’ve set is incorrect. What happens then? I mean, does the whole thing implode? Well, and that’s where I was touching on the need for not just your data. Can we get companies to share data somehow? Do you build a consortium of companies who say, hey, we’re all close enough to the same space, maybe not direct competitors, or maybe you are. And of course, the problem there is that whole collusion, Federal Trade Commission thing, and a few issues like that. But there’s enough fuzziness there. But Jonathan, you had your hand up. Sorry about that. No, it’s fine. We kind of moved on, but now I have two thoughts. One is, there’s tech that exists now.

      Jonathan Kvarfordt [ 02:48:28 ] I do watch. Yeah, it’s okay. I know you do. There’s tech now that does what you’re referring to. One of the companies I advise, Momentum, has a product that can go back into both emails and calls, then batch the information and put it back into the CRM to correct whatever information was missing in the first place. Because we know how salespeople are. And if you think about it, I don’t think they should be responsible for entering the data. They should be responsible for retrieving it from the customer, not for actually writing it down. Now we have AI to do it, so why are we making them do that? I want them focusing on the customer, you know?

      Jonathan Kvarfordt [ 02:48:57 ] So having that capability can change that dynamic, so that bad data hopefully becomes better data. And then secondly, you mentioned market-wide sharing of information. I had a thought around that: if you think about the other side of it, there is that risk of, what’s the word, uniformity. If the market as a whole all knows what everyone else is doing, it takes away from differentiating yourself, essentially. So there are strengths to sharing data, but there’s also the weakness of, how much do you share before you give away your competitive advantage, you know? You follow the leader. Yeah. But as we’ve learned from any Tom Peters book, again showing my age, all you need to do is show up in a Tom Peters book

      Erik Charles [ 02:49:40 ] and you’re no longer the leading company. They used to call it the Sports Illustrated curse: if you make the cover of Sports Illustrated, it’s time to retire and sell off jerseys, because odds are you will not last there. There are some interesting data elements in that. And we’ve certainly seen in the NFL draft how often the fourth- or fifth-round pick becomes the marquee quarterback. We all have plenty of examples of that one. I mean, that goes back to, like I said, what data are we willing to trust

      Erik Charles [ 02:50:20 ] against what we have done. [inaudible] Where we can go at the top of the funnel is interesting. I jokingly said, introducing this panel: at what point do we pay the AI engine a commission? Oh, I’ll increase your AWS capacity if you can bring in more leads.

      Erik Charles [ 02:50:57 ] But at the same time, that is real. And activity-based incentives: I will admit I was against them. The idea of paying salespeople to do their jobs irritated me, until I saw a study done by the University of Wisconsin, in cooperation with some folks from McKinsey, where they went into a company and paid the reps to simply put in next steps and notes. They tried paying the reps and paying the managers for that data update. And what they saw over time was that when it was done, they actually got an 8% lift in revenue, just by having them do it at the right stage.

      Erik Charles [ 02:51:40 ] Now, the other interesting side of that study was that not only did they get the lift in revenue, but there was no difference between paying just the managers or paying the managers and the reps. So it was back to a focus on getting that done. All right, go ahead, Blake. Yeah. So I mentioned that I talked to Yuri, the CEO at AiSDR, and our business, Demandy, is built on the premise that content is the new sales. To Jason’s point, 80% of the buying process happens long before a rep is involved. If you can scale content with your network, you end up generating a ton more potential signal, as long as you’re curating content around the key scenarios your audience struggles with.

      Blake Williams [ 02:52:24 ] To that point, I asked: how are people buying this AI SDR, and how are they paying for it? Right now, to your point, Erik, it’s very activity-driven. So it’s 100 emails sent, or 1,000 emails sent, or 500 signals responded to. And I think it’s a great time; I know Jason’s definitely going to hit on incentive structure and measuring the right thing. I’ve done this like eight times with him, and he always zooms out the right way. What’s really key is measuring newly opened conversations by that AI SDR. And if that AI is opening many, many more digital conversations, then it becomes about compensating the SDR or the marketing team on harvesting any demand that’s raising its hand there.

      Blake Williams [ 02:53:24 ] Long digression. I just got excited about content as the new sales. Take us away, Jason. Yep. Yeah. So I think one of the important things to know is that sales leaders, particularly VPs of Sales and CROs, get fired for one reason: not for lack of sales, but for poor forecasting. Typically, they go for poor forecasting, because without forecasting knowledge you don’t have the ability to make shifts, to change, to pivot, to adjust resources, to get things right. And again, the AI, without an ego, can really help; it can say what is going wrong within the sales system. And if you’re training the AI to look at top-of-funnel activity, you still have a chance to make a change mid-cycle.

      Jason Napieralski [ 02:54:13 ] Whereas if you’re focused on the 30-day scream-a-thon, you’ve got no chance. So it really could be a game changer if you compensate for the activity. And really, the best salespeople are the ones that are the most boring. They don’t go out for lunch. They don’t do dinners. They don’t do basketball games. They just crank activity. And they are always the ones that are the most successful, the best forecasters, and the ones you can count on. So if you align your compensation model on making more of those people, you’re going to be very successful. And I agree. And Blake, just to let you know, when I’ve used AiSDR, I’d only pay for booked and held meetings. That’s the way to do it.

      Erik Charles [ 02:54:55 ] You know, but that was my own preference, because I’ve run marketing teams where they were measured on stage-two pipeline. So we knew the activity it took to get there, but it’s such a convoluted path. So, all right, we’re down to the last five or so minutes. I think the best use of them is to go around the horn one last time. What I want to hear is your forecast: what is AI’s impact on compensation going to be? I’ll give you an 18-month window, but I’d love to say by the end of 2025. Do we want to go to one year, or do we need to give ourselves an extra six months? What do y’all think? Do we get one year?

    • Jonathan Kvarfordt [ 02:55:38 ] All right, by the end of 2025, Jonathan, what are we going to see? All right, mine’s kind of a larger view of it, but I think roles are going to start drastically changing, so that a lot of the tasks we require from people will narrow, to the point of what we’ve been talking about. A lot of what a salesperson would have done in the past is going to be narrowed, because AI will be taking over a lot of it. So we’ve got to be really fine-tuned on what behaviors we need from humans, and how do we compensate for that? I think it’s going to be more about: what is that worth from a human? Getting really fine-tuned on that, to make sure it’s working correctly.

      Jonathan Kvarfordt [ 02:56:11 ] And then you zoom out and multiply that across the entire organization: not just salespeople, but everybody who has a hand in any type of performance-based compensation. So. All right, Jen. I mean, I think in general for AI and sales, the signal-to-noise ratio is going to be dreadful, because suddenly the cost to annoy people will drop significantly. As such, unfortunately or fortunately, echoing Jonathan, that human piece is going to become more and more important, and so incentivizing it will be vital. Oh, okay. Just because I can ask: what would you incentivize on the human side, then, since the AI is going to handle all the automated stuff? I think we need to learn that.

      Erik Charles [ 02:56:59 ] To some extent, you know, booked meetings are a thing, but AI can book a meeting. So then what? What is the human thing? Yeah. Karen. So maybe it’s going out to dinner and all those things that Jason mentioned that are kind of, you know, not good sales tactics. I mean, that’s an interesting question, right? Because the initial emails, and heck, even a demo, are potentially going that direction. So we’re even touching sales engineers: how long before the AI can handle our demos for us? It would be interesting to see where that goes. Because a good sales engineer is even harder to find than a good sales rep, in my opinion, with that combination of skills and approach.

      Erik Charles [ 02:57:46 ] And I would love to be able to just have an AI where, say, Claude shows you the product: ask it any question, and it’ll show it to you. And the rep is back to that personal side, as we’re kind of seeing events coming back. Blake. Yeah, I love that: using AI to measure how great you are at building relationships, not just the activities of sales, right? What’s your relationship quantity across all the people you’re talking to, and the strength of those relationships? I think the rate of what’s possible and the rate of what will actually be adopted by 2025, given how slow and, you know, tepid people are, will probably be drastically different.

      Erik Charles [ 02:58:26 ] I think there’ll probably be some really, really smart models, with cultivated LLMs that have the framework to either source the data or de-anonymize it and still have it be relevant, definitely by 2025. All right, Jason, I would say ‘bring us home,’ or ‘zoom back out’; perhaps that’s the better statement for you. Yeah, I think one of the things that AI will never do is give you customer obsession. And my dream of an AI-driven compensation model is to closely align the success and needs of the customer with the compensation of the rep. Almost all the ills in our world are due to misaligned incentives at companies. And I think AI is also going to narrow the feature gap between products and services across the board.

      Jason Napieralski [ 02:59:21 ] So the differentiation is going to be customer success, customer obsession, that type of thing. And AI lining those things up, which are not easy to measure and not easy to do, requires getting data on customers’ performance. That’s going to be the holy grail for me in a compensation model. I think that’ll be huge. I agree. Last minute here, so I just want to say, I’m going to be a little bit of a wet blanket. I think we’re going to see a few companies do some amazing things, and 2026 will be the year for everybody else to follow. I think 2025 is the year we’ll see a few leaders actually pop out. I don’t think we’re quite there yet; everybody’s testing. But I think that’s what we’ll get. Thank you, everybody. Over to you, Julia.

      Julia Nimchinski [ 03:00:00 ] I think I stopped on time, so I’m not in trouble with you and Justin. Last words from you, boss. Thank you so much, Erik. Thank you. So many notes, Erik. I can’t help but ask: if you were building Xactly again now, what would be your value prop in light of AI? I will say Xactly actually has an amazing data set. From day one, I would have started collecting that data across the client base, and I’ve actually advised a lot of companies to do this: anonymize, aggregate, but clean it as you go. Get rid of the test cases and such, so that you actually have a fully actionable data set that the AI can report out of from day one. Thank you.
