Helen Lin [ 03:01:52 ] In terms of forecasting and things like that, classic machine learning models still work much better. I want to say "still" because LLMs are fast catching up, and hopefully within the next year or two we'll see LLMs being able to do really good forecasting as well. But for now, we're relying on classic machine learning models, as well as people's own judgment, for forecasting. So I want to share with you what we're doing at Discern with regard to AI. We analyze companies' performance, and we're very much focused on B2B software companies, so we're looking at the marketing funnel and the sales funnel. I'm going to focus a little bit more on the sales funnel today and share what we do that has worked well for us. Let me just
Helen Lin [ 03:02:56 ] Oops, let me share my screen. So the first thing I wanted to share with you is that every sales organization wants to forecast, wants to know how well they're doing for the quarter and where they're going to end up, and everyone asks what's happening. This is one of the things we have put together. This graph has a lot of information about what's going on in the market. The first thing to notice is the three lines: the top line, the dotted line, is the target; the green line is Discern's AI forecast for this particular company; and the red line down at the bottom is what the company actually closed won by the end of the quarter.
Helen Lin [ 03:03:42 ] We run this model every single day. What we're doing here is literally employing classic models, you can use a random forest model or a logistic regression model; we run a bunch of models for companies and take into consideration things like who the AE is, who they're selling into, what the company size is, which industry it's in, what type of opportunity it is, pretty much anything you can think of that's related to a deal or opportunity. And you can see the green line, this is Q3 of 2024, we're able to very accurately forecast where the quarter is going to end up really starting in week two, right?
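To make that concrete, here is a minimal sketch of opportunity-level win scoring along these lines, assuming simple tabular exports of historical and open deals; the file names, feature columns, and model choice are illustrative assumptions, not Discern's actual pipeline.

```python
# Minimal sketch of opportunity win-probability scoring (illustrative only).
# Column names, features, and model choice are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

categorical = ["ae_name", "industry", "opportunity_type", "company_size_band"]
numeric = ["deal_amount", "days_open"]

history = pd.read_csv("closed_opportunities.csv")    # hypothetical export
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough",
    )),
    # A logistic regression could be swapped in here just as easily.
    ("clf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
model.fit(history[categorical + numeric], history["is_won"])

# Score today's open pipeline: every open opportunity gets a win probability.
open_pipe = pd.read_csv("open_opportunities.csv")    # hypothetical export
open_pipe["win_probability"] = model.predict_proba(
    open_pipe[categorical + numeric]
)[:, 1]
```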
Helen Lin [ 03:04:27 ] So here, what we're doing is, we do have opportunity scores. We're looking at what's closed won, and then, of the open opportunities, what do we expect the company to close? That's the existing open pipeline. Then, what do we expect the company to create and close based on historical patterns, and what do we expect them to pull in from future quarters? All of those things add up to our forecast, which at that point was six point five million. At that point the company was looking at four million, and it ended up at about six point six million. So that's one thing I wanted to share with you. The other thing is, as we're talking about forecasting for the company, okay, great, you've got the AI forecast.
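The forecast described here is essentially a sum of four components. A toy sketch of that arithmetic, with made-up numbers chosen only so the total lands near the $6.5M figure mentioned above:

```python
# Rough decomposition of an in-quarter forecast (all numbers are made up).
closed_won_to_date = 2_100_000          # already booked this quarter
expected_from_open = 2_900_000          # sum of score * amount over open opps
expected_create_and_close = 1_200_000   # new pipe created and closed, from history
expected_pull_ins = 300_000             # deals pulled forward from future quarters

forecast = (closed_won_to_date + expected_from_open
            + expected_create_and_close + expected_pull_ins)
print(f"Quarter forecast: ${forecast:,.0f}")   # $6,500,000 in this toy example
```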
Helen Lin [ 03:05:16 ] But what are my salespeople thinking, right? So we do have a forecasting module where people are able to put in, okay, yes, I think I'm going to close this deal, and I'm going to close it at maybe a hundred thousand or two hundred thousand, and there are some other deals that aren't even in the pipeline yet that I want to forecast for, and I want to submit that. Then companies are able to see a weekly summary of what everyone forecasted, how accurate they are, and how much they have won compared to quota at the end of the quarter. So really it's bringing humans and AI together, the human forecast
Helen Lin [ 03:05:57 ] and the AI forecast. Because we study so much of the detail of every single opportunity and the conversion rates, we're able to use optimization models and other models to let companies do the type of planning that they want. So, for example, companies can set up bookings targets. Companies can figure out what their pipe creation could potentially be, both at the MQL, top-of-the-funnel level and for pipeline creation. We can populate their historical conversion rates, but they can also put in different conversion rates going forward, and we can tell them what their bookings are going to be over the next few years. That helps a lot in terms of planning, to say, 'Hey, if I want to get to a certain target, what do I really need to do, and in which area?'
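A small sketch of that reverse-planning idea, working back from a bookings target through assumed funnel conversion rates; the target, rates, and deal size below are placeholders a company would replace with its own historical or forward-looking assumptions.

```python
# Back into required pipeline and MQLs from a bookings target (toy numbers).
bookings_target = 10_000_000     # desired bookings for the planning period
pipe_to_win_rate = 0.25          # historical or assumed pipeline-to-win rate
mql_to_pipe_rate = 0.10          # MQL-to-pipeline conversion rate
avg_deal_size = 50_000           # average selling price

required_pipeline = bookings_target / pipe_to_win_rate
required_mqls = (required_pipeline / avg_deal_size) / mql_to_pipe_rate

print(f"Pipeline needed: ${required_pipeline:,.0f}")   # $40,000,000
print(f"MQLs needed: {required_mqls:,.0f}")            # 8,000
```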
Helen Lin [ 03:06:57 ] And then we break everything down, bookings into multi-year, new business, upsell. So I'm going to take a little break here just to see if anyone has any questions. Are people surprised by this kind of leverage of classic machine learning models here? Mark? My question would be, how are you bridging the gap between what a machine learning model is capable of doing and causality? Because if it's purely pattern-based, that's going to get you kind of in the zone, okay, but it's not going to factor in things like externalities. So how are you dealing with that? So here we don't really distinguish between causality versus correlation, right?
Helen Lin [ 03:08:03 ] We are literally looking at just correlation: if an opportunity has these characteristics, what do we expect it to do? So we assign a score to every opportunity, say this one is 83 percent, and 0.83 times the amount is literally what goes into our forecast, and we test it over and over again. Now, externalities are not going to be the same as what we expect them to be, and we're not claiming causality. I would say this type of forecasting is much easier for small to mid-size deals, right? If you have deals that are a million dollars, several million dollars, where it's very nuanced, then that is much harder.
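That "83 percent times the amount" logic is just an expected value over the open pipeline. A tiny sketch, again with hypothetical fields and numbers:

```python
# Expected value of the open pipeline: sum of win_probability * amount.
# Assumes each open opportunity already carries a model score (see above).
def expected_pipeline_value(opportunities):
    """opportunities: iterable of dicts with 'win_probability' and 'amount'."""
    return sum(o["win_probability"] * o["amount"] for o in opportunities)

open_opps = [
    {"win_probability": 0.83, "amount": 200_000},   # contributes $166,000
    {"win_probability": 0.40, "amount": 500_000},   # contributes $200,000
]
print(f"Expected value: ${expected_pipeline_value(open_opps):,.0f}")  # $366,000
```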
Helen Lin [ 03:08:46 ] [inaudible] I'd love to talk to you later if you've got time. No, absolutely. And then the other thing that we wanted to talk to you guys about is that it's great that people are all thinking about AI and so on, but a lot of companies, on the very basic side, are not even tracking their pipe relative to target and how they're pacing. So here, what you can see is this particular company looking at actuals versus quarterly target versus attainment. Now, this is pipe creation; you can do the same on the bookings side, right? But what companies oftentimes are not looking at is pacing. So what we'd like is for companies to, hold on one second, let me see if I can share for a little bit and pull up a pacing diagram to illustrate what I'm talking about.
Helen Lin [ 03:10:03 ] Okay, so hopefully this comes up. What we're saying is that the actual number could be very low, because it's the middle of the quarter, or the beginning of the quarter, and the target could be very high. But if you continue this particular pattern, again it's pattern recognition, where do we expect you to end the quarter? Very early on, for this one, I mean, the whole point of doing forecasting and this type of pacing analysis is for companies to be able to adjust and correct ahead of time, right? You don't want to adjust in week 10 or 12 of a quarter; it's too late. In week three, four, five of a quarter, I'm already seeing what's happening, and that's why pacing analysis is actually quite important, to know, okay, where am I not doing as well, where am I in line, where am I doing well, if I continue this particular pattern down through the quarter.
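A hedged sketch of that kind of pacing projection, extrapolating quarter-to-date actuals to an end-of-quarter number; a real implementation would more likely use historical intra-quarter curves than the straight-line run rate assumed here.

```python
# Naive pacing projection: extrapolate quarter-to-date actuals to quarter end.
def project_quarter_end(actual_to_date, days_elapsed, days_in_quarter=91):
    """Straight-line run-rate projection; swap in a historical intra-quarter
    seasonality curve for anything production-grade."""
    daily_rate = actual_to_date / max(days_elapsed, 1)
    return daily_rate * days_in_quarter

# Week 4 of the quarter (toy numbers): $1.2M booked so far.
projected = project_quarter_end(actual_to_date=1_200_000, days_elapsed=28)
print(f"Projected quarter end: ${projected:,.0f}")   # $3,900,000 on this pace
```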
Helen Lin [ 03:10:58 ] So this is not machine learning or AI at all. This is just basic stuff that we know companies should be doing, but very few companies are actually doing. Okay, I'm going to stop sharing. So that's a quick demo of Discern from a forecasting perspective: pipeline planning, reverse modeling, and then pacing. Thank you so much, Helen. Do you have any questions for this panel? No, no, I'm just asking some of our panelists, and I received these questions from the audience. One is really philosophical. Helen, why is forecasting so hard? Hmm. Okay. I don't think forecasting is hard. Forecasting, it depends, right? If you're a very small company, you have very few data points. That is hard. If you're a large company, you have many data points.
Helen Lin [ 03:12:03 ] You just need to do it. So it's not a hard technical problem; it's a harder process issue. Machine learning models, AI models, they can do it. They can actually figure out which AEs have bad CRM hygiene and who has really good hygiene, and then take all of that into consideration in the modeling, right? So it's actually not as hard as people think. It's actually quite easy to solve as long as you have hundreds of data points. I mean, we're not talking about big data here. We're talking about small data. We don't need millions of data points.
Helen Lin [ 03:12:45 ] We need hundreds, thousands of data points, and we can do forecasting quite easily. Now, going back to what Mark was talking about, taking externalities into consideration, and making your team do the forecast every week, doing forecast Fridays or forecast Tuesdays consistently every single week and really getting that into the team's mind, that's a process issue. That's much harder than the technical issue we're trying to solve for. Yeah. Another question here. How are you dealing with the clean input problem? I think I was just addressing that, right? We just need to know when an opportunity is created and when it's closed. We don't need to know every single thing. People, I think, over-index on how much activities are going to impact bookings or wins.
Helen Lin [ 03:13:43 ] There's really not that much correlation, because one person's low-quality activity versus high-quality activity drives very, very different results, right? So we actually don't think the inputs are that important. We just need some data points. The AI models, the machine learning models, are smart enough to take all of that into consideration to score an opportunity and actually predict wins. Does this solution replace any options? Who are your competitors? Who are our competitors? Well, everyone who does AI forecasting, potentially. Look, Discern is actually very analytics-oriented. We're not a conversational intelligence product. We're not doing conversational intelligence. We're not helping companies do outreach for BDRs.
Helen Lin [ 03:14:39 ] What we do is help companies automate reporting, to get them out of that kind of Excel reporting. We provide a ton of different analytics, as well as some forecasting. So forecasting is just one of the many things that we do. Most of our value-add comes from getting away from manual reporting, getting all the internal weekly, monthly, and board reporting out of the way, so companies can really study the data, make the right decisions, and optimize for performance rather than spending all that time on reporting. Okay. Helen, can you share some of the customer stories that have inspired you the most? Oh, okay. There are so many customer stories. This one is not even related to forecasting.
Helen Lin [ 03:15:28 ] But this is a very recent story, from a conversation I had with a customer. One of the things that Discern does is automate ARR calculation for every company. If you look at the finance team that needs to do ARR calculation, it's not in the CRM, it's not in the billing system, it's not in the accounting system, so it's mostly in spreadsheets. You have to bring data from multiple systems and really understand how to apply consistent ARR rules, which can get very complicated. For this particular customer, once they launched Discern, they said, okay, for month-end close, we used to have five people spending at least five days each, full time, on this particular issue.
Helen Lin [ 03:16:12 ] And it's a customer with low tens of millions of ARR. Now that they have Discern, it's just one person, one day. So this is the type of efficiency gain that we can deliver to customers. It's always great to have it quantified, but it's not just cost savings; they're seeing into the future for risk mitigation, for forecasting, and again, reporting, but that's more on the ARR, NRR, GRR side. So how far out into the future can you actually go with fidelity? Oh my gosh. As long as you want, and we can also go back. We can go back as far as people want. As soon as we implement with a customer, we take all of their history from day one, because our models are based on history, right?
Helen Lin [ 03:17:05 ] Now, in terms of ARR forecasting and things like that, there is a little bit of a duration issue, because there are only so many multi-year deals into the future. We don't know those, but we can always apply historical renewal rates, conversion rates, and win rates. So yes, you can go as far into the future as you have assumptions for. We're not giving assumptions to the customers; we're allowing customers, who know their business, to put in their own assumptions. So, the way that I see this is that you've done some really cool stuff here. Where everything like this, and it's not just you at all, right, bumps up against a problem is that past is not prologue, right?
Mark Stouse [ 03:18:10 ] And so I think until you, well, I think that what we're saying— Well, the past is prologue in some ways for us, right? Yeah, but the past in real terms is not, right? Yeah. We only have to look at the last five years to see evidence of that, right? Right. Okay. Yes, yes. I want to address that too. In terms of recency, oftentimes we just look at the last two quarters. We don't look at the past five years to figure out what's happening, because things have changed. So we look at the last two quarters, at most the last four quarters, and then every single month or quarter, depending on whether a company is on monthly targets or quarterly, we update with the latest data.
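A small sketch of that rolling-window idea, restricting training data to the most recent quarters before each refresh; the two-quarter default, field names, and pandas-based approach are illustrative assumptions.

```python
# Keep only the most recent quarters when refreshing the model (illustrative).
import pandas as pd

def recent_window(opps: pd.DataFrame, quarters: int = 2) -> pd.DataFrame:
    """Return rows whose close_date falls within the last `quarters` quarters."""
    latest = opps["close_date"].max().to_period("Q")
    cutoff = (latest - quarters + 1).start_time
    return opps[opps["close_date"] >= cutoff]

# Hypothetical usage: refit the scoring model on recent history only.
# history = pd.read_csv("closed_opportunities.csv", parse_dates=["close_date"])
# recent = recent_window(history, quarters=2)
# model.fit(recent[feature_columns], recent["is_won"])
```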
Mark Stouse [ 03:18:59 ] So then you have to basically feed the model the latest data for it to work. So yeah, that recency is important. So maybe the past isn't prologue, but the present is prologue into the future. Yeah. I mean, I think where I would really encourage you to go with this, because I think it has a lot of potential, is getting more. If we just think about deals for a second, deals have two sides minimum, right? The more information you can bring in from the customer side of every deal, which has a headwind or tailwind kind of effect on the deal, the more interesting this is going to get. I think the externalities, I mean, if you look at it just from pure data science principles, right? Yeah.
Mark Stouse [ 03:19:51 ] In order to get an accurate model, the goal is to get something as close to real life as possible. And two-thirds of that are always externalities that you don't control. So I think the more of that perspective you can bring into the product, the richer it gets. Absolutely. Yeah. And I think the more you bring in those externalities, and the more you bring in the human research, so that you have manual forecasting with people writing in why, right, why they think they're forecasting this deal, or why they're not forecasting a particular opportunity, and then bring that kind of human-based evaluation into the AI model, that will probably make a lot of sense.
Mark Stouse [ 03:20:42 ] So that's a very good point. Yeah, I mean, I think the sticky part here, kind of a law of gravity in this whole thing that affects us all, is that for data to exist, the thing it's measuring has to have already happened, right? Even if it was in the last 30 seconds, it's still the past. And that's the reason why data by itself can't give you what you need in the end as far as making a better decision, because it's always and only about the past, and you can't extrapolate accurately out of the past. So I think the dynamism you talk about, which is already in your product to some degree, is the secret, right?
Mark Stouse [ 03:21:43 ] Because it's that dynamism, together with the creation of a feedback loop, that matters most. No, absolutely. Yeah. Always taking into consideration new factors that we can get our hands on, and as soon as something happens, taking that into consideration, retraining the model, and seeing what's happening there. Great. Awesome. Okay, cool. Thank you so much. Thanks for having me. Thank you. Thanks so much, Mark. Thank you, Anne. And everyone, thanks for coming to our second AI Summit, only on Hard Skill Exchange. We are building the stock market of skills. So if you love the speakers, you can book one-on-one sessions with them on the platform, hardskill.exchange. And we will return in February with our third AI Summit.
Julia Nimchinski [ 03:22:33 ] Lots of exciting themes. Very practical. Spoiler alert: we'll have the CEO of G2, of course, Mark Stouse, of course, and Hollander, hopefully Helen too. And Justin, take us out. Well, it's been a very remarkable three days. And yeah, there's a lot of hype about AI, and you normally just hear about writing emails with ChatGPT. That's a lot of fun, you know, it's like watching cats with snorkels. But there was a lot of great material over the last three days. I learned a ton. And wow, we're gonna have the recordings up and live probably by the end of next week. I want to thank all of our sponsors, the audience, the community, everyone that has supported us over the last three years. And we'll see you next week. Bye. Thanks again. Follow Mark Stouse, follow Helen Lin, follow all the amazing presenters. And that's all she wrote. We'll see you in another month. Thank you.
- Introduction to AI Forecasting Challenges
- Understanding Discern’s Forecasting Process
- Components of AI Forecasting Models
- Combining Human Judgment with AI Forecasting
- Using Historical Data for Pipeline Planning
- Addressing Correlation vs. Causation in Forecasting
- Pacing Analysis for Better Sales Performance
- Challenges in Forecasting and Clean Data Inputs
- Discern’s Broader Value Proposition
- Adapting Forecasting to Dynamic Market Conditions
- Closing Remarks and Vision for the Future