Text transcript

Executive Roundtable: AI Product-Led Growth (AI-PLG)

Held February 11–13
Disclaimer: This transcript was created using AI
  • 123
    00:29:42.540 –> 00:29:48.159
    Julia Nimchinski: Yeah, we are shifting to our next session. Connie Kwan, welcome to the show

    124
    00:29:49.340 –> 00:29:57.689
Julia Nimchinski: Product-led growth and AI. Connie is the CEO of 4Storyteller and a former product leader at Blessing and Microsoft.

    125
    00:29:57.950 –> 00:29:59.109
Julia Nimchinski: How have you been?

    126
    00:30:00.770 –> 00:30:01.665
    Connie Kwan (4Storyteller): Great! How are you?

    127
    00:30:04.252 –> 00:30:13.059
Julia Nimchinski: We’re super excited. Also joining us is Kristen Berman. I read your newsletter,

    128
    00:30:13.380 –> 00:30:19.339
Julia Nimchinski: as do many thousands of other go-to-market leaders. Tell us more about it, shameless plug.

    129
    00:30:20.732 –> 00:30:36.109
Kristen Berman: Yeah, so I actually have a couple of newsletters. So: CEO and behavioral scientist at Irrational Labs. We understand user psychology and then build products that align with it, basically getting people to do anything.

    130
    00:30:36.110 –> 00:30:54.389
Kristen Berman: And with AI comes good and bad in driving us to take action. So Irrational Labs has a newsletter. We have a Substack, Product Teardowns. And then I was just featured in Kyle’s Growth Unhinged with a nice article on kind of how product leaders could think about positioning AI features.

    131
    00:30:57.380 –> 00:31:05.569
Julia Nimchinski: Amazing. Connie, what is, I don’t know, the number one product trend that you’re seeing now, and how is AI impacting it?

    132
    00:31:07.790 –> 00:31:09.929
Connie Kwan (4Storyteller): Number one product trend.

    133
    00:31:11.340 –> 00:31:13.350
    Connie Kwan (4Storyteller): I need to think about that one a little bit.

    134
    00:31:13.780 –> 00:31:15.770
    Connie Kwan (4Storyteller): I would say there is a

    135
    00:31:16.280 –> 00:31:23.389
Connie Kwan (4Storyteller): there’s a merging of the functions because of AI, so product, engineering, and design are merging

    136
    00:31:23.560 –> 00:31:40.739
Connie Kwan (4Storyteller): very heavily. Folks that are in product are able to code using Cursor, so they can put together prototypes really quickly, they can generate designs, and then vice versa for everybody else. So the future of how, you know, what we call the triad is going to evolve

    137
    00:31:40.890 –> 00:31:42.770
    Connie Kwan (4Storyteller): is going to be interesting.

    138
    00:31:45.730 –> 00:31:53.150
    Julia Nimchinski: I don’t want to steal the show here. Let’s just get started. Let’s dive into it. We can do a round of introductions, but we prefer not to.

    139
    00:31:53.260 –> 00:31:55.149
    Julia Nimchinski: The stage is yours, Connie.

    140
    00:31:55.830 –> 00:32:02.279
Connie Kwan (4Storyteller): Okay, sounds good. We don’t wanna do introductions, not even like a one-liner for folks?

    141
    00:32:02.280 –> 00:32:08.469
Julia Nimchinski: A one-liner is fine, but our folks love to just dig into the topic right away.

    142
    00:32:08.940 –> 00:32:21.439
Connie Kwan (4Storyteller): Yeah, sounds good. I think a little context is helpful. Let’s just go around the table here. I’ll go in my screen order. Maria’s the first one, and just give us the 60-second version.

    143
    00:32:21.810 –> 00:32:39.549
Maria Thomas: Great. Hi, everyone. I’m Maria Thomas. I’m a chief product officer, board member, and I’m a growth and AI enthusiast. So I embrace AI in product development, in our go-to-market motions, in personalization of onboarding, and I’m excited to be here.

    144
    00:32:40.030 –> 00:32:42.520
    Maria Thomas: I’ll popcorn over to Betsy.

    145
    00:32:43.000 –> 00:32:43.779
    Connie Kwan (4Storyteller): Sounds good.

    146
    00:32:44.810 –> 00:32:52.802
Betsie Hoyt: Hi, everyone. I’m a product management leader with 20 years of experience in SaaS, and that’s across a lot of different industries.

    147
    00:32:53.280 –> 00:33:05.800
Betsie Hoyt: And as a PM, I’m naturally curious and fundamentally believe in constant iteration. The only way that we can make the product and the processes better is being early adopters of emerging tech.

    148
    00:33:05.800 –> 00:33:23.609
Betsie Hoyt: And I freaking love AI, because it enables me to solve the problems that I’m facing and that my team’s facing. And I cannot wait to talk about all the practical applications I’ve been experimenting with and to learn about the things that this panel is doing. So let’s go.

    149
    00:33:24.540 –> 00:33:26.700
    Connie Kwan (4Storyteller): Sounds good. Kristen.

    150
    00:33:27.880 –> 00:33:52.490
Kristen Berman: Hello! I’m a behavioral scientist and CEO of Irrational Labs. We’ve worked with kind of the top tech companies, LinkedIn, Intuit, Microsoft, and then a fair amount of retail companies and healthcare companies, Optum, Walmart, to drive behavior change. And so we’re really thinking about the small details of life that get people to do anything, and then integrating that into the top of the funnel and the product. Excited to be here.

    151
    00:33:53.480 –> 00:33:54.730
    Connie Kwan (4Storyteller): Awesome, and Dan.

    152
    00:33:55.500 –> 00:34:25.500
Daniel Baddeley: Hey, everybody, Dan here, CEO and founder of Best Defense. I’ve spent most of my career in executive leadership positions, leading teams at Fortune 100 companies, and spent most of my time in development, operations, you know, DevSecOps, solutions architecture. We focus primarily on risk mitigation for end customers, compliance support, and being proactive with your alerts and incidents as they come up, to be able to drive that customer retention and drive that customer trust.

    153
    00:34:26.690 –> 00:34:53.076
Connie Kwan (4Storyteller): Awesome. Very nice to meet all of you. I’m Connie Kwan, and I help founders and executives unlock their inner storyteller. I’m working on 2 companies right now. One is 4Storyteller, where there’s a class on thought leadership and storytelling. And then I have a done-for-you company that I’m working on right now, called Guest Spark AI, using AI to do done-for-you thought leadership. And my background is in product as well, for about 20 years. So let’s dive in today:

    154
    00:34:53.650 –> 00:35:20.479
Connie Kwan (4Storyteller): trust and brand. So we’re gonna go through this flow where we’re gonna talk about product from the user journey. So top of funnel first: awareness and acquisition, through the sign-up experience, retention strategies, and then execution challenges in terms of using AI in the products, and all of it is being touched by AI. So I think that will be an interesting flow for us as product folks, to talk about it in that order. So first of all, let’s talk about

    155
    00:35:20.480 –> 00:35:38.200
Connie Kwan (4Storyteller): top-of-funnel engagement. Trust in brands is shifting more towards individuals. 82% of B2B buyers, according to Edelman in 2024, are more likely to engage with a company whose leadership team is visible online. So my question for the group is:

    156
    00:35:38.200 –> 00:35:55.990
Connie Kwan (4Storyteller): how can AI-driven content creation tools (and this is to talk about acquisition now, right) enhance the efforts for a brand to drive top-of-funnel engagement? And I’ll kick it off with Kristen here, who’s doing the behavioral science. So tell us more about what the data is showing right now.

    157
    00:35:56.500 –> 00:36:06.459
Kristen Berman: Yeah. Well, I mean, I think if you’ve been in kind of marketing or product-led growth, we know that there has been a trend to go from static landing pages, static content, to much more dynamic

    158
    00:36:06.470 –> 00:36:30.199
Kristen Berman: content. And you know, sometimes this looks like a personalized landing page based on channel, but in general it takes advantage of, you know, people being present-biased. They actually want to do the thing that they are motivated to do right now. In a perfect world, we want to learn about everything, we care about all the features. But actually, we care about the one thing that we’re searching for in this moment. And so, you know, the opportunity for AI

    159
    00:36:30.310 –> 00:36:56.149
Kristen Berman: content is to be more targeted to what people are actually looking for and actually want to accomplish in the moment, and that content can actually be more helpful to people. And obviously, kind of, you know, we’re looking for trust signals to say, is this true? We live in a very noisy world, we don’t have much time, our attention is a scarce resource, and so most of trust is built on these types of heuristics and credibility signals

    160
    00:36:56.150 –> 00:37:21.789
Kristen Berman: that you get. And I think that’s kind of one thing that AI content creators or tools need to really understand: people just don’t have that much attention. We have kids, we have work, we have our phones, and we’re looking for the real thing of trust. And those types of signals, even in an AI-generated world, don’t go away; they may be reinvented, but we’re looking for some sense of I’m not getting duped.
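
    A minimal sketch in Python of the channel-based personalization pattern Kristen describes: the visitor’s acquisition channel and in-the-moment intent select the landing-page copy, with a static default as the fallback. The channel names, copy, and pick_hero() helper are all hypothetical, not any particular vendor’s implementation.

        # Sketch: pick hero copy from the acquisition channel and current intent
        # instead of showing everyone the same static landing page.
        DEFAULT_HERO = {"headline": "One workspace for your go-to-market team", "cta": "Start free"}

        # Copy variants keyed by acquisition channel (e.g., parsed from utm_source).
        HERO_BY_CHANNEL = {
            "google_search": {"headline": "Comparing link managers? Start here.", "cta": "See the comparison"},
            "linkedin_ad": {"headline": "Turn every shared link into a branded touchpoint", "cta": "Start free"},
            "newsletter": {"headline": "Welcome, newsletter readers: your template is ready", "cta": "Open the template"},
        }

        def pick_hero(utm_source, search_terms):
            """Return hero copy based on how the visitor arrived."""
            # Intent expressed in the search query wins over the channel default,
            # because it reflects what the visitor wants to do right now.
            if search_terms and "pricing" in search_terms.lower():
                return {"headline": "Simple pricing, free to start", "cta": "See pricing"}
            return HERO_BY_CHANNEL.get(utm_source, DEFAULT_HERO)

        if __name__ == "__main__":
            print(pick_hero("linkedin_ad", None))
            print(pick_hero("google_search", "link shortener pricing"))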

    161
    00:37:24.390 –> 00:37:47.659
Connie Kwan (4Storyteller): Awesome. You mentioned earlier that there’s customization, and it’s trying to target the specific need of the user, right? People coming to the same tool might be using it for different reasons. So are you seeing clients create different landing pages, or different pricing pages, so, like, customizing all the way through the entire acquisition experience?

    162
    00:37:47.660 –> 00:38:10.039
Kristen Berman: You know, we work with some of the top tech clients, and we’re not seeing it yet. I would say the tools are not there, but the concept is there, and I trust that we’ll get there. You know, most of these companies are doing some sort of personalized landing page based on channel, based on ad, etc., and that is a big leap forward in conversion. But I have yet to see a company doing real dynamic

    163
    00:38:10.080 –> 00:38:19.529
Kristen Berman: pricing and landing page stuff, and I don’t doubt that that’s kind of where the tools and infrastructure will take us.

    164
    00:38:21.170 –> 00:38:24.169
    Kristen Berman: I don’t know if other people have different experiences. But we’re kind of.

    165
    00:38:24.540 –> 00:38:31.209
    Connie Kwan (4Storyteller): I’m just having Kristen kick us off, but feel free to chime in on any of these ones.

    166
    00:38:31.530 –> 00:39:00.270
Maria Thomas: Yeah, I was going to jump in here from the product-led lens specifically. Kristen, completely agree. The concept is there, that’s the dream, that’s what AI is going to empower for us, right? So ultimately, let’s take it all the way to user onboarding and product, right? The dream is that for every user, not just their landing page on the marketing side but all the way into product, it’s infinitely customized and personalized for them. Again, tools are not quite there, but

    167
    00:39:00.270 –> 00:39:19.490
Maria Thomas: I am very excited for that to happen. And you know, to a degree, we’ve all been doing this level of personalization already. But yeah, can’t wait to connect all the dots. And to whoever is building those tools out there on the infrastructure side: please build them faster. We can’t wait to implement them.

    168
    00:39:23.660 –> 00:39:27.939
Connie Kwan (4Storyteller): Awesome. Any other thoughts on this one? I’ll move on to the next question.

    169
    00:39:28.210 –> 00:39:50.500
Kristen Berman: Maybe I’ll add one contrarian fact, which is that as the world becomes easier, actually, I think people value things that take a little bit more time. So if you magically imagine going to a therapist, and they just give you advice immediately, you’d be like, I just don’t trust you, I haven’t said anything. And so in some ways this idea of giving our opinion is actually very helpful to trust.

    170
    00:39:50.570 –> 00:40:16.810
Kristen Berman: And this is contrarian to all, you know, the “friction is bad” sort of conversion flows, but it actually goes to increase the endowment that a user feels about the solution that they’re getting. And so I do worry that as magical experiences happen, tech teams will release them and be shocked that people don’t trust the advice, the recommendations, the personalization, because the user has not touched it.

    171
    00:40:17.670 –> 00:40:34.681
Connie Kwan (4Storyteller): Yeah, yeah, that’s a great segue. I actually wanted to ask the next question, which is, how has establishing individual thought leadership within the organization added to building trust with users? How is that trust being built now, especially with a lot of

    172
    00:40:35.290 –> 00:41:03.299
Connie Kwan (4Storyteller): generated content and data? There’s less trust there, and people are turning to: oh, I know this individual, I trust what they’re saying, and that’s how I’m going to attribute trust now. So how has that shifted this acquisition path in terms of almost word of mouth? Well, it’s almost like we’re going back one step and trying to associate trust through the network that we know. Maria, you worked at Buffer, you’re at Rebrandly; you want to kick us off on this discussion?

    173
    00:41:03.680 –> 00:41:15.799
Maria Thomas: Absolutely. So yes, authentic thought leadership is something that is highly effective, not only at the company level but also at the individual level. So let me break it down a little bit. So

    174
    00:41:15.870 –> 00:41:39.779
Maria Thomas: so Buffer is a well-known company, not only for its products, but also for its thought leadership in the area of transparency. For example, as a company, Buffer decided that it discloses all of its financials, employees’ salaries, every challenging business decision. And what does that have to do with social media marketing, which is what Buffer really builds?

    175
    00:41:39.910 –> 00:42:04.889
Maria Thomas: You know, it’s kind of 2 different topics. So one of my key pieces of advice, with AI or without, is for brands and companies to really choose a topic that connects to their truth and to their values, embrace it, and maybe don’t pick something that’s very much, I’ll use the American expression, an apple-pie choice. So, something like, don’t choose,

    176
    00:42:04.890 –> 00:42:19.659
Maria Thomas: maybe, oh, let’s make the Internet safe, right? Something that everyone would agree with. You know, choose something that’s a little bit more unique to you. But again, I stress that it may have something to do with your product and what you actually build,

    177
    00:42:19.660 –> 00:42:37.179
Maria Thomas: but it can very much be about how you operate, what values you believe in, what topics you really want to be leading in. So, in that sense, how does AI help companies and brands build that strong social presence?

    178
    00:42:37.180 –> 00:43:00.120
    Maria Thomas: Authenticity is key, but so is quantity. So as a growth leader, I stress to teams that yes, it’s about high quality, authentic content. But please don’t neglect the frequency of how often you share in your blogs, how frequently your blogs are going out, how often you publish on social!

    179
    00:43:00.270 –> 00:43:28.889
Maria Thomas: I had to push myself personally, as a Gen Xer, to become a thought leader and become much more visible on LinkedIn. And honestly, everyone, it really comes down to: just do it, do it often, don’t stress or worry too much that you’re going to make a mistake. Let it be a little bit controversial. And another thing that I drive within companies where I lead is to make sure that everyone on the team really embraces thought leadership. Engineers,

    180
    00:43:28.890 –> 00:43:36.249
Maria Thomas: build in public, right? Share what you are building today, you know, what challenges you face.

    181
    00:43:36.250 –> 00:44:01.340
Maria Thomas: All of that contributes to more people discovering your company and discovering you. It’s really a win-win; it helps your company and your product succeed. If you’re not comfortable speaking about yourself, talk about the team, talk about the wonderful people you work with. That’s a great one that’s easy for introverts to pursue. But as far as, again, the use of AI:

    182
    00:44:01.380 –> 00:44:18.750
Maria Thomas: AI in these cases is really great for ideation. So at Rebrandly we use AI in that way. We don’t have AI write our blog posts, and we don’t have it write our social posts, but it helps us just collect and solicit ideas and organize them.

    183
    00:44:18.750 –> 00:44:33.879
Maria Thomas: And then for those of us for whom English is not a native language, it does help with, you know, just pure copy finessing and streamlining. It saves time, like, pick a great title for me, and it helps, you know, automate this process as much as possible.

    184
    00:44:35.020 –> 00:44:38.619
    Connie Kwan (4Storyteller): That’s really good advice, Betsy. Go ahead.

    185
    00:44:38.620 –> 00:44:53.810
Betsie Hoyt: Yeah, I wanna add to what Maria was saying. I love that you said that about permission. It’s giving your team permission and helping, especially the introverts. I work with a lot of engineers,

    186
    00:44:53.810 –> 00:45:03.390
Betsie Hoyt: and they have fantastic ideas, but they don’t want to grab the microphone, so to speak. And I love the concept of

    187
    00:45:03.907 –> 00:45:18.389
Betsie Hoyt: having that hackathon or that pitch meeting and using AI for how we can do this faster. The idea of using it as a strategy partner, to sound

    188
    00:45:18.390 –> 00:45:35.239
Betsie Hoyt: better in how you’re writing your posts or how you’re articulating the information, it really helps to have that sounding board. And I found through my work with Match and with Trimble, I mean, you couldn’t get 2 more opposite ends of the spectrum as far as product,

    189
    00:45:35.280 –> 00:45:58.239
Betsie Hoyt: those types of permissions, of using the technology to get out there, to write the information and to talk about what’s going on. Because a lot of these teams have really great information, but they’re in these little funnels and pockets, saying, oh, I couldn’t possibly speak on behalf of the customer, I couldn’t possibly speak on behalf of the company. So

    190
    00:45:58.450 –> 00:46:03.939
Betsie Hoyt: I love that thought of giving everybody permission to be advocates.

    191
    00:46:03.940 –> 00:46:21.600
Daniel Baddeley: Oh, I feel so attacked, Betsy, with the introverted engineer comment. No, no, it’s very true, it’s very true. And I think it really also comes down to authenticity, like we were saying, right, being able to have full transparency, being able to add proper citations on material

    192
    00:46:21.690 –> 00:46:37.219
Daniel Baddeley: when it comes down to, right, hey, you know, AI is just generating all this random stuff. Well, it’s like, well, where did it even come from? Right? As people put more and more stuff into these LLMs, right, that goes into the training data, whether it’s accurate or not, and these things start to hallucinate. But being able to have, like, actual valid citations around your content,

    193
    00:46:37.390 –> 00:47:01.430
Daniel Baddeley: I think, is extremely important, right? And if you are going the route of generating out your media, your marketing material, maybe you’re subscribed to certain RSS feeds around security, right, to be able to inform your end customers that there were data breaches, right? Because it’s our data at the end of the day; we want to know when this is happening. I think it’s extremely important just to be able to let them know, like, hey, this is where the information is coming from, hey, it came directly from CISA or from the FBI,

    194
    00:47:01.590 –> 00:47:07.650
Daniel Baddeley: and sure, it might be an automated message post, but that’s you being proactive for your end customers, letting them know that something happened.

    195
    00:47:09.840 –> 00:47:18.079
Connie Kwan (4Storyteller): Thank you for that perspective, Dan. Awesome. Let’s shift gears to talking about more mid-funnel, so growth strategies.

    196
    00:47:18.620 –> 00:47:40.610
Connie Kwan (4Storyteller): According to OpenView, who did a report on this, PLG companies are growing 50% faster than traditional SaaS companies. And AI is helping by automating A/B testing, optimizing conversion funnels, and accelerating product discovery. So my question is, how do you measure the effectiveness of AI-driven growth initiatives in a PLG-motion company?

    197
    00:47:40.740 –> 00:47:44.079
Connie Kwan (4Storyteller): What are some of the KPIs that matter the most?

    198
    00:47:44.270 –> 00:47:50.170
    Connie Kwan (4Storyteller): and I’ll kick this question over to Betsy to get us started here.

    199
    00:47:50.970 –> 00:47:53.429
    Betsie Hoyt: Oh, giving me the mic. Okay.

    200
    00:47:53.880 –> 00:47:54.440
    Connie Kwan (4Storyteller): Yeah.

    201
    00:47:54.440 –> 00:48:03.049
Betsie Hoyt: So, I think anytime that you’re looking at your key performance indicators, you have to make sure:

    202
    00:48:03.050 –> 00:48:25.199
Betsie Hoyt: are you measuring the right thing? And the way I like to look at it is, first of all, look at what the challenge is that you’re trying to solve for. So in the case of, if you’re trying to have the onboarding go faster, what are your blockers? And I like to create the KPIs for the team of going, all right, we have a mountain of

    203
    00:48:25.200 –> 00:48:38.649
Betsie Hoyt: user guides that we need to update; let’s go ahead and put key performance indicators on the volume of the user guides we’re going to be updating and how quickly we can get those done.

    204
    00:48:38.750 –> 00:48:58.030
Betsie Hoyt: And then you get to the conclusion of going, all right, I’m going to need AI. I’m going to need to measure these and get them out faster and validate them. I think that’s one of the key points, the how am I going to validate it, to what Daniel was saying earlier:

    205
    00:48:58.410 –> 00:49:04.499
Betsie Hoyt: Are they available? Are they accurate? Where are my sources? And then getting them out and testing them.

    206
    00:49:08.710 –> 00:49:30.840
Maria Thomas: Yeah, I’ll jump in next, Connie, if that’s all right. So as a product leader, you know, part of my role is to evaluate our product and design operations and growth tech stack, so to speak, right? So it ultimately comes down to tool selection. And to answer your question on how do I measure

    207
    00:49:30.840 –> 00:49:50.529
Maria Thomas: incorporating AI into a successful growth operation, right? So let’s take an example of observability tools, observability tools such as Clarity, LogRocket, Smartlook. You know, there are many of them that do, maybe, session replay, so you can observe how users are getting through your funnel.

    208
    00:49:50.530 –> 00:50:06.589
Maria Thomas: I’ll use an example: I recently got a demo, and I’m very excited about implementing LogRocket. I’m not affiliated with them, but I genuinely am excited, and the reason is because it provides, just as all the other session replay tools do,

    209
    00:50:06.590 –> 00:50:30.559
Maria Thomas: but it provides AI summaries of what just happened. My growth team is better empowered; they just save time on analysis of where we have the biggest blockers in our funnel. They don’t have to go create charts and graphs, so their time just got freed up to do other, more valuable tasks, right? So any amount I can save my team time in analysis of

    210
    00:50:30.610 –> 00:50:49.990
Maria Thomas: understanding where, you know, we have friction in growth, or what’s the next greatest growth lever, I’m going to push them towards using that tool that empowers them more. That’s really what it fundamentally is: can I save my team time on that discovery? Same thing at

    211
    00:50:50.000 –> 00:51:16.870
Maria Thomas: Rebrandly: we are a marketing analytics tool, and we are implementing AI in a similar fashion, helping our customers get to insights faster, right? So let’s say we provide analytics; rather than having the customer review a long dashboard, we’re putting insights right at the top of the page. So I think that’s ultimately what it comes down to: saving time. So it’s automation on steroids.
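
    For context on the AI summaries Maria mentions, here is a rough Python sketch of the general pattern: aggregate funnel counts and hand them to an LLM for a plain-language read on where users drop off. This is not LogRocket’s or Rebrandly’s implementation; the funnel numbers are invented, and the OpenAI SDK is used only as an example client.

        # Rough sketch of the "AI summary of funnel friction" pattern: invented
        # funnel counts are summarized by an LLM instead of hand-built charts.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Aggregated step counts, e.g. exported from a session-replay or analytics tool.
        funnel = [
            {"step": "landing_page", "users": 10_000},
            {"step": "signup_started", "users": 3_200},
            {"step": "signup_completed", "users": 2_100},
            {"step": "created_first_link", "users": 900},
            {"step": "invited_teammate", "users": 140},
        ]

        prompt = (
            "You are a growth analyst. Given these funnel step counts, summarize in "
            "3 bullet points where the biggest drop-offs are and what to investigate "
            f"first:\n{funnel}"
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)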

    212
    00:51:17.690 –> 00:51:46.970
Daniel Baddeley: Yeah, I do believe in being able to get that feedback loop executed a bit faster, right? Get your customer feedback, figure out, you know, if there’s market affirmation for maybe even solutions or features that they’re requesting, or problems that they’re having within the application, being able to do some basic-level complexity analysis, figure out what the weight and value of that is for your organization, to be able to put that into your product roadmap, right, to be able to get that executed, get it back in the customer’s hands, make their experience better. I think that there is a lot of value in that.

    213
    00:51:47.960 –> 00:51:48.440
Connie Kwan (4Storyteller): And then,

    214
    00:51:48.440 –> 00:51:55.560
Connie Kwan (4Storyteller): is everyone seeing that you’re using off-the-shelf tools for this? Or are you doing it all in-house, because off-the-shelf tools aren’t available yet?

    215
    00:51:56.150 –> 00:52:01.370
Connie Kwan (4Storyteller): Using AI-driven analytics when you’re analyzing?

    216
    00:52:02.030 –> 00:52:29.279
Daniel Baddeley: I haven’t seen anything for, like, if you’re using, let’s say you’re using HubSpot, right, and you have your sales engineers on the phone, or you’re talking with customers trying to get their feedback, maybe seeing what deltas exist between your product and what their holistic solution is, right. I haven’t really seen too many things out there about getting customer feedback and being able to do an automated, I don’t want to call it a roadmap analysis, for what it would look like implementing the feature. But

    217
    00:52:29.310 –> 00:52:44.079
    Daniel Baddeley: generally it’s just like, Hey, how many times have people brought this up on the phone call and then saying, like, Okay, well, we’ve heard about this thing 10 times. We have to go have internal discussions about the feature, you know. Let’s try to figure out what the complexity is with the team. Let’s figure out what the value is.

    218
    00:52:44.441 –> 00:53:09.250
Daniel Baddeley: I haven’t really seen too much in terms of being more proactive about getting these things into the system, getting them analyzed, creating, you know, maybe auto-generating, you know, epics or stories in Jira and letting you see what that might look like, right, from a more, like, product-level perspective, and then being able to use those to speak with your engineering team and maybe make the end goal a little bit more clear on a user story scenario.

    219
    00:53:09.260 –> 00:53:22.199
Daniel Baddeley: I haven’t really seen too much for that. I think that if that does exist, I’d love to know about that tool. Otherwise, I think that’s somewhere there’s a hole in the market, and someone could plug that and make some money off it.
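
    A rough Python sketch of the mechanical half of what Dan is describing, under stated assumptions: the feedback snippets, tracked themes, threshold, and Jira project key are hypothetical; Jira’s standard POST /rest/api/2/issue endpoint is used to create draft stories; and a human still reviews anything it drafts.

        # Sketch of the "feedback -> draft Jira stories" idea: count how often a
        # tracked theme shows up in call notes and, past a threshold, draft a story.
        from collections import Counter

        import requests

        JIRA_BASE = "https://your-domain.atlassian.net"   # hypothetical site
        AUTH = ("bot@example.com", "api-token")            # hypothetical credentials

        feedback = [
            "Customer asked for SSO again on today's call",
            "They can't roll this out without SSO",
            "Wants a CSV export of link analytics",
            "SSO is a blocker for the security review",
        ]

        themes = ["sso", "csv export"]
        counts = Counter(
            theme for note in feedback for theme in themes if theme in note.lower()
        )

        for theme, count in counts.items():
            if count < 3:   # arbitrary threshold before it becomes a draft story
                continue
            payload = {
                "fields": {
                    "project": {"key": "PROD"},
                    "issuetype": {"name": "Story"},
                    "summary": f"Customer-requested: {theme} (mentioned {count} times)",
                    "description": "Drafted automatically from call notes; needs PM review.",
                }
            }
            resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
            print(theme, resp.status_code)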

    220
    00:53:22.520 –> 00:53:32.949
Betsie Hoyt: Right, same, Daniel. I think, though, you hit the nail on the head: it’s determining what the patterns are, because

    221
    00:53:33.040 –> 00:53:59.249
Betsie Hoyt: when you’re looking at things from all the different channels, what’s coming in from customer service, what’s coming in from sales, and trying to collate it to say what’s the stack ranking, and then look at the ROI to build that roadmap, AI really is that power tool. So I don’t think we should undercut that and say that that’s not valuable. But yeah, if somebody can write out the system to do that, sign me up. I want to be your first guinea pig for that, because that’s golden.

    222
    00:54:00.130 –> 00:54:25.140
Kristen Berman: I think there are these aggregators out there, but to do it does take a lot of onboarding, because you have to integrate all your systems. And so the uptake and the viralness of this is very slow, because to get it to actually work, you need all the systems in there. One thing that our team is thinking about now, because we work with all these companies integrating and launching AI features and trying to monetize them, is the value proposition of saving time versus doing more.

    223
    00:54:25.230 –> 00:54:48.580
Kristen Berman: So I’m skeptical of the idea that AI saves people time. It is, for sure, true, but if it’s saving me 5 minutes on a task, and it’s doing it 5 times a day, every time I save that 5 minutes I don’t fully understand that it adds up to a week of my life at the end of the year. It’s very hard: saving time is an abstract future concept, versus doing more, where I can believe I’m a better person, my team can be.

    224
    00:54:48.580 –> 00:54:55.460
Kristen Berman: My team has mediocre employees; of course they can be better. And so we have this kind of more concrete understanding of doing more

    225
    00:54:55.824 –> 00:55:13.699
Kristen Berman: or being better. And so I think some of the AI features right now, and even with internal adoption, you know, internal adoption can be low, especially at these large companies, because they’re defending their fiefdom. You know, if you make qual research summaries easier, the qual research

    226
    00:55:13.730 –> 00:55:22.199
Kristen Berman: leader doesn’t have a budget to hire vendors to go do qual research, and they’ve slashed their budget and they’ve slashed their headcount. Not ideal. But doing more:

    227
    00:55:22.330 –> 00:55:32.259
Kristen Berman: now, all of a sudden, you’ve made this person more productive. And so I think there should be a reframe here, on this “do more,” to get less cognitive dissonance and more excitement.

    228
    00:55:33.850 –> 00:55:34.570
    Maria Thomas: Moment.

    229
    00:55:37.010 –> 00:55:41.379
Connie Kwan (4Storyteller): Have you seen AI influence virality in PLG products?

    230
    00:55:41.800 –> 00:55:46.740
Connie Kwan (4Storyteller): Are there specific features or models that are driving organic growth?

    231
    00:55:47.760 –> 00:56:02.390
Maria Thomas: Yeah, yeah, 100%. I’ll jump in on that, since the products I typically oversee, they have a really large freemium model, typically just a broad, horizontal footprint.

    232
    00:56:02.390 –> 00:56:24.999
Maria Thomas: So Rebrandly is a type of product like that; Buffer, Bitly, and Sitely all had some similar approach. And there were these gifts of the ability to actually drive virality. So let’s say via your product you are publishing your social posts: each social post can bring virality back to your product.

    233
    00:56:25.000 –> 00:56:50.270
Maria Thomas: You know, short links are branded and bring virality back to your product. And there are a lot of things you can do to basically put your virality also on steroids, right? So you’re picking up a theme here. The principles of virality are still the same: you need to introduce your brand out into the wild and reach as many end users and prospects as possible. Now, can you make that

    234
    00:56:50.270 –> 00:57:14.759
Maria Thomas: very efficient? And again, back to personalization. So at Rebrandly, as we speak, the team is developing an AI-driven feature that allows customizing metadata for short links automatically for our customers. So they would use our API, and then automatically, based on content and descriptors and some inputs

    235
    00:57:14.760 –> 00:57:25.060
Maria Thomas: from the customer, the metadata of the links that go out will promote the brand of the customer, and also can help with virality for ourselves. So

    236
    00:57:25.398 –> 00:57:47.729
Maria Thomas: so again, that’s a use of AI within that flow. But the principles of virality are still the same, right? Like, it’s being able to have a watermark or, you know, a “powered by [insert your brand]” ability, or just distribute your product visibility to as many people as possible. It would be great to hear if there are other examples of what folks are thinking about.
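
    Since the Rebrandly feature Maria describes is still in development, the Python sketch below is purely illustrative: an LLM drafts share metadata in the customer’s voice, and the short link is then created with that metadata through a hypothetical link API. The endpoint, field names, and "meta" block are not Rebrandly’s actual API; the OpenAI SDK is just one example client.

        # Illustrative flow: draft link metadata with an LLM, attach it when
        # creating a branded short link via a hypothetical link API.
        import requests
        from openai import OpenAI

        llm = OpenAI()

        destination = "https://example.com/spring-launch"
        customer_brand = "Acme Outdoors"

        # 1. Ask the model for share-friendly metadata in the customer's voice.
        suggestion = llm.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    f"Write a social share title (under 60 chars) and description "
                    f"(under 150 chars) for {destination}, in the voice of {customer_brand}."
                ),
            }],
        ).choices[0].message.content

        # 2. Create the short link with that metadata (endpoint and fields are made up).
        resp = requests.post(
            "https://api.example-links.com/v1/links",      # hypothetical endpoint
            headers={"apikey": "YOUR_KEY"},
            json={
                "destination": destination,
                "domain": "links.acmeoutdoors.com",         # customer's branded domain
                "meta": {"suggested": suggestion},           # hypothetical field
            },
        )
        print(resp.status_code)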

    237
    00:57:49.460 –> 00:57:55.240
Connie Kwan (4Storyteller): It sounds like the mechanics of virality are similar, but you can do it faster now with AI.

    238
    00:57:55.410 –> 00:58:03.609
Maria Thomas: You can do it faster, smarter, right? So again, because you can give AI a lot more inputs, right, rather than,

    239
    00:58:04.088 –> 00:58:25.981
Maria Thomas: you know, just “promote my brand.” You can say, like, on this topic, or in this context, or use these inputs versus those inputs. It can be everything from time of day, to day of the week, to what’s the content behind, you know, whatever piece of value you’re shipping; in our case, it’s a link. So

    240
    00:58:26.675 –> 00:58:37.469
Maria Thomas: so yeah, I think it can just simply absorb more and make the ultimate viral deliverable smarter and faster for you and your customers.

    241
    00:58:38.060 –> 00:58:38.650
    Connie Kwan (4Storyteller): Yeah.

    242
    00:58:38.915 –> 00:58:39.709
    Kristen Berman: Would really add.

    243
    00:58:39.710 –> 00:58:40.990
    Connie Kwan (4Storyteller): And then go ahead.

    244
    00:58:40.990 –> 00:58:41.889
    Kristen Berman: Oh, yeah.

    245
    00:58:41.960 –> 00:59:00.029
Kristen Berman: I think that the virality of distribution and delivery is clear; AI can, like, help with that. I’m going to compare AI to Botox, in that if women get it, they don’t talk about it, and that makes it hard for it to go viral, because, like, you’re like, oh, I did this AI content,

    246
    00:59:00.030 –> 00:59:19.900
Kristen Berman: you actually don’t want to brag to your followers that you did that; you don’t want to share that, because of the trust and credibility thing that we talked about. And so I think we still have a little bit of a ways to go. But, by the way, this is shared in closed communities. So a group of friends will share that they get Botox; a group of founders will share that they’re using AI-generated content.

    247
    00:59:19.900 –> 00:59:33.230
Kristen Berman: So virality becomes less of a spray-and-pray sort of thing, when you’re saying I use AI to improve my business operations, and more of a closed-group thing, where you’re sharing how you are smarter, better, stronger, and helping your,

    248
    00:59:33.230 –> 00:59:44.239
Kristen Berman: you know, peers improve. But I think, for sure, delivery and distribution people are fine to talk about a little bit more than, actually, like, content creation or the core of your product.

    249
    00:59:44.350 –> 00:59:50.580
Kristen Berman: People have the intuition that this is not yet socially desirable from a user perspective.

    250
    00:59:51.010 –> 00:59:58.689
Kristen Berman: You know, if you look at people in Kansas, you know, you’re like, they don’t get that AI is going to help them. They’re skeptical.

    251
    00:59:59.080 –> 01:00:20.140
Maria Thomas: 100%, and to be clear, like, the use case I was describing would be more, you know, again it’s back to do more or save time for the customer, that they don’t need to manually enter all the inputs, that these inputs are already proposed and generated for them, for example, right? And to be clear, like, I’m definitely not suggesting that

    252
    01:00:20.509 –> 01:00:42.559
Maria Thomas: we auto-generate everything for the customer or ourselves, you know, using AI. But it is an interesting controversy around, you know, should we be closed-doors about this, or quiet about it? I don’t know. I think that, again, as an AI enthusiast, I share widely everything we use and everything we do, and I think it only helps.

    253
    01:00:42.600 –> 01:00:43.659
    Maria Thomas: Was it option? Yeah.

    254
    01:00:43.660 –> 01:01:05.240
Connie Kwan (4Storyteller): It’s a little bit of a secret weapon in the back there. Very good. We have a question coming in from the audience, and they want to know: will PLG go away with AI bots buying and selling, basically just bots talking to each other to make the purchases? I tell AI I want this dream of mine, and then it goes and figures it out for me.

    255
    01:01:06.590 –> 01:01:07.690
Connie Kwan (4Storyteller): Thoughts?

    256
    01:01:07.960 –> 01:01:12.529
Daniel Baddeley: I think that that’s extremely dangerous, for the same reason that

    257
    01:01:12.680 –> 01:01:33.059
Daniel Baddeley: you still want human-in-the-loop elements when it comes to, like, remediation. Like, let’s say a cyber incident happens, and you have this system, and you’ve got your automation in place, and it determines that the best-case scenario is to just take your database completely offline because there’s a breach going on. Then you have a complete global outage, right? That’s just an example.

    258
    01:01:33.150 –> 01:01:47.430
Daniel Baddeley: So, like, let’s say, you know, the system that you have, you’re trying some new stuff, you’ve got agentic AI, you’re trying to have it learn through induction, and it makes the wrong assumption about what your business needs, and then it places a purchase order for half a million dollars.

    259
    01:01:47.950 –> 01:01:53.879
    Daniel Baddeley: I think that, you know that could be fairly dangerous. I’m not saying that it couldn’t ever happen.

    260
    01:01:53.950 –> 01:02:18.910
Daniel Baddeley: but, you know, you’d have to train these things reasonably well, make sure that, just like with everything else, you have alarms, that you can set up thresholds, making sure that it doesn’t bankrupt your business. I think, at the end of the day, though, for quite a long time, what you’re really gonna get is an AI system that’s gonna do an analysis based off of anything that you’re missing for your business and then just create, like, a PO sheet that’s recommended

    261
    01:02:18.910 –> 01:02:31.159
Daniel Baddeley: for someone to physically go and look at and then sign off on, click the button, and then it can go out and place the orders. But I don’t know if people are gonna trust this thing to just run wild. People have seen their AWS bills, and it’s not fun.
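
    A toy Python sketch of the guardrail Dan describes: the agent can draft purchase orders freely, but anything above a configured threshold is queued for human sign-off instead of being placed automatically. The threshold and order fields are made up for illustration.

        # Toy human-in-the-loop spending guard: small orders go through, large
        # ones wait for a person to sign off.
        from dataclasses import dataclass

        AUTO_APPROVE_LIMIT = 5_000.00  # dollars; anything above this needs a human

        @dataclass
        class PurchaseOrder:
            vendor: str
            description: str
            amount: float

        def route_order(order: PurchaseOrder) -> str:
            """Decide whether an agent-drafted PO can be placed or must wait for review."""
            if order.amount <= AUTO_APPROVE_LIMIT:
                return "placed"                # small orders go straight through
            return "pending_human_review"      # large orders wait for sign-off

        if __name__ == "__main__":
            print(route_order(PurchaseOrder("CloudCo", "extra storage", 1_200.00)))
            print(route_order(PurchaseOrder("CloudCo", "GPU reservation", 500_000.00)))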

    262
    01:02:32.020 –> 01:02:43.060
Maria Thomas: Yeah, yeah, I was gonna say, case in point this week: considering various tools, like buying a tool for our product operations stack,

    263
    01:02:43.150 –> 01:03:04.410
Maria Thomas: or considering opening up a new channel, case in point, as a buyer I would spend a lot of time in Anthropic’s Claude, ask it all the questions, you know, gather as much research as possible. And I’m having 2 calls this week with people who’ve actually used it. So that authentic feedback from

    264
    01:03:04.630 –> 01:03:21.880
Maria Thomas: another person who actually implemented that system already is of much higher value than AI telling me what product I should buy. So I think that the human element will definitely be here.

    265
    01:03:21.900 –> 01:03:45.180
Maria Thomas: I think it may be helpful if AI, like, instantly connected me to people already in my network who may already have researched the same tool. So I think that would be neat. And another thing I’ll say is how AI has already completely disrupted SEO and SEM. Like, if anyone here is on go-to-market teams and watching the trends,

    266
    01:03:45.180 –> 01:03:58.910
Maria Thomas: it’s here, right? That disruption and revolution is happening. It’s not a catastrophe, it’s not the end of the world; it’s that we now have a new channel, as go-to-market and growth leaders, to go pursue, which is through LLMs.

    267
    01:04:01.670 –> 01:04:08.929
Connie Kwan (4Storyteller): You want to share a tidbit there about how you get yourself ranked, quote unquote ranked, in the LLMs so people can find you?

    268
    01:04:09.210 –> 01:04:13.499
Maria Thomas: I don’t. It’s like Botox. If I do, I would keep it quiet.

    269
    01:04:15.750 –> 01:04:19.149
Connie Kwan (4Storyteller): Gotta make it worth the listeners’ time, right?

    270
    01:04:20.000 –> 01:04:21.600
    Connie Kwan (4Storyteller): Share little secrets.

    271
    01:04:22.070 –> 01:04:36.570
Kristen Berman: Just a tidbit on that. I think, you know, it took us a long time to go from walking into the bank with a teller, to ATMs, to mobile banking. It took a very long time. And still at Wells Fargo, or Credit Human, you have to, like,

    272
    01:04:36.660 –> 01:04:50.000
Kristen Berman: go to a branch to open an account. Trust takes a long time to build. So when we’re talking about this idea that AI will purchase for us, I think we could probably reasonably say that’s a great guess. The question is, like, is this in 20 years? Like, it took that long

    273
    01:04:50.010 –> 01:05:11.009
Kristen Berman: to go from tellers to, you know, like, Wealthfront moving my money for me. And now I’m happy with Wealthfront moving my money for me, but that couldn’t have happened in the beginning. And so time does help, trust signals help, and credibility helps. And so I think we can all have that vision; it’s almost like we’re too ahead of the curve right now, and prediction is obviously hard, we’re usually wrong.

    274
    01:05:11.010 –> 01:05:35.739
Kristen Berman: But the idea of, kind of, AI purchasing something for me is likely better than a company advertising to me and me clicking on the ad. We know that their incentives are off. Whereas I train my AI, I’ve contributed to those incentives, and I’m more in line with that. The world we’re living in now is like somebody telling me what to buy based on their incentives. Like, I think we’ll look back and be like,

    275
    01:05:35.760 –> 01:05:50.579
    Kristen Berman: what a wild thing that we paid attention to an Instagram ad telling me to buy a sweater or software, because obviously that was all rigged. Now, you’re actually looking at signals that may align with your preferences. So that’s my optimistic and yet long term view.

    276
    01:05:52.230 –> 01:05:57.979
Connie Kwan (4Storyteller): Well, that’s a great segue into the whole self-serve aspect and the execution challenges around that.

    277
    01:05:58.254 –> 01:05:59.349
    Maria Thomas: I did. I did.

    278
    01:05:59.350 –> 01:06:00.390
    Connie Kwan (4Storyteller): Can I add something.

    279
    01:06:00.390 –> 01:06:09.219
Maria Thomas: Yeah, I did have a late-breaking tidbit that I will add on how to get ranked and picked up by LLMs. So it’s actually really, really simple and straightforward.

    280
    01:06:09.946 –> 01:06:24.100
Maria Thomas: It is your content. It is still yours. Like, basically, think about quality and accuracy and clear descriptors of the value of your product or the services that you provide.

    281
    01:06:24.170 –> 01:06:46.280
Maria Thomas: And just look at it through the lens of an LLM, like, reading and analyzing it, just in the same way as Google is still doing for SEO and SEM, right? So, help centers, everyone: please pay close attention to what you are describing as far as your features and functionality in help centers, because that’s

    282
    01:06:46.390 –> 01:07:09.660
Maria Thomas: deep, high-quality information about what your product does and doesn’t do, and make sure that that’s up to date, accurate, etc., because, again, LLMs are only as smart as the information that you yourself are sharing. So that will be my one practical assist for those interested in being picked up by LLMs.

    283
    01:07:11.500 –> 01:07:13.659
    Connie Kwan (4Storyteller): Thank you. Thank you for sharing that.

    284
    01:07:14.490 –> 01:07:25.320
Connie Kwan (4Storyteller): My question is: so PLG relies on a self-serve experience. The whole point is, try this thing, see if it fits for you, it’s free for a certain amount of time. And

    285
    01:07:25.660 –> 01:07:39.210
Connie Kwan (4Storyteller): here comes AI trying to be a middle person, which we just discussed, right? Will it buy for you? How does that disrupt this self-serve model? Are there challenges in how you integrate AI into the self-serve model, and how do you overcome them?

    286
    01:07:39.500 –> 01:07:41.639
    Connie Kwan (4Storyteller): I’m going to kick this over to Betsy.

    287
    01:07:42.940 –> 01:07:58.380
Betsie Hoyt: Great, great question. I think it’s more of, how does AI serve this? Again, it’s not the gloom and doom, or the “we’re going to have these robot overlords making all of these decisions.” It’s

    288
    01:07:58.420 –> 01:08:28.279
Betsie Hoyt: enabling that onboarding to happen faster and more efficiently, especially for a lot of us who are working in global markets, as mentioned before, with users who aren’t native speakers of whatever language the product is based in. And so it really starts with, I think, handling those objections, and it’s 2 parts: is the information accurate for that self-serve ability, and will it be efficient? And by efficient, I mean,

    289
    01:08:28.410 –> 01:08:42.140
Betsie Hoyt: how do I make it less awkward and irritating? So in a couple of instances, we’ve been implementing low-code solutions and having self-guided implementation, and it really gets to

    290
    01:08:42.140 –> 01:08:50.350
Betsie Hoyt: understanding: we’re not trying to be a human, but trying to have those materials that are easy to understand and quickly available.

    291
    01:08:50.350 –> 01:09:12.989
Betsie Hoyt: And with that comes an immense amount of work, and I think where AI really helps is in collating all of that information and helping to streamline it. One of the best uses, I think, of AI is taking those documents and making them more concise, helping you to get updates to them, particularly as engineering is making updates,

    292
    01:09:12.990 –> 01:09:41.679
Betsie Hoyt: making sure you’re then looking at the correlating documents, and whether you’re using Pendo or something else, that those user guides are then updated. And the key, above all things, is to validate. I have not seen any replacement for the human brain and eyes to say, do I test this, is this correct? And having that validation layer, I think, then helps again get over that objection handling.
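
    A compressed Python sketch of the documentation loop Betsie describes: combine the current guide with engineering’s changelog, ask an LLM for a proposed revision, and leave validation to a human. The guide text and changelog are invented, and the OpenAI SDK is just one example client.

        # Sketch: propose a user-guide update from a changelog; a human still
        # reviews and tests the draft before it is published.
        from openai import OpenAI

        client = OpenAI()

        current_guide = "To share a report, click Export and choose PDF."
        changelog = "v2.4: Export menu renamed to Share; added CSV and PNG formats."

        draft = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    "Update this user guide so it matches the changelog. Keep it concise "
                    f"and flag anything you are unsure about.\n\nGuide:\n{current_guide}\n\n"
                    f"Changelog:\n{changelog}"
                ),
            }],
        ).choices[0].message.content

        print(draft)  # proposal only; validated by a person before shipping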

    293
    01:09:42.319 –> 01:10:11.209
Connie Kwan (4Storyteller): Betsy, you’ve been designing a lot of self-serve workflows at Red Team Software. And I’m curious: this seems like a lot of help with the documentation piece and making it more accessible. In the classic way, for self-serve there’s a lot of walkthrough, right? You can install a walkthrough using a third-party product, or you can build your own walkthrough. But the walkthrough is the golden thing that you must have in PLG. Have you seen that being disrupted? Are you doing your own walkthroughs? Are you using third party? Is AI part of it? Tell us more about that.

    294
    01:10:11.580 –> 01:10:23.719
Betsie Hoyt: So definitely using third party. So again, looking at those low-code solutions to say, okay, do I want to do a guided tour, and giving the customer that experience to say,

    295
    01:10:23.720 –> 01:10:42.490
Betsie Hoyt: I want you to show me, point me in the direction of everything that’s new. I think it’s more of a hybrid at this point, of matching those low-code tools with the materials that you’ve done in AI. I haven’t seen AI come in and specifically be able to build all of those things.

    296
    01:10:42.801 –> 01:10:47.469
Betsie Hoyt: It also gets, I think, Connie, to being able to implement that faster.

    297
    01:10:47.570 –> 01:11:10.029
Betsie Hoyt: Why recode something that somebody’s already doing really well? Why don’t we make it complementary to get to that faster? And then, now I have the analytics so I can see what the click-through rates are, do I see a drop-off in the happy path, information telling me, you know, where it is, and then going back and resolving those solutions.

    298
    01:11:11.690 –> 01:11:18.421
Connie Kwan (4Storyteller): Got it. Sounds like we’re not quite at the point where AI just anticipates what your next walkthrough step should be and gives it to you.

    299
    01:11:19.180 –> 01:11:19.570
    Connie Kwan (4Storyteller): And so.

    300
    01:11:19.570 –> 01:11:42.489
Betsie Hoyt: You know, as a human, I can’t. I mean, there are too many; we’re all multifaceted. I mean, you know, I can’t predict that. I know computers will surpass me in the ability to predict that. But I think, as humans, we’re too complex, and being able to do that would be amazing, but I don’t think we’re anywhere close to that.

    301
    01:11:44.190 –> 01:11:44.740
    Connie Kwan (4Storyteller): Yeah.

    302
    01:11:44.740 –> 01:11:47.870
Kristen Berman: Yeah, I mean, that is kind of what Irrational Labs does.

    303
    01:11:47.870 –> 01:12:15.729
Kristen Berman: Oh, sorry. That’s what Irrational Labs does: we think about, like, what are the psychologies underlying decision-making, and then we build flows that align with those psychologies. And then we test to see if we’re right, and many times we’re not, and we’re still the experts. But many times we are, and we get the incremental lift. And kind of understanding the pattern of human behavior is critical to drive, you know, like, improvements and changes to these things. So

    304
    01:12:15.730 –> 01:12:24.260
    Kristen Berman: it’s likely that AI will start to understand the pattern of human behavior in the future. But I think right now it is extremely complex to figure out

    305
    01:12:24.380 –> 01:12:29.760
Kristen Berman: for new workflows, if you haven’t trained it on something, what you should be building.

    306
    01:12:30.940 –> 01:12:53.840
Maria Thomas: Yeah, yeah, I wanted to also chime in with a plug for conversational AI and chatbots in our self-serve experience, right? So, a great use of existing technology; we don’t need to wait. So what is self-serve? Self-serve means that you don’t need to write a support ticket, you do not need to get on a demo call with the sales team to understand how something works.

    307
    01:12:54.232 –> 01:13:22.300
Maria Thomas: Kristen, I expect a challenge from you as a psychologist. But here’s how we make decisions on my product teams, at Rebrandly and before: you know, we put ourselves in the shoes of an end user. And so let’s say I do have a support ticket, right, that I need to file, like, something rather complicated, you know, something that the AI chatbot that, of course, we already built into the product can’t answer. As an end user, I would love to get an instant response, even if it’s

    308
    01:13:22.670 –> 01:13:51.869
Maria Thomas: quite wrong or a little bit wrong. But giving me, like, hey, I think this is how your problem can be solved, and I get that instantly, is more valuable, I think, to the end user than having to wait for a perfectly polished response from a human. So those are the types of trade-offs that we’re starting to really dig into and understand and analyze, and again, through data, right? Like, what’s ultimately better, and where did the customer have the better experience:

    309
    01:13:51.980 –> 01:14:14.500
Maria Thomas: by talking to the sales team, who were able to explain everything to them in great detail and fine-tune and customize a demo? Or can an AI chatbot quickly answer some questions that can immediately unblock me to purchase or move on? So I’m curious if that’s controversial, Kristen, from a psychology standpoint. Again, like,

    310
    01:14:14.690 –> 01:14:24.570
Maria Thomas: the instant gratification of getting maybe an imperfect answer versus waiting for a perfect answer polished by a human.

    311
    01:14:24.950 –> 01:14:47.910
Kristen Berman: Yeah, I know we’re probably running out of time, so I’ll just kind of say, I think in general speed matters, immediate gratification matters. People tend to remember negative experiences. So the thing I would just worry about is, like, the memory recall is just much higher if it doesn’t work. And so that’s kind of why, with the downside risk, people are nervous, and it’s true, right? If you mess up,

    312
    01:14:47.930 –> 01:15:03.850
Kristen Berman: they remember, and they don’t remember a fast answer, unfortunately. So I think these are reasonable trade-offs to make, and in a business context they should be made. It’s just that the downside opportunity cost could be something to think about.

    313
    01:15:05.620 –> 01:15:15.000
Connie Kwan (4Storyteller): It sounds like you made some measurement comparison, Maria, if I didn’t hear it wrong, of comparing fast-and-wrong versus slow-and-right answers.

    314
    01:15:15.000 –> 01:15:35.282
Maria Thomas: 100%, yeah. So Help Scout, again, not affiliated, but it’s a Zendesk alternative that provides, I think Zendesk provides it too, for instance, it provides, like, instant replies, right? And, you know, with a clear disclaimer that this is, you know, AI-generated, but here’s a quick response, we’ll get back to you.

    315
    01:15:35.610 –> 01:15:45.190
Maria Thomas: So we measured the success of that at Buffer, and it was successful, and we tested it, you know, for a long time, and then ultimately rolled it out. So

    316
    01:15:45.693 –> 01:15:46.699
    Maria Thomas: ultimately allow.

    317
    01:15:46.700 –> 01:15:57.260
Connie Kwan (4Storyteller): But how accurate was it? Like, you know, the negative experience that Kristen is talking about, and you’re saying it’s okay that there is. Is it like 0.1% of the time, 5% of the time? Like, what’s the tolerance?

    318
    01:15:57.420 –> 01:16:17.179
Maria Thomas: Sure, so I don’t remember the exact, the perfect comparison. But we did measure using tNPS, so transactional NPS: was the customer satisfied, and did that effectively close their ticket, right? So by just comparing the 2 approaches, you know, we found that the instant reply ultimately was helping

    319
    01:16:17.180 –> 01:16:34.250
Maria Thomas: us not only close tickets faster, but also have a satisfied customer. So you need to measure both: not just efficiency, but also, to Kristen’s point, did we actually answer their question, and was it a high-quality response? So you need to measure both.
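
    For readers unfamiliar with the metric Maria mentions, transactional NPS is the share of promoters (scores 9-10) minus the share of detractors (scores 0-6), surveyed right after the interaction. The Python sketch below compares two cohorts the way she describes; all scores are invented.

        # Transactional NPS comparison: survey right after the ticket closes,
        # then compare the instant-AI-reply cohort to the human-only cohort.
        def tnps(scores):
            """Percent promoters (9-10) minus percent detractors (0-6) on a 0-10 survey."""
            promoters = sum(1 for s in scores if s >= 9)
            detractors = sum(1 for s in scores if s <= 6)
            return 100 * (promoters - detractors) / len(scores)

        instant_ai_reply = [10, 9, 8, 9, 7, 10, 6, 9]   # got the instant AI draft
        human_only_reply = [9, 8, 7, 6, 9, 5, 8, 7]     # waited for a human

        print(f"AI-assisted cohort tNPS: {tnps(instant_ai_reply):+.0f}")
        print(f"Human-only cohort tNPS:  {tnps(human_only_reply):+.0f}")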

    320
    01:16:36.890 –> 01:16:49.669
Connie Kwan (4Storyteller): I’ve certainly had the experience of asking, and then it went down a completely wrong track, which I caught, but if I hadn’t, it would have wasted a bunch of time, and it would have been a negative experience. I wouldn’t want to go down that path again, right, with that product, if it was a product. So

    321
    01:16:49.840 –> 01:16:56.000
    Connie Kwan (4Storyteller): I can definitely see that it can go really negative. But I hear your point of measuring both.

    322
    01:16:56.130 –> 01:17:21.730
    Connie Kwan (4Storyteller): Cool. Let’s move on to another implementation question. So AI can surface insights, but the implementation still requires human decision making right now, right? That’s what I’m hearing in the room. How do you help teams trust AI-driven insights without feeling like they’re blindly following a black box? Like, there’s no way to completely verify what it’s doing, or is there? I’m gonna kick this off to Daniel.

    323
    01:17:22.430 –> 01:17:30.559
    Daniel Baddeley: I would say, you know, thank you very much. Integrating security with customer success, for instance, I think it really comes down to

    324
    01:17:30.760 –> 01:17:38.129
    Daniel Baddeley: being able to see and understand, through XAI, or explainable AI, how

    325
    01:17:38.280 –> 01:18:05.249
    Daniel Baddeley: these LLMs are even making their decisions in the first place. I think it comes down to providing regular training and cross-disciplinary workshops for your employees to understand how these models fundamentally work. I think it comes down to transparent performance metrics: being able to track and share key performance indicators, seeing how the systems are performing, detection times, and then being able to see

    326
    01:18:05.470 –> 01:18:20.980
    Daniel Baddeley: how these models are improving over time. Right? So it goes back to: when it messes up, people remember that. Well, if you’re more transparent about that from the beginning, people can see, yes, it messed up, but it told everybody that it messed up,

    327
    01:18:20.980 –> 01:18:41.109
    Daniel Baddeley: basically. It kind of tells on itself, and you can see how it’s been improving over time as you fine-tune the models for your business. And always having that human-in-the-loop decision making, where people feel more comfortable knowing that there’s still someone with a butt in a seat making sure that these tools are just that: tools to help enable

    328
    01:18:41.472 –> 01:19:06.410
    Daniel Baddeley: and help optimize efficiency in a workplace, versus taking over people’s jobs, making assumptions, and just making decisions that could have mission-critical impacts on an organization. I think that helps go a long way toward people starting to trust AI a bit more, knowing that it’s really just the friend on your shoulder trying to give you some good advice, not telling you what to do or making assumptions for your organization.

    329
    01:19:08.200 –> 01:19:14.620
    Connie Kwan (4Storyteller): What are you using internally to get more visibility on the behind-the-scenes AI thinking process?

    330
    01:19:15.430 –> 01:19:38.800
    Daniel Baddeley: So you can. Well, most models allow you to hook directly into their thinking process, and you can display that for your employees. So instead of it just being some abstract output on a screen, you can show people specifically all of the steps, how it goes through its reasoning to come to a conclusion in the first place, and then that allows you to identify

    331
    01:19:38.990 –> 01:19:46.660
    Daniel Baddeley: certain key points, like, oh, this is where it made a wrong assumption, immediately, and you can highlight all of that. If that helps answer your question.

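    A minimal sketch of the “show the steps, not just the answer” idea described here: capture a model’s intermediate reasoning or tool-call steps and render them so a reviewer can spot where a wrong assumption crept in. The trace format, step kinds, and flag keywords are all hypothetical; a real system would read these steps from whatever reasoning or tool-call log its model or agent framework actually exposes.

    from dataclasses import dataclass

    @dataclass
    class ReasoningStep:
        kind: str   # e.g. "tool_call", "assumption", "conclusion"
        text: str

    def render_trace(steps, flag_keywords=("assume", "guess", "probably")):
        """Print every intermediate step and flag the ones a human reviewer should double-check."""
        for i, step in enumerate(steps, start=1):
            needs_review = any(k in step.text.lower() for k in flag_keywords)
            marker = "REVIEW" if needs_review else "ok"
            print(f"{i:02d} [{step.kind:<10}] {step.text}  <- {marker}")

    # Hypothetical trace captured from one model run.
    trace = [
        ReasoningStep("tool_call", "Queried the last 24h of auth logs for host fleet A."),
        ReasoningStep("assumption", "Assume the spike in failed logins is credential stuffing."),
        ReasoningStep("conclusion", "Recommend rate-limiting and paging the on-call responder."),
    ]
    render_trace(trace)
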
    332
    01:19:47.210 –> 01:19:49.510
    Connie Kwan (4Storyteller): Yeah, sounds like you’re going straight to the raw model.

    333
    01:19:50.390 –> 01:19:51.500
    Daniel Baddeley: So you can have.

    334
    01:19:51.500 –> 01:20:00.220
    Connie Kwan (4Storyteller): A tool that kind of arbitrates it, or collects it and visualizes it... we’re not there yet. We’re going straight to the raw model.

    335
    01:20:00.800 –> 01:20:23.312
    Daniel Baddeley: It kind of also goes back to... so I saw that there was a question here specifically about why AI stayed in the drawer for so long. And, you know, obviously this stuff started in the mid-1900s, but you can think of AI progression as a J-curve, right? It was really flat and slow in the beginning, and then it gets exponential once the technology gets better,

    336
    01:20:23.750 –> 01:20:38.619
    Daniel Baddeley: really. And as we move towards agentic AI, and AI that’s able to learn through induction, just like animals do, and as this thing is able to start reasoning better and better, you’re going to start seeing more and more trust in these things. But really, I think again, it comes down to

    337
    01:20:38.710 –> 01:20:54.190
    Daniel Baddeley: just how long people have been exposed to the technology. Most people are afraid of what they don’t know, and I think it just keeps coming back to that same old thing over and over again: the more transparent you are about what it’s doing and how it’s come to these conclusions, the more people will learn to

    338
    01:20:54.440 –> 01:21:00.629
    Daniel Baddeley: rely on it, and then eventually, maybe, you know, trust it to be able to do more and more important things on its own.

    339
    01:21:01.620 –> 01:21:03.340
    Connie Kwan (4Storyteller): Cool, Betsy! Go ahead.

    340
    01:21:03.340 –> 01:21:19.560
    Betsie Hoyt: Yeah, Daniel, I wanna add on. I wanna underline what you said about using the AI tools and having it quote where the sources of data are. I like to think of AI as being a member of my team,

    341
    01:21:19.560 –> 01:21:49.509
    Betsie Hoyt: and I require everybody, myself included, to say what their data sources are. And really, the argument should be about what’s the better, more pristine data. So I love that concept of saying, okay, AI, how did you get to this conclusion, what are your sources? So again, it’s easier to validate. I think that’s really important. And then I wanted to also comment on AI staying in the drawer. I would be just slightly contrarian to say

    342
    01:21:49.520 –> 01:22:18.730
    Betsie Hoyt: AI hasn’t been in the drawer; it started off with machine learning. We’ve had several iterations of AI. I mean, we have it on our phones. We don’t necessarily consider Siri and all of the other bots as AI, but those are the precursors, and having that natural language bot to say, hey, play my song while I’m stuck in traffic, all of those types of components. It’s just

    343
    01:22:18.740 –> 01:22:31.289
    Betsie Hoyt: having the business applications where we get really geeked out and excited about it. So I would say it’s been here for quite a while. We just maybe gave it a different name and a context, and it’s more valuable to us now.

    344
    01:22:33.000 –> 01:22:40.899
    Maria Thomas: Yeah, I’ll add that as much as I am an AI enthusiast, and use it all the time, and really push my team to use it,

    345
    01:22:41.000 –> 01:22:43.589
    Maria Thomas: There’s still no replacement for critical thinking.

    346
    01:22:43.720 –> 01:23:07.009
    Maria Thomas: right? So we can be looking at our reports, be it P&L or ARR, or all the growth KPIs, and I still think it requires a human brain to challenge assumptions, even the data that we see, right? Because, like, having that moment... I’m sure we’ve all been in growth meetings where, you know, we’re all looking at the same data, and finally someone says,

    347
    01:23:07.010 –> 01:23:18.900
    Maria Thomas: you know, that just doesn’t seem right. And then we identify that there’s actually maybe an underlying data issue, right, that AI wouldn’t have picked up on in that analysis. So

    348
    01:23:20.290 –> 01:23:25.007
    Maria Thomas: just a big, big highlight on critical thinking. I think that’s why

    349
    01:23:25.480 –> 01:23:35.450
    Maria Thomas: we’re going to be around, as experienced, seasoned, or young and bright growth people, you know, always questioning what we see. And

    350
    01:23:35.750 –> 01:23:47.200
    Maria Thomas: the nice thing about AI is that you can always, you know, challenge back and push back, like, are you sure, or, I’m seeing this, tell me more. So I think, yeah, critical thinking for the win.

    351
    01:23:47.620 –> 01:23:49.939
    Daniel Baddeley: Yeah, I think like 100%,

    352
    01:23:50.280 –> 01:24:19.650
    Daniel Baddeley: we normally say, like, you know, AI is amazing, and it really helps with reducing a lot of tedious work, right? Mind-numbing stuff that you really don’t want to do, like just getting definitions for things, or, when it comes to programming, saying, hey, you know, give me some function that does this thing. It’s really great at being able to provide references or resources or assumptions based off of your existing projects or off material that it already has. But it’s not great at, like, novel thinking.

    353
    01:24:19.830 –> 01:24:41.340
    Daniel Baddeley: Right? It’s the unique, hey, this is some brand new inspirational idea that I have about how these things can go together. It’s not really great at that, right? It’s more about, like, how can I reduce the amount of time it takes me to do this thing where I already know what needs to happen, and then just have it help generate some work along the way,

    354
    01:24:41.550 –> 01:24:43.230
    Daniel Baddeley: right more than anything else.

    355
    01:24:44.710 –> 01:24:45.060
    Julia Nimchinski: Different.

    356
    01:24:45.060 –> 01:24:51.260
    Connie Kwan (4Storyteller): It reminds me of my 10-year-old saying something like, it’s all about work-life balance.

    357
    01:24:51.380 –> 01:25:11.179
    Connie Kwan (4Storyteller): And you know, she doesn’t really know what it means; she just sounds like she knows what she’s talking about. AI sometimes comes across like that. Awesome. We’re at the end of our time here. Do we need to close now, or can I ask one more? Just have folks go around the table and name one PLG product they really like, or do we want to just cut it off right here, Julia?

    358
    01:25:11.650 –> 01:25:12.679
    Julia Nimchinski: Yeah, let’s do that.

    359
    01:25:13.220 –> 01:25:23.570
    Connie Kwan (4Storyteller): Okay, let’s do the audience question, which is: which PLG products are really killing it? If you have one in mind, go ahead and name it, and we’ll wrap on that.

    360
    01:25:25.430 –> 01:25:35.960
    Maria Thomas: I think Canva’s doing a tremendous job, and so is Notion, you know, with how they really pay attention to the nuance of growth in their growth experiences specifically.

    361
    01:25:39.470 –> 01:25:42.150
    Connie Kwan (4Storyteller): Awesome. I think Miro’s killing it.

    362
    01:25:42.770 –> 01:25:52.990
    Connie Kwan (4Storyteller): It’s definitely a try-before-you-buy type of product. I think anything that’s very, very visual and try-before-you-buy is great for that, like, you can get to value really quickly.

    363
    01:25:52.990 –> 01:26:00.970
    Betsie Hoyt: I raised my hand for Miro as well, Connie. I’m particularly enjoying the workflow diagramming, and just...

    364
    01:26:01.290 –> 01:26:05.359
    Betsie Hoyt: I love the visual aspects of Miro. Huge, huge fangirl.

    365
    01:26:07.680 –> 01:26:22.829
    Kristen Berman: I’m more in the Figma camp, but yeah, FigJam and stuff. But same thing, I think the visual aspects. You know, for these companies, the visual ability to share and wow people makes it pretty easy to run an easy sales campaign.

    366
    01:26:24.490 –> 01:26:28.659
    Connie Kwan (4Storyteller): Cybersecurity might be a little trickier, totally less visual there.

    367
    01:26:29.110 –> 01:26:56.930
    Daniel Baddeley: That’s where I’d say go check it out: fully AI, action-driven insights. You know, you could have half a million devices in multiple clouds all over the world and on-prem locations. It’s going to be able to show you what the bad actors are doing in real time on your networks and help enable your first-response teams. And that’s kind of the whole point, right, and why we’re here talking about AI and enablement. When it comes to incident response times for your business, you really only have

    368
    01:26:57.140 –> 01:27:04.809
    Daniel Baddeley: 5 to 10 minutes, tops, to actually do something about someone that breached your network before they take all your information.

    369
    01:27:05.040 –> 01:27:05.570
    Daniel Baddeley: So.

    370
    01:27:05.570 –> 01:27:22.979
    Connie Kwan (4Storyteller): Okay, I stand corrected. Maybe there’s some visual, sexy dashboard that really hooks you when you PLG that product, too. Alright, well, that wraps it for today. Really loved the conversation. Thank you, Maria, Kristen, Daniel, and Betsy, and Julia for instigating this conversation.

    371
    01:27:23.580 –> 01:27:50.690
    Julia Nimchinski: Thank you so much, Connie. Before we wrap up, Maria, Kristen, Connie, and everyone on the show, let’s just address this question, because normally, when we talk about product-led growth, we talk about a sort of reflection of the average: average data, insights, etc. Kristen writes a lot about it. You know, we can just slap AI onto the website: AI-driven, AI summit, hey, here we go. So I’m just curious

    372
    01:27:50.690 –> 01:27:56.240
    Julia Nimchinski: about, probably, like, unusual use cases that you’re seeing,

    373
    01:27:56.580 –> 01:28:05.160
    Julia Nimchinski: unusual features, unusual insights, the inverse of the average that people are using for growth. Kristen, let’s start with you.

    374
    01:28:06.200 –> 01:28:24.109
    Kristen Berman: Yeah. So just as a bit of background, we did a recent study with close to 800 people and showed them different versions of landing pages. We replicated Intuit and Zendesk and Superhuman and Microsoft. And basically, in one condition, we had AI very prominent. You can imagine you launch a feature like,

    375
    01:28:24.110 –> 01:28:43.910
    Kristen Berman: this is an AI feature, we plastered it across the landing page, and in the other condition we just did the value prop, right, you know, it helps you be more productive. And what we found is that there was no difference in willingness to pay, trust, performance expectations, and a variety of other variables when comparing those pages.

    376
    01:28:44.060 –> 01:29:02.389
    Kristen Berman: Now, this is not bad. There wasn’t a downside. It’s not like people said, I reject this product because it has AI. But it goes against this idea that we’re building products and AI is an inherent increase in willingness to pay for the products. And it basically pushes us back to basics. It basically says, look,

    377
    01:29:02.390 –> 01:29:25.519
    Kristen Berman: you need to offer something more tangible to people than a technology. And I think this is kind of, you know, the contrarian non-contrarian view. I think a lot of people know this, they agree with it, and yet companies are investing in AI features and want to talk about it. And so my kind of general hypothesis is, you know, just because you’ve done the technology, you still have to go and figure out

    378
    01:29:25.560 –> 01:29:47.729
    Kristen Berman: how do you get people to change their status quo? Like, with a new product, new features, it’s incredibly difficult. You have to insert yourself into their daily life, and then get them to sign up, like it, and pay you. Oh my God! And so just the idea of AI is not going to do that, and that shouldn’t be surprising. And yet, if you look at all these landing pages, it tends to be forgotten at that key moment of positioning.

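    A minimal sketch of the kind of between-subjects test described here: one landing-page condition leads with "AI", the other leads with the value prop, and you check whether a metric like willingness to pay actually differs. The numbers are made up and this is not the study’s actual analysis; it just shows the shape of the comparison.

    import random

    def perm_test_mean_diff(a, b, n_iter=10_000, seed=0):
        """Two-sided permutation test for a difference in means between two groups."""
        rng = random.Random(seed)
        observed = abs(sum(a) / len(a) - sum(b) / len(b))
        pooled = list(a) + list(b)
        hits = 0
        for _ in range(n_iter):
            rng.shuffle(pooled)
            pa, pb = pooled[:len(a)], pooled[len(a):]
            if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
                hits += 1
        return observed, hits / n_iter

    # Hypothetical willingness-to-pay responses ($/month) for each landing-page condition.
    ai_prominent = [12, 15, 10, 14, 11, 13, 12, 16, 10, 12]
    value_prop_only = [13, 14, 11, 15, 12, 12, 13, 15, 11, 13]

    diff, p_value = perm_test_mean_diff(ai_prominent, value_prop_only)
    print(f"Mean difference: ${diff:.2f}/month, permutation p-value: {p_value:.3f}")
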
    379
    01:29:49.570 –> 01:29:50.940
    Julia Nimchinski: Maria, what do you think.

    380
    01:29:52.125 –> 01:29:59.510
    Maria Thomas: Sorry, so the question is, the AI in terms of... sorry, Julia, can you restate the question?

    381
    01:29:59.510 –> 01:30:13.209
    Julia Nimchinski: Yeah, unusual usage of data to feed AI, or unusual use cases for growth that are basically the inverse of the average, as opposed to, you know, just a reflection of it.

    382
    01:30:14.764 –> 01:30:17.119
    Maria Thomas: Yeah. Hmm!

    383
    01:30:17.580 –> 01:30:18.950
    Maria Thomas: Unusual.

    384
    01:30:19.290 –> 01:30:29.559
    Maria Thomas: I don’t know. I think, just again, more like practical advice, you know, for people in PLG, is just, you know, really not hesitating to

    385
    01:30:29.810 –> 01:30:31.400
    Maria Thomas: embrace the

    386
    01:30:31.520 –> 01:30:39.219
    Maria Thomas: not just the data, like the data itself, right, at its kind of core essence, but also all the visuals, you know, and an understanding of

    387
    01:30:39.370 –> 01:30:53.060
    Maria Thomas: market direction and brand and kind of the psychology of your users. So when feeding your, you know, whatever you use for insights, you know, don’t neglect,

    388
    01:30:53.470 –> 01:30:55.163
    Maria Thomas: you know, also the

    389
    01:30:56.230 –> 01:31:09.010
    Maria Thomas: like, everything that surrounds, I guess, the piece of study that you’re doing. So it can be, you know, visual pictures, colors that are associated with it, because at the end of the day,

    390
    01:31:09.010 –> 01:31:28.060
    Maria Thomas: you know, the insights that you’re going to get are going to be of higher quality, and I just feel that that gets neglected a lot. And you know, AI will eventually revolutionize in the same way: we used to have Twitter, and X was just text, and now we have YouTube Shorts and videos. So I think

    391
    01:31:28.060 –> 01:31:40.799
    Maria Thomas: life insights and AI outputs will be smarter, more dynamic, and more human in the experience. And I’m also a big fan of waiting for robotics to really kick in, inspired by AI.

    392
    01:31:40.970 –> 01:31:54.899
    Daniel Baddeley: And don’t forget to give some TLC to that technical debt backlog, too. I know all the shiny bubbles coming out the door, you know, to be able to drive additional revenue are fantastic. But don’t forget about the infrastructure and the foundation that’s holding up the business.

    393
    01:31:55.270 –> 01:31:56.820
    Betsie Hoyt: Woot, Woot, Daniel.

    394
    01:31:57.840 –> 01:31:58.350
    Maria Thomas: Sure.

    395
    01:31:58.930 –> 01:31:59.780
    Maria Thomas: Yeah.

    396
    01:32:00.120 –> 01:32:06.339
    Julia Nimchinski: We’re still waiting for Scott Brinker, so we can continue. Connie, Daniel, Betsy,

    397
    01:32:06.810 –> 01:32:09.810
    Julia Nimchinski: any unusual use cases for PLG?

    398
    01:32:10.700 –> 01:32:11.389
    Connie Kwan (4Storyteller): Yeah,

    399
    01:32:12.200 –> 01:32:19.139
    Connie Kwan (4Storyteller): I would say, mostly when people think about AI, they think about efficiency. They think about automation. They’re thinking about saving time.

    400
    01:32:19.290 –> 01:32:31.029
    Connie Kwan (4Storyteller): And that was my thought, too. But as I was working with the users of the thought leadership AI that I’m building, I’m realizing that they’re actually valuing the slower side of AI.

    401
    01:32:31.150 –> 01:32:59.100
    Connie Kwan (4Storyteller): They’re valuing not that we can write their LinkedIn post for them really quickly, but that they get time to reflect on what they want to write about. So it becomes an interview where it talks to them. It’s this mindful, intentional, deliberate thinking about what their thought leadership means, and how they can bring their special ikigai into the world. That reflection time is actually what they valued highly, and that came as a complete surprise.

    402
    01:32:59.534 –> 01:33:13.749
    Connie Kwan (4Storyteller): I personally use a therapist AI that I talk to, and that has done amazing things for me. So I think, you know, when we think about AI, there’s this fast side, and there’s the slow side, too.

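    A minimal sketch of the “slow AI” interview-first flow described here: instead of drafting a LinkedIn post immediately, the assistant asks a few reflection questions and only then drafts from the answers. The ask_model() function and the question list are placeholders for illustration, not the actual product’s implementation.

    REFLECTION_QUESTIONS = [
        "What belief do you hold that most people in your field would push back on?",
        "What did you learn this week that changed how you work?",
        "Which single reader do you most want this post to reach, and why?",
    ]

    def ask_model(prompt: str) -> str:
        # Placeholder: swap in a real LLM call here.
        return f"[draft based on prompt starting: {prompt[:60]!r}...]"

    def interview_then_draft(get_answer=input):
        """The 'slow' part: collect reflections first, then hand them to the model to draft."""
        answers = [(q, get_answer(f"{q}\n> ")) for q in REFLECTION_QUESTIONS]
        notes = "\n".join(f"Q: {q}\nA: {a}" for q, a in answers)
        return ask_model("Draft a LinkedIn post grounded in these reflections:\n" + notes)

    if __name__ == "__main__":
        print(interview_then_draft())
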
    403
    01:33:14.580 –> 01:33:15.200
    Maria Thomas: Hmm.

    404
    01:33:15.320 –> 01:33:23.040
    Daniel Baddeley: And I also think, too, there’s even stuff out there for, like, analyzing human emotion, right? So being able to provide better customer

    405
    01:33:23.360 –> 01:33:35.820
    Daniel Baddeley: support, customer success. And just the fact that understanding the emotional state of your end customers is also very valuable, to be able to understand how to better engage them and maybe potentially de-escalate a situation if,

    406
    01:33:35.990 –> 01:33:53.800
    Daniel Baddeley: you know, they’re feeling some kind of way, right? You don’t want to have a disgruntled customer, you know, which in turn can have a negative feedback loop on the business, right? Maybe they start posting on Google about all of the bad system features or support, when maybe it was just a simple misunderstanding, right? So.

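    A purely illustrative sketch of the de-escalation routing mentioned here: score incoming support messages for frustration and send the hottest ones to a human first. The keyword lexicon and threshold are hypothetical; a real system would use a proper sentiment or emotion model rather than keyword matching.

    # Hypothetical frustration lexicon with crude weights.
    FRUSTRATION_TERMS = {"unacceptable": 3, "furious": 3, "cancel": 2, "broken": 2, "waiting": 1, "again": 1}

    def frustration_score(message: str) -> int:
        """Sum the weights of any frustration terms found in the message."""
        return sum(FRUSTRATION_TERMS.get(w.strip(".,!?"), 0) for w in message.lower().split())

    def route(message: str, threshold: int = 3) -> str:
        """Send clearly frustrated customers to a human before any AI instant reply."""
        return "human_agent_first" if frustration_score(message) >= threshold else "ai_instant_reply"

    for msg in [
        "Still waiting, this is unacceptable. I want to cancel!",
        "Quick question about exporting my report.",
    ]:
        print(route(msg), "->", msg)
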
    407
    01:33:58.300 –> 01:34:00.150
    Julia Nimchinski: Betsy, anything to add? Yep.

    408
    01:34:00.150 –> 01:34:07.314
    Betsie Hoyt: Yeah, I was just reflecting on what Daniel just said. I had a little bit of PTSD of, like,

    409
    01:34:08.340 –> 01:34:24.950
    Betsie Hoyt: it is understanding and taking that pause. So when a client is irate, sometimes it isn’t about how quickly; it’s about the right tone and understanding the nature of it. And so I also love what Connie was saying,

    410
    01:34:24.950 –> 01:34:41.899
    Betsie Hoyt: of taking a moment to realize, and giving you a little bit of space to go, what do I know before I react to it? Both of those points really resonated with me, to think about those really carefully.

    411
    01:34:42.635 –> 01:34:51.410
    Betsie Hoyt: Your inverse comment, Julia: I’d slightly take this to say,

    412
    01:34:51.570 –> 01:35:16.290
    Betsie Hoyt: as a slight pivot, that I think one of the biggest mistakes companies are making in PLG is not experimenting and trying AI. What kind of PM would I be if I didn’t say you’ve got to experiment? Do some A/B tests, maybe do a focus group, but start. So if you’re hesitating, don’t, because the data is telling us that if you’re not using AI, you’re going to be left behind. So

    413
    01:35:16.490 –> 01:35:19.749
    Betsie Hoyt: get your people together and start.

    414
    01:35:20.820 –> 01:35:26.150
    Julia Nimchinski: Thank you so much. Phenomenal panel. Connie, Maria, Betsy, Daniel!

    415
    01:35:27.520 –> 01:35:28.390
    Julia Nimchinski: Kristen.
