Text transcript

How to Build Winning GTM Agents That Learn and Scale

AI Summit held on May 6–8
Disclaimer: This transcript was created using AI
  • 03:27:49.310 –> 03:27:55.759
    Julia Nimchinski: True builders welcome to the show welcome when park

    1195
    03:27:56.330 –> 03:28:00.659
    Julia Nimchinski: CEO and principal analyst at amalgam insights.

    1196
    03:28:01.080 –> 03:28:02.330
    Julia Nimchinski: How are you doing.

    1197
    03:28:02.960 –> 03:28:18.189
    Hyoun Park: Oh, doing. Great that was I was just catching the last few minutes of the last panel, and I think it was a perfect setup for this panel that we’ve got right here and showing how to actually build agents. I don’t want to get ahead of myself, but I’m pretty excited about this.

    1198
    03:28:19.220 –> 03:28:23.820
    Julia Nimchinski: Amazing, super excited. When the stage is yours, take it away.

    1199
    03:28:24.850 –> 03:28:51.210
    Hyoun Park: Terrific. So welcome everybody to this panel on how to build winning, go to market agents, and how to let that learn and scale. I’m pretty excited about this panel, as I think one of the big concerns we sometimes have is being the smartest person in the room. I am not concerned about that right now, because I know I’m not the smartest person in the room. We’re going to have a lot of smart people here in the panel talking about

    1200
    03:28:51.210 –> 03:29:03.549
    Hyoun Park: how to actually create these agents. And I think we have a great distribution of people in this panel who are going to be showing up as we are going to have representation across

    1201
    03:29:03.550 –> 03:29:27.100
    Hyoun Park: developers, marketers, salespeople people who are building startups, people who are at the Enterprise level, making the case to fortune, 500 C-level executives, and we have a lot of great experience here, as the people who are working here have all have graduate degrees, they

    1202
    03:29:27.100 –> 03:29:46.250
    Hyoun Park: all have worked in enterprise and large organizational settings. Everybody here. When you look at these resumes, I would just take a moment if you are in this watching this panel right now, literally write down these names and follow these people. Follow up with these people afterwards because they’ve got resumes that go across

    1203
    03:29:46.330 –> 03:30:10.669
    Hyoun Park: adobe. Ibm just like all the big companies that you would want to work with, or they have dealt with this at a nonprofit level or at an organizational level governmental level. We’ve got some of the biggest schools and greatest schools represented here as well. People have been to Mit. People have done the work and dug in deeply. So with that just a quick

    1204
    03:30:10.700 –> 03:30:23.990
    Hyoun Park: introduction to our panel who is now shown up here. Darren Patterson, Dr. Tuba, Duraze, Sean Harris, Ed Corno, Omar, haar! Welcome to the panel.

    1205
    03:30:23.990 –> 03:30:24.950
    Shawn Harris: Hello! Hello!

    1206
    03:30:25.680 –> 03:30:26.150
    Darin Patterson: Thank you.

    1207
    03:30:26.150 –> 03:30:27.040
    Hyoun Park: Yeah. Yeah.

    1208
    03:30:27.510 –> 03:30:36.682
    Hyoun Park: So before we dig in, let’s do, can we do like a quick call it 20 inch. Second introduction of each person? I’ll just start with Darren.

    1209
    03:30:37.360 –> 03:30:38.453
    Hyoun Park: Introduce yourself.

    1210
    03:30:39.000 –> 03:31:03.740
    Darin Patterson: Yeah, fantastic. It’s great to be with you all today I get the pleasure of leading our market strategy at a company called Make, which is all about automation and integration across your organization. And of course, that means for the last several years that means incorporating AI into core business processes. I’ve been lucky enough to work with amazing go to market leaders across small organizations as well as

    1211
    03:31:03.740 –> 03:31:10.499
    Darin Patterson: large enterprise leaders like adobe and excited to talk about the future of work and what it looks like in an agentic world.

    1212
    03:31:11.860 –> 03:31:13.619
    Hyoun Park: Great Dr. Duraze.

    1213
    03:31:14.070 –> 03:31:39.529
    Tooba Durraze: Thanks so much. It’s so great to be here. So I’m the CEO and founder of Amoeba AI, where newest symbolic AI company, looking at business intelligence use cases. Prior to that I was qualified working as a Vp of product. Prior to that, the World Economic Forum. So all that is to say, I’ve had exposure to the hype and then now where we are in terms of AI, and excited to dig into it with everyone.

    1214
    03:31:40.330 –> 03:31:42.429
    Hyoun Park: All right, Sean Harris.

    1215
    03:31:42.430 –> 03:32:09.710
    Shawn Harris: Hi. My name is Sean Harris. I’m the CEO and co-founder of company called Coworked, where we have created a agentic project manager that we’ve named Harmony, and so I’m excited to be here. I am as being in a startup. I’m responsible for a lot with respects to really the go to market strategy and how we engage with our customers, and I’ll share more here. Briefly.

    1216
    03:32:10.450 –> 03:32:12.189
    Hyoun Park: Terrific. Ed.

    1217
    03:32:12.810 –> 03:32:19.749
    Shawn Harris: Yeah. Hi, everybody. It’s great to be here, you know. It’s kind of like, where do I start? I’ve been doing this for roughly 30 years.

    1218
    03:32:19.930 –> 03:32:23.049
    edward Corno: Started out really early on in the Internet

    1219
    03:32:23.110 –> 03:32:28.089
    edward Corno: and work with some large tech companies from HP. To Ibm.

    1220
    03:32:28.150 –> 03:32:55.899
    edward Corno: Ernst and Young, the consulting side. And it’s been a great experience. I’ve seen these different technologies kind of like go to wave pattern. And what’s interesting, and I can’t wait to talk about it all. And I’ve been involved with not only just like the Internet, but that blockchain when it was really in its heyday. And now with AI, which I think I kind of label it as AI 3.0. And now we’re at 3.1 with the genic AI. I look forward to talk to everybody about it.

    1221
    03:32:57.520 –> 03:32:59.009
    Hyoun Park: Thanks, and Omar.

    1222
    03:32:59.420 –> 03:33:26.739
    Omer Har: Sure. Hi, everyone. I’m Omar. I’m the CEO and co-founder of explorium explorium is a data aggregation specifically around AI. We aggregate data for multiple providers and provide that to agentic AI specifically around, go to market use cases. Before that, I worked in iron source. And before that in Microsoft research for about 5 or 6 years, doing research specifically for machine learning and intersectional machine learning and economics is a fascinating field.

    1223
    03:33:28.330 –> 03:33:31.540
    Omer Har: I’m very excited to be here, you know it’s a pleasure.

    1224
    03:33:32.950 –> 03:33:54.349
    Hyoun Park: Thanks. And with that, let’s just dig in to the 1st question. So, as you can tell already, the resumes here are amazing when it comes to agentic development and deployment. So we’ve all been thinking about. Go to market agents. Everybody’s played with agents. Everybody’s played with generative AI, and we’ve all heard the hype

    1225
    03:33:54.350 –> 03:34:20.620
    Hyoun Park: around AI, and how this is going to change the way we do business. But as all of you have practical experience dealing with agents. What do you feel like is a fundamental shift in the technology of agents that most organizations are not yet fully preparing for. And I’m going to just go in reverse order of where we started. So let me start with you, Omar.

    1226
    03:34:21.200 –> 03:34:45.000
    Omer Har: Sure that’s a great question. Actually, one of the things that we’re kind of looking at and kind of thinking about here in explore. So if you think about how most go to market processes are going in today, you’ll see them kind of stopping in 3 stops. So you start with a data provider. You know. Think about the zoom info, the simplest, whatever prospecting data you use

    1227
    03:34:45.000 –> 03:34:53.989
    Omer Har: it then pushed into the Crm and then within the Crm. It will be then pushed into a sequencer that most of kind of the workflow that you run

    1228
    03:34:53.990 –> 03:35:16.479
    Omer Har: with go to market will go to those 3 stops, and I think that agentic overall AI and agentic workflow will change that dramatically in a sense that not necessarily. You now need all of the data to sit in every one of those in every one of those stops. You don’t need all of the data to sit in the Crm. Maybe you need a list of your Icp those customers that you

    1229
    03:35:16.480 –> 03:35:40.580
    Omer Har: you’re a focus on. But definitely, you don’t need news about those companies. You know, a refresh on a daily basis in the Crm. Instead, you’ll have agents that will be right in the point of activation where you want to send the email where you want to engage where you have an Ae, having, you know, research before he meeting with a specific customer where the agent will pull all of the data

    1230
    03:35:40.580 –> 03:36:04.399
    Omer Har: from 3rd party for second party, from 1st party. We’ll harmonize everything into a conclusion, giving your, you know, prompts and questions, and then we’ll be able to activate that on the fly, either by writing an email, by maybe creating another outreach in Linkedin, or by creating battle cards within your zoom call. When you meet those customers. And I think that

    1231
    03:36:04.690 –> 03:36:14.950
    Omer Har: this is a fundamental shift in the way that data flows within the organization, especially go to market data. And I think that most companies doesn’t think about that already.

    1232
    03:36:15.360 –> 03:36:17.129
    Omer Har: Definitely not prepared to. It.

    1233
    03:36:18.620 –> 03:36:28.759
    Hyoun Park: Interesting. Ed, I I know you have a lot of experience bringing innovative new technologies and presenting to the sea level. I’m curious what you thought of that, and what challenges you see as well.

    1234
    03:36:28.760 –> 03:36:44.280
    edward Corno: I mean, right now, we’re just kind of getting our big clients, which we are mostly big clients, because it’s Ibm right, really kind of interested in what we’re trying to achieve and do in the past. Large language models seem pretty constrained.

    1235
    03:36:44.410 –> 03:36:49.099
    edward Corno: And now, you know, we’re making the the small language models work with agents

    1236
    03:36:49.230 –> 03:36:54.520
    edward Corno: which makes it even more efficient and more practical for an organization.

    1237
    03:36:54.610 –> 03:37:13.760
    edward Corno: Because if you look at, say, machine learning and Rpa robotic process automation. It’s like the next super layer up from that to then create these genetic ais. And of course, it’s all using the same type of tools that build them out, you know, python, tensorflow, etc. pytorch, to make it happen.

    1238
    03:37:13.810 –> 03:37:31.860
    edward Corno: which is what you really want, right? And so it’s still AI, in a sense. And a lot of people think, what is it really going to do? Well, it kind of allows it to be autonomous. You then can actually then let it go. Do what it wants to do. You can train that model, and then it keeps learning and learning, learning and getting better and better all the time.

    1239
    03:37:31.860 –> 03:37:44.749
    edward Corno: so by making it an agent, it makes it more relevant, makes it smaller, makes it easier for companies across the board to to adopt. I mean, that’s what we run into all the time. It’s like, Ibm.

    1240
    03:37:44.860 –> 03:38:02.600
    edward Corno: you guys are revolutionary. You got great research. You’ve done fantastic for us as thought leaders. But then, how do we take what you’re building and what’s new, and really apply it to what they do every day? So I mean, that’s what I run into is seeing how it actually applies to their business, and can make a difference.

    1241
    03:38:03.740 –> 03:38:11.653
    Hyoun Park: Great thanks. And, Shawn, I’m curious, though. What what challenges do you see? When you’re getting customers started? What are what are they ignoring.

    1242
    03:38:11.970 –> 03:38:20.529
    Shawn Harris: Yeah, no, I think that we shouldn’t lose sight. We’re moving into a world where we’re having software as a user.

    1243
    03:38:20.530 –> 03:38:49.320
    Shawn Harris: right? And so what I’m running into is, I’ll give you an example. Like as we were setting up our platform in a company, the original setup of the company ended up landing Harmony into the Company’s Hr. System, where Harmony was going to be collecting benefits, accruing Pto. And being put on a performance review because of the nature of the access that it needs.

    1244
    03:38:49.320 –> 03:39:14.329
    Shawn Harris: They went down this path of getting constructed as a contract, as a consultant within the environment which opened up a whole conversation amongst the business users who wanted to leverage the technology and it around. What are we doing here? What is going on? What is this going to do? And so I think

    1245
    03:39:14.330 –> 03:39:21.759
    Shawn Harris: thinking about software now, being a user versus just being like something you install and other people use

    1246
    03:39:21.880 –> 03:39:30.689
    Shawn Harris: is is a paradigm shift. And then the application of security against that user, I think, is going to be something that folks aren’t quite

    1247
    03:39:30.840 –> 03:39:32.329
    Shawn Harris: prepared for.

    1248
    03:39:32.450 –> 03:39:36.430
    Shawn Harris: That’d be one thing that I know I’m running into in the real world.

    1249
    03:39:37.060 –> 03:39:52.490
    Hyoun Park: Oh, really interesting. I’m gonna come back to you on that one, because I think that’s a question that resonates across the board. Tuba. I know I’ve been calling you Drazi, because I know if I wrote a Phd. Thesis I would be telling everybody to call me doctor forever.

    1250
    03:39:52.490 –> 03:40:00.519
    Tooba Durraze: There’s some joke in there about being called Dr. T. And then me moving away from it because too close to the city. So

    1251
    03:40:00.750 –> 03:40:09.400
    Tooba Durraze: I I think my my opinion on this might be a little bit spicier and a little bit more future centered, but I think we’re going to see

    1252
    03:40:09.520 –> 03:40:19.139
    Tooba Durraze: folks moving towards a more neurosymbolic architecture from transformer architecture. I think some of the things that some of the problems that we’re facing now

    1253
    03:40:19.490 –> 03:40:41.220
    Tooba Durraze: fall flat when it comes to neurosymbolic architecture, right with transformers like you’re thinking about prompts. You’re thinking about everything deterministic, every kind of business nuance. All of that kind of goes away on neurosymbolic. So I think that’s where I see folks moving. And then on the data side, there’s a huge trend around generating schemas on the fly. So you’re sitting on top of this giant data layer.

    1254
    03:40:41.220 –> 03:40:55.330
    Tooba Durraze: So generating those schemas on the fly, and then organizations essentially managing that semantic layer as like their second brain. That’s where a lot of your efforts are going to go versus right now. A lot of efforts are going into.

    1255
    03:40:55.440 –> 03:41:14.390
    Tooba Durraze: Can we get, you know all the data together, amalgamation of data. So I think you’re going to see trends in like folks moving up in that direction which ideally means that there’s a whole new era of like data jobs opening within companies as well. Folks who maintain this semantically and manage it for any kind of agentic flows.

    1256
    03:41:14.950 –> 03:41:22.030
    Hyoun Park: Sure, Tuba, just a quick follow up there. Could you describe what neurosymbolic means as it might be new to some of our audience?

    1257
    03:41:22.350 –> 03:41:32.449
    Tooba Durraze: Yeah. So neurosymbolic is the easiest way to describe it. It’s basically it’s symbolic AI, which is mostly like deterministic AI without you having to kind of put rules there

    1258
    03:41:32.510 –> 03:41:55.609
    Tooba Durraze: and then a neural net. So in the case of amoeba, as an example, we use a liquid neural net, and the reason that is really interesting is, neural. Nets are generally adaptive in nature. So anytime a new data point gets introduced. You don’t have to have the lag of like going back and kind of retraining the entire system, it’s able to self evaluate. So now think about that as like a living organism which is

    1259
    03:41:55.610 –> 03:42:24.060
    Tooba Durraze: kind of the structure that your agents will sit on top of, and then your agents themselves will also probably be on neurosymbolic models. So again, like, I think we’re making a lot of headway in terms of what you can get out of transformer models right now with prompts and with a lot of like rules that we set up, etc. But I think the future is like what happens when we kind of get out of the way and let let it do what it’s supposed to let the math math, as they say right.

    1260
    03:42:24.690 –> 03:42:26.790
    Shawn Harris: You’re using. You’re using liquid. AI.

    1261
    03:42:27.360 –> 03:42:28.689
    Tooba Durraze: I’m using liquid neural nets. Yeah.

    1262
    03:42:28.690 –> 03:42:30.750
    Shawn Harris: Yeah, yeah.

    1263
    03:42:31.450 –> 03:42:42.969
    Hyoun Park: Great. And then, Darren, what? What do you? So you’re dealing with the strategy of building an agentic product. And what what is going into that strategy that you’re trying to build. That’s new and different.

    1264
    03:42:43.240 –> 03:43:03.189
    Darin Patterson: Yeah, absolutely. It’s gonna be hard to follow up on those very deep thoughts about what the future looks like. But one of the things I I think companies are struggling with, and when they think about how to apply AI and AI agents into their organization is clear approaches to testing and performing and performance monitoring.

    1265
    03:43:03.190 –> 03:43:18.780
    Darin Patterson: If we take the word, let’s back up to the fundamentals agent. It’s a Latin based word. It means to to take action. And, of course, for the last 100 years we’ve done a good job of understanding. How do we evaluate the performance of humans in an organization when they take action.

    1266
    03:43:18.780 –> 03:43:38.609
    Darin Patterson: But it’s just as important, if not way more critical for us to have infrastructure and systematic approaches to understanding and monitoring AI agents, and that is a whole structure around evaluations and being able to detect drift in a particular business process that you’ve got set up.

    1267
    03:43:38.610 –> 03:43:53.059
    Darin Patterson: And so it’s going to be very important for organizations to think through specific strategies, to identify any particular AI agent applied in my organization. Even if it’s working right now, how do I ensure it continues to produce the same exact results.

    1268
    03:43:53.498 –> 03:44:17.999
    Darin Patterson: And that includes all sorts of infrastructure that’s super critical things like non production, replication of systems that these agents might interact with. How do I know that it’s going to consistently produce the same results? And so that’s 1 of the most important things, I think organizations need to be thinking through is, how do I ensure consistency over the long time? Especially in a very fast changing environment, where these foundational models are changing all the time.

    1269
    03:44:19.420 –> 03:44:20.230
    Hyoun Park: Great.

    1270
    03:44:20.902 –> 03:44:34.110
    Hyoun Park: Next, I’m actually gonna go back to a kind of sean’s mention about the idea of software as a user as you’re thinking about how your agent starts being used

    1271
    03:44:34.110 –> 03:44:51.879
    Hyoun Park: in a working environment and how how that software has to be a user. How is that different from how software has traditionally been set up. I’m going to toss it back to you to start with Sean, but feel feel free to chime in everybody else.

    1272
    03:44:52.310 –> 03:45:11.829
    Shawn Harris: Yeah, like, so classically, you know, software has been installed and or, you know, accessed. And a individual clicks around and does their thing right. I think that there are a lot of now, if you will agentic solutions that are

    1273
    03:45:11.900 –> 03:45:30.719
    Shawn Harris: still providing some ui where you know, maybe instead of 2 clicks. It’s 1 click, because the agent did something or one click, and the agent will run away and do something and come back. That that is one approach, or you know, for what it’s worth. There’s an approach where the software exists in the environment like

    1274
    03:45:30.840 –> 03:45:31.950
    Shawn Harris: a coworker

    1275
    03:45:32.741 –> 03:45:41.740
    Shawn Harris: and in existing in the environment like a coworker, you know. I don’t click on my coworker like my coworker engages with me.

    1276
    03:45:42.290 –> 03:46:05.009
    Shawn Harris: Yeah, exactly. They don’t engage with me. They engage with me in the places and spaces where we work right? And so you know, we have a thesis that that is how you know true agentic solutions should be delivered, and that’s through a mechanism that feels for what it’s worth. And the world we live in, that the agent is a remote colleague.

    1277
    03:46:05.010 –> 03:46:31.300
    Shawn Harris: who I always like to joke and say they’re somewhere in Boulder. But they’re sitting on silicon right? And so it’s just a different mindset about user experience. The ux of it is not centered around the path you take through clicking, but it’s the tone of the response. It’s how quickly or how slowly it gets back to you in certain situations. It’s maybe the channel it chooses to communicate to you on.

    1278
    03:46:31.300 –> 03:46:35.518
    Shawn Harris: It’s maybe a decision not to respond like there’s just

    1279
    03:46:36.360 –> 03:46:47.150
    Shawn Harris: a different way to think about it. If you’re truly trying to achieve something that is going to sit aside people in an environment or a situation, I,

    1280
    03:46:47.300 –> 03:46:52.080
    Shawn Harris: where we’re building such that it feels like another colleague, another user. Truly.

    1281
    03:46:52.570 –> 03:46:53.830
    Omer Har: It’s a good point.

    1282
    03:46:54.390 –> 03:46:55.150
    edward Corno: I like to make it.

    1283
    03:46:55.150 –> 03:46:55.939
    edward Corno: I think you brought that up.

    1284
    03:46:55.940 –> 03:47:02.159
    edward Corno: It’s 1 of those things that sorry, Dr. T. But it’s 1 of those things where

    1285
    03:47:02.350 –> 03:47:08.240
    edward Corno: you can position and do the analogy. That’s like, what are your workers or coworkers? I mean, ultimately.

    1286
    03:47:08.500 –> 03:47:32.350
    edward Corno: and in a way, it can behave that way, because basically, it’ll go off and do whatever it wants. So it’s like, how do you control that agent? A genic agent where it’s going, what it’s doing, because a lot of it is like goal oriented and wants to do what it wants after it learns. So I mean, that’s the challenge that we’re facing right now is, how do you get these self directed type of decision making agents to behave the way they should?

    1287
    03:47:32.460 –> 03:47:39.760
    edward Corno: And so that’s, I think, a challenge that, like the second phase is, we roll out hygienic agents across the board in a business.

    1288
    03:47:40.430 –> 03:47:47.680
    Tooba Durraze: Yeah, I was going to say my comment was going to be on just the observability layer becoming more and more important. I think, Darren, you mentioned that.

    1289
    03:47:47.880 –> 03:47:48.250
    edward Corno: And.

    1290
    03:47:48.640 –> 03:48:04.770
    Tooba Durraze: Sort of system is coming up. There was a panel earlier with with Brett, where we talked about agent to agent selling in B, 2 B, like, how that changes as well. And yeah. So I think it’s it’s an interesting move. I I wonder if they’re

    1291
    03:48:05.110 –> 03:48:20.319
    Tooba Durraze: an individual like employee is sort of a cluster of agents, and not necessarily even a singular agent. And then the person who manages them, how matrix does that become? Where does that sit, I think, would be really interesting to watch.

    1292
    03:48:20.320 –> 03:48:36.798
    Shawn Harris: Yeah, the frameworks framework matters. You know, when you’re doing this, like the the framework that is the underlying mechanism for helping with keeping appropriate guardrails that has to deal with the orchestration. Because you’re right. Tuba that

    1293
    03:48:37.310 –> 03:49:02.669
    Shawn Harris: when it’s all said and done. Some of these agents I know, especially mine shouldn’t be called an agent in a singular way, like it is an agency of right. Now. We are in probably close to 500 agents, if you will, that are running within this platform. So the the framework allows for orchestration in a in a broad way, and the the framework helps to control for the planning and the execution at.

    1294
    03:49:04.270 –> 03:49:29.789
    edward Corno: Yeah. And to your point, John, I I think it’s a situation where you’re gonna have agents that control other agents right? Put them together, and I think that’s gonna be the best route to go forward with versus saying an end user like us monitoring it. It’s hard to keep track of all the different agents out there ourselves. Right? So it’s gonna have to be automated agents that have that guy rail type role.

    1295
    03:49:30.160 –> 03:49:39.689
    Omer Har: Right. But I think that the question is, I think that the question is more about, think about the tooling and overall, how Sas and software were optimized in the last.

    1296
    03:49:40.080 –> 03:49:57.409
    Omer Har: I would say 30 or 40 years. Now, like, as Sean said, it’s not about clicks anymore. So if you think about Google. It’s not about going into Google, and kind of look for a query and see what the response is. Actually by asking what you want to know, and there’s agent that run around

    1297
    03:49:57.660 –> 03:50:20.259
    Omer Har: all the information and provide you with summarization a good summarization of what you need to know. And I think that’s kind of where, when you say, first, st I love software software as a user, I think that’s a really good way to capture it. And I definitely agree that many of the tools that currently are serving humans. And if we’re talking about, go to market tool, think about those prospecting tools

    1298
    03:50:20.662 –> 03:50:40.339
    Omer Har: which are designed for humans which are no longer needed in from that not the human needed the process itself. The tool will not be needed to be, you know. Click, click, click. You just write down what you want, and an agent or agencies, as you said will pull the data that you need or do whatever workflow you want to.

    1299
    03:50:40.890 –> 03:50:41.690
    Shawn Harris: Right.

    1300
    03:50:42.740 –> 03:50:43.699
    edward Corno: Yeah, I’d agree.

    1301
    03:50:44.170 –> 03:50:44.960
    Hyoun Park: Excellent

    1302
    03:50:45.536 –> 03:51:09.929
    Hyoun Park: on that note thinking about how you’re designing these agents. You know, we’re gonna start high level and start getting deeper throughout the hour. But still staying a little high level. For right now, as you’re thinking about how to deploy these agents, and and, as Sean said, create an agency of agents. I I love that. By the way, you know, what are you thinking about from? And

  • 1303
    03:51:09.930 –> 03:51:19.499
    Hyoun Park: kind of an ethical perspective, and in terms of the governance issues of bringing agents into an enterprise. And Darren, I’ll start with you.

    1304
    03:51:20.230 –> 03:51:42.489
    Darin Patterson: Yeah, I think, and specifically, as we talk about go to market AI agents, it’s going to be really important to think through different elements of transparency. And and one thing I think about actually, a lot is to what extent are agents required to self identify themselves as an AI agent. If we’re talking about outbound prospecting, there’s certainly increased rules

    1305
    03:51:42.630 –> 03:52:07.450
    Darin Patterson: and regulations. Fcc is working on that as we speak to 30, and here in the Us. About whether an AI agent needs to self identify as an agent, and whether I have to opt out or opt in what that process looks like. But I also think about it in a really interesting way inside of a real organization. I I can imagine, in an easy thought experiment 2 years from now.

    1306
    03:52:07.450 –> 03:52:18.699
    Darin Patterson: joining a new company, having a few zoom calls with 3 or 4 colleagues and people on the phone today and then learning a month later that one of them is actually an AI persona

    1307
    03:52:20.120 –> 03:52:22.780
    Darin Patterson: that would be easy to imagine. And I mean.

    1308
    03:52:22.780 –> 03:52:23.130
    Shawn Harris: Sure.

    1309
    03:52:23.270 –> 03:52:29.140
    Darin Patterson: Feel about it when I ask it how its kids are doing now. The weekend was. It’ll be interesting reflection

    1310
    03:52:29.550 –> 03:52:32.340
    edward Corno: On, that.

    1311
    03:52:34.800 –> 03:52:35.649
    Shawn Harris: It’s happening.

    1312
    03:52:35.840 –> 03:52:36.899
    Hyoun Park: Very interesting

    1313
    03:52:37.297 –> 03:52:41.500
    Hyoun Park: Ed. I’ll I’ll pick on you for a little bit when you think about this from end.

    1314
    03:52:41.500 –> 03:52:43.558
    edward Corno: You’re not picking on me.

    1315
    03:52:45.250 –> 03:52:47.469
    Darin Patterson: So my knowledge, my opinion, that’s all.

    1316
    03:52:47.470 –> 03:52:48.810
    Hyoun Park: Yeah, yeah.

    1317
    03:52:48.810 –> 03:52:49.360
    edward Corno: Yeah.

    1318
    03:52:50.160 –> 03:53:01.329
    Hyoun Park: Yeah. But when you think about this from an enterprise, governance, perspective, obviously something you’ve had to think about with a lot of different types of emerging technologies. So where? Where do the weak spots happen?

    1319
    03:53:01.640 –> 03:53:09.230
    edward Corno: I mean, that is one of the strongest areas that Ibm’s been. And I’m not doing an Ibm commercial. I just wanna make sure everybody understands where Ibm’s position is.

    1320
    03:53:09.520 –> 03:53:32.030
    edward Corno: we pretty much took the whole concept of AI back in 1952 with the model 51 80, and actually proved out that AI could exist. And so that’s why I say it’s AI 3 dot o. And now it’s 3.1, and each one of these iterations gets more sophisticated and better based on the modeling everything else that goes along with it.

    1321
    03:53:32.260 –> 03:53:38.219
    edward Corno: One of the biggest, strongest points that Ibm puts out there is our governance side.

    1322
    03:53:38.380 –> 03:53:41.829
    edward Corno: And so we have a product called watsonx.gov.

    1323
    03:53:42.180 –> 03:53:48.419
    edward Corno: Governance is one of the most, I’d say popular type of technologies that we market

    1324
    03:53:48.610 –> 03:54:08.859
    edward Corno: to our businesses and clients clients across the board, because basically, they want to make sure that they can control what’s happening from an elucination standpoint. What biases banks are interested in that. All these different companies are because they want to make sure that those agents communicate or interact with the end user properly.

    1325
    03:54:09.080 –> 03:54:27.420
    edward Corno: And it goes back to the concept of how do you control these agents and where they go and what they do? And that’s like the next level for the governance piece of it, because at this point most companies are looking at governance as a point of view for really specific areas that address law and regulations.

    1326
    03:54:27.650 –> 03:54:55.570
    edward Corno: But the next level is all right. How do you govern these agents that are just kind of running around and and just learning things on their own without any kind of guardrails. And that’s what makes our you know, what’s the next type platform so strong, and why we’re doing so well. And I think if you look at that from that perspective of how you actually create it, engineer it, develop it, and then design it. That has to be a key part of it when you test it out.

    1327
    03:54:55.710 –> 03:55:01.250
    edward Corno: And I think you’re gonna see more governance interjected into the overall creation of a genic. AI.

    1328
    03:55:04.020 –> 03:55:14.960
    Hyoun Park: Yeah, interesting to hear about that, the governance of independent agents, especially as because that’s part of that promise of age, these agents that they are supposed to be able to work independently on their own.

    1329
    03:55:14.960 –> 03:55:30.819
    edward Corno: A lot of people don’t talk about this with AI. It’s really interesting is the data integrity, the data capabilities, the data quality. Because if you don’t have data quality, if the data is not correct automatically, that agent is going to go out and do different things that you shouldn’t be doing

    1330
    03:55:31.110 –> 03:55:43.389
    edward Corno: so, you’ve got to go in and cleanse the data and make sure the data is correct. In the 1st place, when you’re building up that model, that’s the importance of having a data scientist on board or data scientist to be able to do that.

    1331
    03:55:44.480 –> 03:55:59.850
    Omer Har: But I think that that’s by the way, it’s true. Not only for AI even, you know, it seems like a long time ago, but even when Ml. Kind of broke into our live, you know, if you have, you know, the the model is as good as the data that you feed into it. That’s a cliche that everyone knows.

    1332
    03:55:59.850 –> 03:56:00.930
    edward Corno: It’s so true.

    1333
    03:56:00.930 –> 03:56:16.589
    Omer Har: It is, it is completely 100% true. But I think that if the data is wrong, then, regardless of what you do, even human will get it wrong. Human will get it wrong. Agent will get it wrong. Model will get it wrong. But once the model, like I think that the data needs to be as clean as possible. It’s never perfect.

    1334
    03:56:17.430 –> 03:56:31.810
    Omer Har: But the question is how you know how Asian can help us. Kind of auto overcome. I think one of the things that Ml. Is not doing very well with while human can is that if they’re seeing, let’s say, a small outlayer in the data

    1335
    03:56:31.920 –> 03:56:50.479
    Omer Har: a human will identify. Those will say, well, this might be an outlier, and not necessarily. I’m going to pull all my attention to it. Ml. Will usually not. It will just spit out whatever that may be with these weights. And I think that AI in many cases can say that something is wrong. In some cases we’re seeing

    1336
    03:56:51.079 –> 03:56:56.529
    Omer Har: for example, we’re using AI extensively in order to do Qa on our data.

    1337
    03:56:56.670 –> 03:57:14.109
    Omer Har: In many cases they, the model will say, you know, it doesn’t seem. For example, if you have an email for a person he’s working in Company A, however, his email, the domain is different. AI will immediately identify. There is something wrong with the email and point that out. That’s just an example.

    1338
    03:57:14.655 –> 03:57:16.480
    Omer Har: To how quality can work.

    1339
    03:57:16.720 –> 03:57:31.690
    Tooba Durraze: I think that’s like the explainability part of it. Right? It’s that’s like, almost goes without saying, when we talk about governance, I think you’re also talking about the flip side of the coin where we just want to know where the answers are coming from, and with what degree of confidence I assume.

    1340
    03:57:32.171 –> 03:57:47.760
    Tooba Durraze: I don’t think data again. Maybe a spicier take. I don’t think data cleanliness is going to be an issue in the sense. I don’t think it’s something humans are going to deal with. I think it’s something that you layer an agentic layer on top of.

    1341
    03:57:47.790 –> 03:58:06.109
    Tooba Durraze: I think, with the advent of like synthetic data, synthetic users. Even, I think all of that is in service to kind of fix this idea of like, humans will never get the data right, it’s always going to be messy in that sense. So in order to best use these platforms like, you have to find a technology solution to that, for sure.

    1342
    03:58:07.510 –> 03:58:25.940
    Hyoun Park: Kuba. That’s a really interesting point, because I feel like a lot of vendors are right now saying AI, especially called legacy vendors are saying, AI is all about the data. Data, cleanliness and data quality are paramount to even getting started with building models and building agents.

    1343
    03:58:26.402 –> 03:58:50.979
    Hyoun Park: But what are the actual biggest challenges of getting started to create an agent in production. You know what what level of call it cleanliness or visibility or observability? Do you really have to get to, to be able to get started? Or is this a different type of technology where it is less, call it deterministic or fixed and more conceptual in nature.

    1344
    03:58:52.840 –> 03:59:11.029
    Tooba Durraze: I think it’s I think it’s there’s gonna be. There’s a mixed bag here. So in the current world that we’re in, I think the best way to kind of productionize an agent is to again, to Ed’s point, like, have some level of governance, have love, some level of observability that a human can interact with. So you force

    1345
    03:59:11.180 –> 03:59:38.920
    Tooba Durraze: some sort of like reinforcement learning some sort of like human in the loop feedback that allows you to get there faster. That’s the world. Now, I think in my head I say 3 to 5 years. Maybe it’s a little bit longer. I do think that there’s this concept of like agency, like evaluators, etc. That kind of help determine when a product is ready to graduate from one level to another level, from like the pilot to actually productionize.

    1346
    03:59:39.110 –> 03:59:46.419
    Tooba Durraze: The one thing I’ll say is increasingly with a lot of technical founders. This concept of, like Mvp. Is

    1347
    03:59:46.610 –> 03:59:54.870
    Tooba Durraze: very, very vague now, and like Mvps, are actually a productionized app like, you’ll see a lot of productionized apps that are just

    1348
    03:59:54.900 –> 04:00:23.780
    Tooba Durraze: still what you would consider from like a bigger Saas Company’s perspective. It’s like tiny little pilots that they’re running. And I think that it’s good to look at that and make sure that we’re not conflating technical limitations with like roi limitations in the sense like they might just pilot things longer to make sure the right kind of value is proven out versus taking a long time to deploy something in production at a massive scale, because, like technically, it can or cannot work. So I think they’re like 2 lenses. There.

    1349
    04:00:24.930 –> 04:00:28.363
    Hyoun Park: Yeah, interesting, Omar. I know you have a couple of thoughts here.

    1350
    04:00:28.650 –> 04:00:29.119
    edward Corno: Yeah.

    1351
    04:00:30.061 –> 04:00:31.668
    Omer Har: No, I think that’s

    1352
    04:00:32.790 –> 04:00:43.239
    Omer Har: specifically, in terms of yeah, I’m going to go back to the ethical point like, how do you think of that from the ground up. And I think that specifically, with ground, a go to market, I think that privacy

    1353
    04:00:44.090 –> 04:01:03.369
    Omer Har: is an issue from that perspective. And and so in the last 5 years or 6 years between Ccpa and Gdpr, you can see kind of a massive, the whole industry kind of focus on how to bring. You know those regulations to life, and actually make sure that you actually withstand those regulations

    1354
    04:01:03.370 –> 04:01:25.979
    Omer Har: that was hard. And now a new standard is coming up with how we interact with AI and how we make sure that Pii is safe within that area. If you think about that, you shouldn’t train data on Pii as an example. But there is a hundred of those. So I think that one of the issues, or one of the difficulties that we currently see. And by the way, we see.

    1355
    04:01:26.850 –> 04:01:29.309
    Omer Har: we see a lot of a lot of companies.

    1356
    04:01:29.330 –> 04:01:46.379
    Omer Har: big and small, not only enterprise already are deep inside and kind of deep into the question of what are they letting their data and their Pii? You know, run on, are you giving, you know, for anyone any AI can run. Can you run deep? Seek with? You know a list of all of your employees.

    1357
    04:01:46.390 –> 04:02:05.069
    Omer Har: for example, as a question, should we do that or not, and I think that this is something that regulation is already working on. But I think that there is one thing of defining the regulation, and then another layer of how to adopt those regulations. Make sure that you actually, you know, meet those criterias that can take a much longer while.

    1358
    04:02:07.140 –> 04:02:24.970
    Hyoun Park: Interesting, excellent, and I know, Darren. You have some thoughts about this, both in your current role at make as well as your prior history, working in automation and across a variety of marketing.

    1359
    04:02:25.220 –> 04:02:26.420
    Darin Patterson: Tasks.

    1360
    04:02:26.680 –> 04:02:54.370
    Darin Patterson: Yeah, I think I’ll pick up on what Tuba mentioned as maybe the hot take of data quality. And so I think so we all agree that data quality matters, that data going into these agentic systems is going to be what actually differentiates one well designed system and agency versus another that has unpredictable outcomes. But I think it’s worth identifying. Why has data quality always been a challenge? If we look at specifically go to market efforts?

    1361
    04:02:54.370 –> 04:03:09.579
    Darin Patterson: We all know that, you know well known, well worn memes about trying to get salespeople to enter data into salesforce or whatever the Crm like. That’s been the push for years and years and years. And so we end up with bad data, and we’re always trying to get them to

    1362
    04:03:09.580 –> 04:03:34.759
    Darin Patterson: their data and turn it into deterministic very fine data points. But I think that’s 1 of the greatest opportunities with AI generally just generative. AI is very good at capturing calls call notes from a sales call, identifying the relevant data points. And of course, using those in a much more qualitative manner than a highly deterministic. You always have to have the data in the exact same structure. So

    1363
    04:03:34.760 –> 04:03:54.219
    Darin Patterson: in some ways, I think the the data quality thing is is going to be less of a challenge for organizations to nail really effectively. And AI agent systems are actually very good at dealing with this more kind of nuanced data as opposed to this kind of, you know, old school data structures. You got to get everything in the exact, same format and perfect quality.

    1364
    04:03:55.920 –> 04:04:00.390
    Hyoun Park: Excellent. And so, as you’re dealing with these kind of less

    1365
    04:04:00.620 –> 04:04:24.339
    Hyoun Park: per call, less nuanced, more conceptual systems we are also dealing with the challenge of having these agents that are trying to conduct multiple steps and multiple workflows and engaging with each other. So, Sean, you brought up earlier the idea of agents working with each other. The idea of this agency of agents. How do you keep agents from

    1366
    04:04:24.610 –> 04:04:40.180
    Hyoun Park: frankly forgetting what they’re trying to do before they get to the endpoint. When we’re talking about these more complex go to market processes, these projects, these multi-layered conceptual capabilities.

    1367
    04:04:40.180 –> 04:05:00.329
    Shawn Harris: Yeah, not not to be to death. I definitely think, like the framework that you use and how you’ve constructed your framework to take advantage of understanding some degree of state knowledge as well as like. How you deal with kind of in context, short term, and long-term memory is super important when dealing with

    1368
    04:05:00.330 –> 04:05:22.260
    Shawn Harris: situations where you may start something on day one and not revisit it until day 15, and then want to reflect on something that happened on day 7, like you have to have in place mechanisms that support this. We certainly take a perspective of this is not just about like an application, like, we’re thinking about a system

    1369
    04:05:22.260 –> 04:05:24.119
    Shawn Harris: right? There’s a lot of

    1370
    04:05:24.492 –> 04:05:40.490
    Shawn Harris: there’s a systems thinking perspective that you need to have about orchestrating this, this environment of agents, and it has taken a lot of a lot of thought. You know, like we, you know, when it comes to things like data like

    1371
    04:05:40.490 –> 04:05:58.230
    Shawn Harris: we have an agent that deals with data quality, right? And if something’s not good, it pushes back and says, You know what this is inadequate. This is not good. This is not what would be considered gold standard to be a part of the input to go into kicking off.

    1372
    04:05:58.230 –> 04:06:22.650
    Shawn Harris: You know the planning and subsequent workflows you have to. And this is where the hundreds of agents come in, because you’re having to kind of think about. Well, what am I trying to accomplish. How am I going to find, if you will, the right place in the models distribution? And of course, like it’s not even just always about the model like we, you can use all sorts of.

    1373
    04:06:22.660 –> 04:06:33.480
    Shawn Harris: you know, algorithms and approaches to getting one of these systems to work as expected. So I think it is around like taking, like a systems

    1374
    04:06:33.780 –> 04:06:53.580
    Shawn Harris: point of view on building these things and giving them their own little like realm to work in. And it takes good design, good design before jumping in with all of this. I you know someone mentioned deep. You know these models aren’t. If you’re using like an

    1375
    04:06:53.680 –> 04:07:21.849
    Shawn Harris: yeah, you know your own hosted version of any model, and nothing is learning these models except for if it’s Tuba, you know, doing liquid stuff. These models are static right? They don’t just learn because it happened. It learns because there’s a system that is keeping track of information, that it references as a part of the context window, and it feels like it learned something. But it’s just referencing something something else. Right? So there’s I have a lot of customers who will talk about. Oh, is this thing going to learn like?

    1376
    04:07:22.620 –> 04:07:38.329
    Shawn Harris: It doesn’t work like that right? And I think that’s a point of clarification, we should make sure is out in the world, you know, learning is a very intentional activity, not something that just happens. Because I said, You know, hello to something, you know.

    1377
    04:07:38.330 –> 04:07:40.263
    Hyoun Park: AI, magic.

    1378
    04:07:41.230 –> 04:07:46.660
    edward Corno: It’s not. It’s like anything else. Like, if you want a computer vision model, you had to train on a ton of stuff.

    1379
    04:07:47.290 –> 04:07:53.300
    Shawn Harris: These large language models are fixed, you know, so, except for the liquid ones.

    1380
    04:07:53.320 –> 04:08:03.220
    Tooba Durraze: I empathize with that point so much, because I feel like every enterprise account that you get into Rfps are really really long. And the number one question.

    1381
    04:08:03.220 –> 04:08:03.910
    Shawn Harris: Number one.

    1382
    04:08:03.960 –> 04:08:16.160
    Tooba Durraze: It’s like, Yes, like, are you training on my data? So I agree with you, there needs to be some clarity around that. I also think, the definition of privacy changes, because I think that

    1383
    04:08:16.630 –> 04:08:37.900
    Tooba Durraze: a lot of times the hangup to like not sharing your data between things is like this other party is getting this unfair advantage when folks start getting, you know, paid for the data that they’re sharing on an individual level on organization level. Then I feel like that. Barrier doesn’t go away, but it changes quite dramatically, so I can see a future state where

    1384
    04:08:38.100 –> 04:08:57.860
    Tooba Durraze: you know I’m incentivized to have better models out there, because as an organization, I function better if there are better models out there. So I’m incentivized to. Then also say, I’ll share some data trade you for, like some amount of dollars against the data that I’m sharing in order for your models to get better. So I think there’s like a new services layer there.

    1385
    04:08:58.400 –> 04:08:58.960
    Shawn Harris: Yeah.

    1386
    04:09:01.480 –> 04:09:24.577
    Hyoun Park: Yeah. So, Ed, I was curious from your perspective, when you think about strategic goals, and maintaining that attention, how? How are you balancing? You know the the need for that personalization and actual goal attainment versus the balance of actually keeping things private.

    1387
    04:09:25.940 –> 04:09:31.259
    edward Corno: So personalization versus keeping them private. I mean, that is a challenge. Because.

    1388
    04:09:31.780 –> 04:09:40.890
    edward Corno: you know, when you look at and and Ibm put that out a while ago that we would back up any kind of liability issues when you’re talking about, you know, copyright, etc.

    1389
    04:09:41.386 –> 04:09:57.729
    edward Corno: But you know, it’s 1 of those things where you know private. How do you define that? And then personalization? It’s like we have to give them permission to whatever entity we’re going in and pulling the data and interacting with that’s kind of kicks it off, starts it off.

    1390
    04:09:58.500 –> 04:10:03.759
    edward Corno: And so I mean, that kind of enables some of the privacy issues that may come up.

    1391
    04:10:03.910 –> 04:10:11.960
    edward Corno: And so I I think of a perspective as more from like the user side when you talk about privacy and then personalization itself.

    1392
    04:10:12.060 –> 04:10:21.349
    edward Corno: and getting back to like agents, learning correct in the sense that they don’t really learn that deep except some of the newer type agents that are being developed.

    1393
    04:10:21.570 –> 04:10:34.760
    edward Corno: But they do look at the task and and see, okay. Ed. Corno went out, and every Friday at 4 Pm. He has a meeting it. Could, that agent would automatically then set up that meeting.

    1394
    04:10:35.020 –> 04:10:50.770
    edward Corno: See what I’m saying. So there’s certain restrictions based on the the tasks that are done. So those agents will analyze that task, come up with a task like type creation, and then execute on it, and then do an assessment, and then adapt from it.

    1395
    04:10:50.770 –> 04:10:56.439
    Shawn Harris: Just not the model. That’s that’s what I was saying, not the model it does that like, but agents with memory and all that.

    1396
    04:10:57.600 –> 04:11:14.510
    edward Corno: And then from like the actual language models. When you take a look at, say, land chain, which is what we use consistently, and we have granted. That’s our large language model at Ibm. But you got Meta, too, of Llama and others right? And you know, deep with Google. But all those

    1397
    04:11:14.600 –> 04:11:38.220
    edward Corno: hall have kind of, I think, issues in the sense of you want to go with the small language models because it doesn’t take up a lot of compute. It’s easier to manage. The data. Quality is better. The agents are easier to come from and actually give birth to. So I mean, there’s a lot of advantage from going that direction. But as far as like privacy versus personalization, that’s a really

    1398
    04:11:38.640 –> 04:11:49.849
    edward Corno: kind of nebulous area, except that you know, the governments of the world, and and laws are coming about and say, especially over in Europe. Privacy is a big concern.

    1399
    04:11:50.600 –> 04:11:56.100
    edward Corno: and so the permission side is easier here in United States than it is

    1400
    04:11:56.450 –> 04:11:58.889
    edward Corno: over in Europe and other countries.

    1401
    04:11:58.890 –> 04:11:59.580
    Tooba Durraze: Hold on!

    1402
    04:11:59.580 –> 04:12:02.800
    edward Corno: But I think that’s gonna change to a certain degree.

    1403
    04:12:03.000 –> 04:12:14.960
    edward Corno: And I think maybe if everybody saw the news recently that basically Google can take your information if you give it permission to do it, to then use and build out any kind of model they want.

    1404
    04:12:15.140 –> 04:12:33.309
    edward Corno: And so that actually was handed down, I guess, a week or so ago in the United States. So I mean, those are the things to think about when you’re going out and giving okay, permission to say, Go on co-pilot or chat. Gpt, you gotta be concerned about all right. Is this going to impact my privacy.

    1405
    04:12:34.780 –> 04:12:35.510
    Hyoun Park: Thanks.

    1406
    04:12:35.730 –> 04:12:44.309
    Hyoun Park: So as we’re talking about bringing models into production. Obviously, this is not just about a proof of concept. This is about

    1407
    04:12:44.310 –> 04:13:07.970
    Hyoun Park: maintaining agents as ongoing systems and in production. And so what needs to be taken into account? Because, you know, this is actually a pretty complex challenge. We’re talking about data access, putting some sort of model on top of that, having some sort of agent interface that you have customized and then regulating the outputs. Right? So we’ve got this kind of

    1408
    04:13:07.970 –> 04:13:16.100
    Hyoun Park: sandwich or stack, or what have you? I know, Omer, you had some thoughts here, so I’ll start with you.

    1409
    04:13:16.100 –> 04:13:28.530
    Omer Har: Sure. I think that again. I’m going back to kind of defer to Shawn Shawn. I I think that what I do most of this discussion. So I apologize for that. But I think that you mentioned kind of it’s not.

    1410
    04:13:28.590 –> 04:13:53.440
    Omer Har: It’s not as it’s just not using. Just a model is, is a system, and I don’t remember who coin coined the the phrase of kind of cognitive system. How do you make those thinking machines thinking system that have their guardrails and have the access to the data, and how you build agent on top. And I think that when you go into production and you’re trying to think about how you move this. How do you

    1411
    04:13:53.783 –> 04:14:03.699
    Omer Har: get? It’s not moving into production. It means getting the value out of the system, getting the roi back of the system. When you think about that, you need to think about how the system

    1412
    04:14:03.920 –> 04:14:21.039
    Omer Har: end to end will work like, how do you do the the guardrails? How do you make sure the data quality is there? How every part of that part of that system works in tandem in order to get to your desire outcome, and then the Roi on the other side of that. So I think that

    1413
    04:14:21.400 –> 04:14:27.410
    Omer Har: you know when we think about moving things into production. We’re actually moving about. We were actually thinking about all of the.

    1414
    04:14:27.810 –> 04:14:44.620
    Omer Har: you know. I don’t want to say little details, but it’s big details in many cases that you that are comprising that you need to do in order to push that into actual production system, you know, how do you make sure that the data is there. How do you make sure that the non-deterministic

    1415
    04:14:45.560 –> 04:14:48.289
    Omer Har: flow will actually work in at least

    1416
    04:14:48.470 –> 04:15:06.429
    Omer Har: X percent of the time you need to define the Kpi. You need to define the Evals to actually run that. So I think that that’s kind of the the main point. When you want to push something to production. Start with thinking about the system, and not, as an Api call it’s not a wrapper over an Api, but actually what the system is in order to get their roi back.

    1417
    04:15:08.350 –> 04:15:12.364
    Hyoun Park: Interesting. Darren, I’ll I’ll ask for your perspective as well.

    1418
    04:15:13.230 –> 04:15:38.759
    Darin Patterson: Yeah, one of the things. We haven’t had a chance to go deeply into it, but it connects to many of the topics we’ve just spent time on again, the whole point of agents deployed in your organization and your existing enterprise systems and your existing workflows is that they have access to tools those tools are are all across your organization. The same tools that you give to your employees when you onboard them, to make sure that they can be effective in their job.

    1419
    04:15:39.065 –> 04:15:59.869
    Darin Patterson: But as we’re particularly, you know, it seems crazy. But we’re still in early days of AI agents. So it’s super important to recognize the controls that you have to manage access to those tools. And we take a lot of this stuff for granted. Those of us that have been in software for a long time, that you know, when I log into hubspot or notion. And I search for a record.

    1420
    04:15:59.870 –> 04:16:23.809
    Darin Patterson: there’s a whole set of permissions getting applied to me and how I interact with that software. And it’s a very clear set of rules about what records I can access, and what actions I can do. And so additional consideration as people are starting to think about how to deploy these into their systematic workflows is thinking about. What access should AI agents actually have to different systems and processes

    1421
    04:16:23.810 –> 04:16:50.219
    Darin Patterson: and making sure that you don’t just carte blanche, give your AI agents access to your entire sales force. There’s a lot of bad things that could ultimately happen, and unintended consequences with that. And so, really, having a thoughtful approach to thinking about what are the right tools and the right other deterministic processes or sets of things that AI agents should be able to interact with. And of course you have to understand your business process exactly

    1422
    04:16:50.220 –> 04:17:06.970
    Darin Patterson: as Omar referred to. It’s so often that people say, Hey, we want you to help us digitally transform our business, but they don’t understand their business. And so it’s hard to apply that to whether it’s deterministic workflows or agentic workflows. You have to understand your business in order to apply AI in it.

    1423
    04:17:07.520 –> 04:17:30.630
    Hyoun Park: Yeah, let me dig a little further into that, and I’ll ask you for a follow up. But I also want this to be an open question. So what does that look like? Because with software, we’re used to having this control plane with all sorts of little clicky boxes that we use to manage governance. But, agents, I I don’t think work like that. Are we writing a contract? Are we figuring out how to do this at a code base basis, you know. How would you think about that, Darren?

    1424
    04:17:31.610 –> 04:17:56.419
    Darin Patterson: Yeah, in in our specific area. And and it makes specifically, we spend a lot of time thinking about how you can visualize agents within your existing workflows and processes, and visualize how agents work together in the kind of agent to agent interactions that Shawn mentioned earlier. So there is a a new skill set, absolutely being developed across lots of different of your traditional roles and responsibilities that

    1425
    04:17:56.420 –> 04:18:01.800
    Darin Patterson: that you do need to have a capability to understand how AI works in a business process.

    1426
    04:18:01.800 –> 04:18:23.270
    Darin Patterson: and very often, whether you want to or not, you’re asking your employees to design and manage a process. And you can do that visually, even if the interactions happen autonomously behind the scenes when you’re not actually working with it. So being able to understand the relationship between tools and agents and the inputs and the outputs, and how to evaluate success.

    1427
    04:18:23.270 –> 04:18:32.449
    Darin Patterson: Those are all the things that we expect are coming closer and closer to actual business operators as opposed to just it, teams that are that are coding these things up in in Python, or wherever else.

    1428
    04:18:32.830 –> 04:18:56.280
    Tooba Durraze: I think if it’s a homebrewed solution. By the way, this I heard this analogy the other day, where, like observing an agent architecture, it’s like, basically, you’re in a cockpit on autopilot. But the pilots still need to be there just in case something goes wrong. If it’s homegrown, like you can imagine that you have pilots within your organization itself, and for folks who are kind of building and selling this

    1429
    04:18:56.280 –> 04:19:11.529
    Tooba Durraze: as well, then you need to kind of create that that connection, if like, if I have that skill set, my customer is not going to be successful unless, like I teach some version of that skill set to them on how to observe it. So the way like observability looks around these agents

    1430
    04:19:11.530 –> 04:19:28.929
    Tooba Durraze: is, I think, going to vary through time. I agree with you, Darren. I think they need to be very visual, because I think people visually adapt to workflows in that way a lot better. But I do think like someone needs to like, really, principally understand, like the architecture, like what’s happening behind the scenes.

    1431
    04:19:29.210 –> 04:19:38.147
    Shawn Harris: Yeah. 1 1 of the things that you know we we have observed, is that so? For a lot of the solutions that

    1432
    04:19:38.650 –> 04:19:43.720
    Shawn Harris: a given agent needs to access. Those solutions will have, you know.

    1433
    04:19:44.010 –> 04:19:52.620
    Shawn Harris: Api integrations that are going to be oauth based that are adhered to whatever rbac

    1434
    04:19:52.840 –> 04:20:10.109
    Shawn Harris: you. You know, put in place. And so for many external systems, you can rely on existing methods for controlling access to, you know, read, write, and delete. And what have you using these

    1435
    04:20:10.130 –> 04:20:18.430
    Shawn Harris: these these integrations? Right with Oauth when it comes to things like, you know, if you want access to

    1436
    04:20:18.450 –> 04:20:31.479
    Shawn Harris: to Gmail or or, you know, M. 365, environment. There are both firms offer, like graph Apis, where you can get access to specific things with very specific rights and privileges

    1437
    04:20:31.480 –> 04:20:54.649
    Shawn Harris: that again adhere to the way that you would control the user. I don’t think, certainly, that everything has been covered for. But when you look at again, whether it be an existing Saas solution, or it be kind of core productivity apps. There are mechanisms now that you can leverage to give your agent the appropriate access. So it’s not just running rampant in in the environment. That’s just not

    1438
    04:20:54.650 –> 04:21:17.690
    Shawn Harris: how this will work. But again, I think even there before you’re going to production. To go back to kind of the inquiry earlier. You have the means and the scale. Get an army of people to red team the thing right. Ask it to ask it to write a haiku about your sales data, you know, and get a joke. But you know, get it to try and do things that you don’t want it to do.

    1439
    04:21:17.690 –> 04:21:26.420
    Shawn Harris: I think, is a critically important step in ensuring its readiness for the real world.

    1440
    04:21:26.590 –> 04:21:44.030
    Omer Har: So there is a just as an anecdote. There is a famous experiment that was run. I don’t know about 3 or 4. It actually was a competition that run about 5 or 6 months ago, that they gave a secret to an Llm. And then they ran a competition. Who can pull the data, who can pull the secret out of it.

    1441
    04:21:44.030 –> 04:22:00.680
    Omer Har: And it was about, you know, not too long after that, the beginning of the competition. Surely before they anticipated, someone were actually were able to crack the code and pull the secret from it. And I think that one of the one of the things that you kind of mentioned is that you know the

    1442
    04:22:01.384 –> 04:22:16.885
    Omer Har: the not authentication, but actually the authorization. Part of of all of that will be part of the kind of the current Api. The current Api authorization that we have. The point is that we usually, when you give a a You know, an employee

    1443
    04:22:17.440 –> 04:22:44.770
    Omer Har: the authorization to actually use a system, you expect him to be honest and expect him to be loyal to the company, and so on. Not necessarily the same traits and the same characteristic can be applied to Llm. So the whole concept of who do you give permission, and what he can do with that permission might be changing. I’m not a security expert myself, but it seems that not necessarily act the same way.

    1444
    04:22:46.280 –> 04:22:52.050
    Hyoun Park: Alright amazingly. We are almost near the top of the hour. This has been an amazing hour.

    1445
    04:22:52.050 –> 04:22:53.480
    Shawn Harris: More time, more time.

    1446
    04:22:53.910 –> 04:23:16.870
    Hyoun Park: No, it’s such an educational conversation. I’m hoping this continues in some way, shape or form afterwards. But I’m gonna go to my final question here, so that we’ve talked a lot in detail about how agents can help optimize go to market efforts and the challenges that we are gonna have in bringing these agents to life?

    1447
    04:23:17.363 –> 04:23:42.050
    Hyoun Park: At the end of the day. How can we ensure that these agents help people to do a better job at the end of the day. How do we help employees to actually get value? Added outputs from all these agents that we are creating right now, Ed, I’ll start.

    1448
    04:23:42.050 –> 04:23:44.590
    edward Corno: Yeah, I just wanna make a quick comment about how

    1449
    04:23:45.050 –> 04:23:53.089
    edward Corno: you know you go to markets. Critical and genetic agents will allow that to happen. But it’s really like focusing on the use cases.

    1450
    04:23:53.430 –> 04:24:15.620
    edward Corno: For instance, we really haven’t touched on it towards the end, or anything but like market analysis and insights. That’s a natural for a genetic. AI is to go out, pull out that information. Market trends kind of insights from the competition, basically how it interacts with consumer behavior. What’s going on there then? Like automated customer engagement, too.

    1451
    04:24:15.630 –> 04:24:39.500
    edward Corno: which we’re seeing from our clients wanting to know more about, which is, how can you look at basically that personalization outreach can work out and how you can really kind of target basically your customer and optimize that feedback from them. Because you as a marketer, want to know how well, whatever your product or service is, and genic agents can allow that to happen. The last one I’ll talk about which is

    1452
    04:24:39.500 –> 04:24:57.019
    edward Corno: kind of dear to my heart, which is basically sales enablement and lead priorities. Because if you can really target someone that’s about ready to buy whatever your product services through a genetic agents that speeds up the whole process and finally get you that close.

    1453
    04:24:57.060 –> 04:25:08.789
    edward Corno: So I mean, that’s really a huge benefit. And then we talked about you have to build off of that and go and reverse from a technology standpoint to be able to get that information out there to whatever client you have.

    1454
    04:25:11.000 –> 04:25:11.779
    Hyoun Park: All right.

    1455
    04:25:12.160 –> 04:25:13.780
    Omer Har: Omer. Same question.

    1456
    04:25:14.320 –> 04:25:14.990
    edward Corno: Yeah.

    1457
    04:25:15.310 –> 04:25:16.760
    Omer Har: Yeah, I think that the

    1458
    04:25:18.100 –> 04:25:41.359
    Omer Har: it’s as as always when you buy anything to the business, the technologies in particular, you start with the business case. And I think that one of the things that I’m that we’re very focused in explorium is, how do we solve a specific business case? What we usually do is working with companies that actually build, go to market agent and go to Market AI on top of our infrastructure. And AI

    1459
    04:25:41.360 –> 04:25:59.329
    Omer Har: and one of the things that is always critical for us is to 1st identify the business case. And then we can talk about technology. The whole company is super kind of technological in heart. So it’s usually fun to do that. But the 1st step that you want to do is to make sure that the business goals. Not only the business goals for the

    1460
    04:25:59.330 –> 04:26:28.420
    Omer Har: decision maker and the kind of the champion that we work with is identify what exactly they want to do. In our case. It might be that the go to market need to be quicker. They want to avoid the hassle of combining 10 different data sets, regardless of what they need to do. First, st you need to identify that. Then you need to go back and start thinking about how you know how agent can help. How agent in your technology specifically can help him do that. And I think that some of those, in some cases, especially

    1461
    04:26:28.420 –> 04:26:39.569
    Omer Har: because things are moving so fast. In many cases you’re getting trapped in the hype and not in the value that you’re seeing. And that’s perfectly fine, especially for a startup, because everything is moving so fast.

    1462
    04:26:39.949 –> 04:26:48.500
    Omer Har: It’s it’s hard to sometime understand what is real and what is not, and what is already proven and what is not. But I think that once, if you get to every business

    1463
    04:26:48.860 –> 04:27:12.200
    Omer Har: issue or problem that you’re trying to solve, starting with, what are you trying to solve? Like, what is the actual business issue? Not in terms of technology. And AI and agent. Actually, you know how the business will work with and without your solution or with the solution that you’re trying to create. Then that’s that’s that. You need to start with that. After that, if you keep eyes on that ball, the rest will follow.

    1464
    04:27:12.200 –> 04:27:13.259
    edward Corno: That way. True.

    1465
    04:27:13.970 –> 04:27:14.910
    Hyoun Park: All right.

    1466
    04:27:16.230 –> 04:27:19.067
    Hyoun Park: Shawn. One last thought.

    1467
    04:27:19.540 –> 04:27:29.860
    Shawn Harris: Yeah, yeah, no. I think like how, how they can help like we, we should never lose sight of the physics of human change right, and that being things.

    1468
    04:27:30.040 –> 04:27:57.240
    Shawn Harris: things need to be more incremental in nature for us, right in terms of how we onboard things so like not forgetting, like there’s a an ideal. I forget the architect or designer. But it was this idea of like Maya. It was like most advanced yet acceptable right? And so it’s it’s looking at. How can you bring forward? And clearly? In understanding a customer on a given workflow?

    1469
    04:27:57.240 –> 04:28:00.200
    Shawn Harris: How can you inject something that is

    1470
    04:28:00.200 –> 04:28:06.349
    Shawn Harris: more advanced, the most advanced yet still adjacent. Enough that you’re able to get.

    1471
    04:28:06.350 –> 04:28:28.339
    Shawn Harris: you know, traction with it. You’re able to get people to change towards it. Change management in this space is, should not be underestimated. There’s a lot of fantastical things that we can do. But if we can, if we forget the idea that human change is required as a part of adoption, it’ll fall flat in its face. So never forget, Maya.

    1472
    04:28:29.000 –> 04:28:31.449
    Hyoun Park: Thank you. And on that note we are at the time.

    1473
    04:28:31.450 –> 04:28:31.840
    edward Corno: Have a good one.

    1474
    04:28:31.840 –> 04:28:38.190
    Hyoun Park: Thank you so much to everybody. I I know there’s so much more we could talk about. But thank you so much.

    1475
    04:28:38.420 –> 04:28:38.650
    Omer Har: Okay.

    1476
    04:28:39.660 –> 04:28:40.410
    Shawn Harris: Thank you.

    1477
    04:28:41.240 –> 04:28:44.689
    Julia Nimchinski: Insightful panel. Thank you so much when again.

    1478
    04:28:44.790 –> 04:28:51.450
    Julia Nimchinski: before we transition to our actionable part, agentec demo part, when what’s the best way to support you.

    1479
    04:28:54.020 –> 04:28:55.850
    Julia Nimchinski: and when is here?

    1480
    04:28:57.600 –> 04:28:58.330
    Julia Nimchinski: Hmm!

    1481
    04:29:04.930 –> 04:29:05.720
    Julia Nimchinski: When.

    1482
    04:29:06.000 –> 04:29:08.178
    Hyoun Park: Oh, I’m sorry. One last question. What was that?

    1483
    04:29:08.420 –> 04:29:11.820
    Julia Nimchinski: Yeah. The question was, what’s the best way to support you?

    1484
    04:29:12.220 –> 04:29:32.220
    Hyoun Park: Oh, simply keep in touch with what I’m doing here at amalgam insights. I’m an industry analyst. I also keep track of these trends you just heard from 5 of the smartest people I’ve ever met. But I I talk about this stuff as well. Let’s keep in track. Touch with me on Linkedin. It’s just a hyun park. You can find my profile easily enough.

    1485
    04:29:33.480 –> 04:29:34.410
    Julia Nimchinski: Awesome.

Table of contents
Watch. Learn. Practice 1:1
Experience personalized coaching with summit speakers on the HSE marketplace.

    Register now

    To attend our exclusive event, please fill out the details below.







    I want to subscribe to all future HSE AI events

    I agree to the HSE’s Privacy Policy and Terms of Use *