Julia Nimchinski:
Next up, please welcome Conor Grennan, Chief AI Architect at NYU Stern and CEO of AI Mindset, and Hyoun Park, VP at Calero, our favorite analyst and a community host. Welcome to the show, what a treat. How have you been, and what’s the latest and greatest? Conor, let’s start with yourself.Conor Grennan:
Yeah, thanks for… thanks for having me. I mean, the latest and greatest is just trying to keep up with the news. I… I don’t even know… it used to be that we would say, oh, did you see what came out last week? And… I mean, I’m checking, you know, I’m almost, like, living on a Twitter feed, just to sort of see when everybody’s dropping everything else.
The pace of it, I feel like since 2023, we’ve been seeing… or we’ve been saying, oh, the pace is amazing. I remember specifically, in kind of early March of 2024, thinking 4 things came out that week, and now it feels like several things every day, especially between Google and Anthropic. I think it almost feels like they’re just trying to outdo each other.
It feels like OpenAI is maybe in focus mode at the moment, but latest and greatest is just trying to keep up and also trying to think about, like, what’s actually applicable to, you know, the big companies we work with.Julia Nimchinski:
And can’t wait to get into it. Hyoun, take it away.Hyoun Park:
Yeah, and, you know, just to add on to that, I’ve recently started a job working at a software vendor where our day job is doing telecom and mobility system of record, and working with that. And it’s just amazing to see what else is all out there.
Just OpenClaw suddenly taking over the world over the past few weeks, where NVIDIA has to talk about it, and Cisco’s talking about it, you know, just everybody’s saying, here’s my newest, latest and greatest from an automation perspective. So there’s so much going on right now, and so it’s a pleasure to meet you, Conor. Thank you.
Yeah, and I’m really excited to talk to you about the whole idea of agentic OS, but also knowing your background. Like, I found your early career extremely interesting as well, from a truth-telling and discovering perspective.
Like, understanding that your early career, which we’re not gonna get into super deeply, but, you know, that meant, you know, Nepal, looking for the truth, figuring out situations where you were trying to trace down people, and human judgment was what really mattered at the end of the day, and you were trying to find what other people were missing. And that seems like such an appropriate starting point for thinking about AI.
You know, how do we think about these systems to either replace or augment that judgment and find what’s missing? So I’m gonna start by digging into a concept that I saw from your work: the idea that we are driven right now by the retrieval habit, the idea that we can just go out and ask for something and kind of get the answer, and that’s how we’re trained.
And, you know, how do you think about that in terms of how AI is being built? You know, what… is that a good thing, a bad thing? Does it help us with how we interact with AI?Conor Grennan:
I mean, it’s a great place to start. And thank you for having me. It was an honor to be a part of this, and I was listening in on the earlier sessions, too. I’m like, oh my gosh, this is all just such smart stuff happening here, so I love what everybody here is doing.
Yeah, so, and I mean, I should start by saying, like, you know, I’m not a technical person, and so when I hear, sort of, like, other, you know, developers talking about this, I think it’s fascinating, but that’s not me at all. I’m not… I have no developer background whatsoever. And I think you phrased it, like, really well, which is this kind of, like, retrieval muscle memory.
And I think one of the advantages that I have as kind of, like, the idiot in the room sometimes is just without having a deep computer science background, I kind of came to AI… my background is all in academia, so I was at NYU Stern Business School for, you know, a dozen years. So it’s all in, you know, in business and how business systems work and things like that.
And so to come to AI, and to realize you don’t really need any of that. Now, of course, I’m not talking about, like, actually building the technical side of agentic systems, that’s extremely complicated. But I just mean… the thing that I’m most fascinated with, which is how do you, like, level up all of humanity?
Because I think one of the things that happens, especially on kind of like a, you know, a day like this, and workshops like this… it’s very easy, and you find this too, I’m sure, is what they call, like, the curse of knowledge. Like, we kind of assume that because we know it, everybody knows it.
But most of the world… there’s billions of people that… they don’t know what OpenClaw is, they don’t know what Claude Code is, they don’t know what any of this stuff is. They are afraid of AI, they’re not really using AI. And that’s where I like to live. I like to live among that group, which is, by far, the majority.
I mean, even OpenAI, with their 900 million weekly active users, it’s, you know, 4% or something really low that are paying for any of these services. And if you don’t see the value of paying $20 per month, you can’t really call yourself a power user, right? I mean, like, every day I’m seeing, like, how is Claude only charging me 20… or Anthropic only charging.Hyoun Park:
Right?Conor Grennan:
a month. And obviously, I pay a lot more, but… but it’s that idea of where you sort of, like, started this conversation, which is. I think that we forget that for most of the people out in the world, they have this retrieval mentality. We call it Google Brain, which is the way that they see AI is just sort of a fancier search engine, and why wouldn’t they, right?
I mean, like, all of us have been dealing with Google Search for, I don’t know what it is, 20 years or something like that, and so our brains are just accustomed to… seeing a search bar, and knowing… without thinking, in the same way, if you… the analogy we use sometimes is if you see, like, a… you know, if there was, like, a baby in the room, you wouldn’t… your brain wouldn’t say, like, wait a minute, so how do I… how do I talk to this?
No, you just, like, instantly… instinctively, in the same way, I know how to pick up my.Hyoun Park:
I’m not.Conor Grennan:
pick up my phone, or anything like that, or pick up a pen. Our brains instinctively, muscle memory, this is hardwired neural pathway stuff, look at this, and we treat it like a search engine. Give it a command. And then, as you say, retrieve information. And then you walk away. That’s what’s fascinating to me, is that how do we break people out of that mold?
Hyoun Park:
Yeah, and it seems like we have to move from simply asking questions to having more of a either journalistic perspective, or maybe an epistemological perspective of, like, seeking what is truth? What am I really looking for here? And so how do we break into that deeper thought process of figuring out what you are actually trying to do.
You know, I know some of us take it for granted, but, like, having consulted on this, and having had to break through this with a lot of people, you know, what do you recommend for just a starting point on how to take that next step? Think one step deeper.Conor Grennan:
It’s the right… it’s the right question, right? Because, so we have a company, AI Mindset, and we work with, you know, some of the biggest companies in the world on this, and we have to do it at scale, right? So how do you kind of train 10,000, 20,000 people on… kind of on AI, when they’re gonna be complete beginners in that group, and total power users in that group.
So the way we do it is to, first of all, realize that the same kind of, like, I call it training, it’s not exactly training, but, like, the way that you move people along is the exact same for total beginners and power users, and here’s why. For total beginners, it kind of stands to reason, you know, the way that we do it is a behavioral method, we get people, like, realizing.
hey, your brain thinks it’s this, your brain thinks it’s looking at this, it’s actually not. Your brain thinks, you know, you think it’s, like, learning calculus, it’s much more like a treadmill that, you know, it’s also, you know, it’s much more like changing behavior, all that kind of stuff, right? This is even what we do with, like, Microsoft and Google and everybody else.
But then, how does that apply to power users? Like, you and me, like, I’m guessing you use this a million times a day, and, you know, I do as well. Claude went down for a few minutes this morning, and I just stopped working, basically. So, like, what does that actually mean for people like us? Well, to me.
the biggest problem, and it’s this… it’s interesting, because it goes back to this whole curse of knowledge study that was done in 1990 that Chip and Dan Heath made popular, and that’s the idea that if you know something really well, you can’t understand why somebody else doesn’t know it, or you just assume that somebody else knows it, right? And that’s a big thing, because for power users.
I’m sure you’ve run into this, too, with your teams, or people that you know that aren’t using it. What do we say? We’re like, guys, just use it! Just try it! Just give it your use cases, and people are like, yeah, but, like, what? Like, how? And you’re like, just talk to it!
That’s all you… you don’t need to know anything, you don’t need to be technical, just go… So, the reason that… well, the work we do, it’s like a course, right, for a big enterprise. The reason that we think it works for power users, and that’s the holy grail, is that it gives power users the understanding of what other people who aren’t using this are actually seeing.
And, like, what they’re seeing, it’s not because they’re lazy, it’s not because they don’t know. We argue there’s no learning curve to this at all. All you have to do is talk to it. So what’s the problem? The problem is. people are treating it as if it’s like a digital transformation.
People are treating it as if it’s a learning curve that you have to learn like you learned, you know, SQL or Excel or whatever it is. But it’s not. It’s much more like a treadmill. And the treadmill, all that matters is you get on that treadmill and you run. But that’s hard, because your brain, your caveman brain, doesn’t want to run on a treadmill.
It prioritizes quick rewards and conserving energy, right? So, that’s why we get off a treadmill. It’s very similar with AI. People don’t know how to use AI, not because there’s a learning gap there, it’s because there’s a behavioral gap.
And so that’s why, when we’re working with, obviously, people who are kind of beginners, but even power users, we’re giving power users the sense of, like, hey, here’s why your team isn’t using it, and it has nothing to do with tech or laziness or anything like that. It’s just that their brain has a hard time looking at a blank box when somebody says, ask it anything.
That’s not how your brain works. Your brain struggles with that in the same way it struggles with too much choice. It struggles with, you know, sort of like you giving it a blank box that looks like Google and saying, like, hey, you can just talk to this thing. Your brain doesn’t compute that.Hyoun Park:
Yeah, it seems like… I’ve seen this even with extremely expert and technical users, that they look at generative AI and they immediately go to creating Boolean statements, if this, then that, or very SQL-based or conditional types of statements, rather than using natural language and exploratory language.
And I think You know, that… we’re creating… I’m curious on what you think about this, whether we’re creating, like, maybe a new type of social contract or social policy, of how people and machines work together. It’s no longer simply. I make a statement and you tell me this. Like, how do you teach that new type of interaction, and how do you get that established, you know, at scale, going.Conor Grennan:
Yeah.Hyoun Park:
work.Conor Grennan:
so you… I think you’ve just nailed the exact problem, right? Which is… and it’s one of the first things that I say in large groups and at scale, which is: the reason that this is hard is not because you don’t know how to use it, it’s not because people don’t have access, it’s not because they’re lazy, anything like that.
It’s… and this is… we’ve done a ton of research on this, this is why what we teach isn’t tech, it’s completely non-tech. And the reason is that, your brain has a hard time treating software like a person. Because why would you?
In the same way, sort of like, again, if a puppy walked in here, you wouldn’t, you know, you don’t accidentally treat it like a college professor or something like that, like, because your brain has neural pathways. And that’s how your brain works.
And that’s what’s so interesting about it, is that it actually… the tool is incredibly easy, which is why, you know, the OpenAIs and Microsofts and Googles and Anthropics are having a hard time driving really wide-scale adoption. And by the way, when we say adoption. We don’t just mean weekly active users, because if weekly active users are just, like, using it a few times a week.
you and I know, nobody that really uses AI could be confined to using it just twice a week. It would be insane! So… so what does that mean? It means that when we think about what is actually happening, like, why are people sort of, like, struggling, to your point about, like, is it a new kind of social thing? And I would say, yes. Now, I kind of, I want to paint a picture here for a second.
Like, if, you know, if you… so, one of the… one of the kind of, like, analogies we use is, like, if you are sort of, like, planning a trip to Costa Rica, and you go behind one door, and there’s just, like, a Google search engine, you just type in, top 10 things to do in Costa Rica. Then it gives you things, and then you go plan your trip.
But if you walked in, and instead, there was the head of the Costa Rican Tourism Board, you would never say, give me the top 10 things to do in Costa Rica, because he’d be like, well, who’s going? What do you mean, top 10 things? And you’d have a conversation, and it’s very value-add. So what’s the problem? The problem is.
Your brain cannot look at a box and say… it’s any more than you can, like, look at a tree and be like, oh gosh, I wonder if that’s actually a… your brain doesn’t work like that because of neural pathways, and the big research that we’ve done, which I think is interesting, I don’t know if anybody else does, is it works that way because you have to free up your prefrontal cortex.
If you think about when you’re first, like, learning to drive, right, it’s hard, right? My son just learned to drive, so I know this. you’re focused, you’re thinking, like, oh my gosh, you’re paying tons of attention, it’s… you’re using your prefrontal cortex because you’re thinking about every single thing.
But now, when you and I drive, we listen to a podcast in the pouring rain, we have a conversation, because your brain has automated that function, right? So, in the same way, your brain has automated the function of looking at a box and thinking, I’m gonna give this command and get a response, and I’m gonna walk away. Same with Excel.
Like, with Excel, you don’t say, like, wait, but why does that formula work? Excel’s not gonna say anything back. But if you had an accountant, the accountant would say, like, well, Conor, the problem is with this… So, your brain actually has to pivot from treating software like software, to treating software like a human. That’s the giant leap.
So, I think you’re right, I think it is a new way of socially interacting.
Hyoun Park:
Yeah, the neuroscience of this, when you brought up the prefrontal cortex… you know, one of the things our brain does really well is prioritize. You know, we… we literally ignore 99% of what happens, because it doesn’t matter. Like, you know, the wall in front of me is gray. I expect that it will continue to be gray, going forward, so I don’t have to actively think about that.
Whereas in a pure computing world, you must, you know, keep processing over and over again that that wall is gray, and it wastes a whole lot of computing, and a lot of thinking. So, I think we have a similar issue with generative AI, in that we have to now figure out what we can take for granted and what we don’t take for granted.
And it seems that a lot of that has to do with the table stakes of what we know. You know, if you hear the hype, they’ll say, you know, doctors and lawyers and everybody else are completely gonna disappear, but it seems to me that that grounding is actually going to be really important, from a grounding and prioritization perspective.
I’m wondering what you’ve seen from subject matter experts and how they use generative AI and agentic AI differently.Conor Grennan:
Yeah, it’s a really, really good point, because, and I just got this question a few days ago, working with… we were working with one of the biggest, like, private equity firms in the world, and obviously with private equity, they’re trying to sort of, you know, span a lot of different industries. It’s not just like, oh, we know how we work here at IKEA, or Walmart, or something like that.
It’s like, hey, this is, like, a lot of different things. And they had a question kind of like… kind of like how you were phrasing, and I thought it was interesting, because I was kind of pointing out that I’m like, it’s not really a learning curve. If you’re just good at conversing, just good at managing people, you’re going to be good at this, and you’re certainly going to be good at agents.
And why is that? It’s because you have to sort of, like, assume that this person that you’re working with knows something, and you have to assume that they don’t know something.
And it’s that, you know, when you have a new person come in the building, you’re gonna learn that pretty quickly, and some things they’re just gonna learn, but there’s always gonna be some weaknesses in one of your colleagues. So with that, it’s kind of easier. Now, AI is a little bit different.
In that, for some things, it’s gonna act like a PhD, and for some things, it’s gonna act like a 10-year-old. And so, that’s why, you know, and we’ve sort of, like, kind of gone over this in, you know, previous to this, but it’s all about context layers, too, right?
It’s like… which is one of the things that I think gets tripped up a little bit in something like… like, if people have problems with, like, Copilot, like a Microsoft product. Because there’s a sense that people are treating, like, Copilot, like, just like, kind of like a search engine, like, a good search engine.
Where it’s like, hey, I’m trying to sort of, like, pick out… I need to write a document on this, can you look at… the problem is your SharePoint or whatever, is like 10,000 documents. So it’s much… and you would never do that to a colleague. Say, like, hey, go through… go into our file room and, like, look for something, and yet we think that AI can do that. And to a certain extent, it can.
What AI is missing is what’s important. It doesn’t really know, it doesn’t have that judgment. So the strange thing about, between, like, working with just, like, call it a chatbot versus working with an agent, is that for some things, an agent is really gonna fool us. It’s really gonna be like, oh my gosh, this is amazing, it did… and I work with a lot of law firms.
And the problem with working with law firms, and they’ll tell you this too, is that, like, it can get it right 99 times out of 100. That’s not good enough for a law firm. Literally, law firms have to get everything right. And so when they say, well, should we use an agent for this? I’m like. No, you’re going to want to go to a more deterministic thing. Now, agents can.Hyoun Park:
Nope.Conor Grennan:
brainstorm once you know you have the right information. But don’t you think, like, so much of that is just, like, choosing the right context for it?Hyoun Park:
Yeah, I… that totally makes sense. I… you know, I think we struggle a lot with trying to figure out what is stochastic, what is deterministic, you know, probabilistic versus deterministic at the end of the day. And it’s hard figuring out how to ground these models in the… with the right facts, with the right truth, with the right assumptions to make everything happen.
You know, obviously the models that exist are super powerful right now, but I… I think that’s going to continue to be a problem as we build these agents and try to figure out what to do with the latest and greatest from a model-based perspective. Oh.Conor Grennan:
I agree. Well, that’s why I would sort of, like, turn it over to you, too, on that, which is, like, like, working… like, there’s sort of, like, this sense that, like, working with agents is like working with a person, and in a way, yes, right? For sure.
But it’s… but I think people, at least in my world, and my world is pretty expansive, because we work with… it feels like every different industry out there, and a lot of different companies, so it’s not just one thing. But I feel like people are just sort of saying, like, just tell me what this is. Just tell… and then we’ll use it. And the hard answer is, it’s not like that.
It’s… there’s certain things it’s gonna be outstanding at, but you still have to check the quality on it, which gets tricky when it’s right so many times. And then with an… again, with sort of like a, you know, with a… with Agentic, in the Agentic world. I don’t know, like, I tread very, very lightly.
Like, every time I’m using, you know, Claude Coworker or something like that, I always duplicate the file, because all it… it happened one time where it deleted, like, 3 files. I’m like, oh my god, like, wait, what? That was in the very, very early days where I didn’t realize it could do that. And, you know, so now I’m always like, just test it.
So I use the value of it, but it’s almost like… you know, you can sort of, like, have somebody that’s great at one thing, right? But you wouldn’t want them to also do this other thing in your organization. And agents are kind of like that. They… they’re very, very good at certain things, and they’re not gonna… but that’s the whole jagged frontier thing.
It’s sometimes hard to understand that, and it only… when people say, well, how do you know? And I would say, well, how do you know with a new colleague? You just keep testing it out and testing it out until you just get… it becomes second nature.
I’m sure you have colleagues that… You just know that you can give them this task, and other ones are brilliant, but you have to guide them a little bit. Do you know what I mean? Like, that’s, to me, like, the tension.Hyoun Park:
So, the funny thing about that is, I feel like, anybody who’s ever worked with databases knows that you have the same problem. A database in production, you never want to delete it, obviously, but I feel like everybody, every database worker at some point has made the mistake of deleting data in production in some way, and agents are actually not very different in this regard.
They will do their job, you know, relentlessly based on their instructions, and if that includes deleting things they think are irrelevant, so be it. Like, you’ve got to put those guardrails in place just as you would for a two-week, you know, new employee who is working with your special work data.Conor Grennan:
Yeah, that’s right, and one of the things about that is that even a human will remember that more than AI will, you know, and so that’s, like, the really hard thing.
It’s just all about… you know, and we always say, like, human in the loop, and that’s sort of like a throwaway, you know, thing that just people say, kind of in one ear, out the other, but… but the importance of that actually is really critical in that, you know, my son, who’s 17, as I said, sort of, like, we do a lot of this work together, kind of thinking about, like, the future of work, because, you know.
that’s really at stake, I mean, like, we sort of say, nobody’s coming to save you, do you know what I mean? Like, companies have a fiduciary responsibility, and it is not to you. And so, you know, kind of like the future employees, but what we say a lot is that, like, you know, these tools can… come up with, you know, senior-level output, but not senior-level judgment.
And it’s not just like you can continue to trust them over and over again. Now, I’m a huge trust guy, like, I trust a lot, but that’s also because I… I know that I’m… I have to continuously check the output, and if I get lazy, that’s on me.
That’s the… that’s where… it’s not that it’s different, but there’s certain colleagues that… you just know once they start getting something right, they’re just gonna continue to get it right. But I always think of agents as almost, like, more like a car than a train.
Like, a train is very reliable, it’s gonna get there, all that kind of stuff, but… And a car will break down, and a car will do all these other things, but it will take you into places you never thought possible, too.Hyoun Park:
I thought it was really interesting how you said, like, senior-level out, you know, output, but not necessarily, like, oversight.
You know, one of the things I feel like is different from the AI this time around is that most technologies, like the web or mobile or a lot of low-code or usability things, have been driven by developers or by low-level employees who just understood how to make things work, and then forced the rest of the company to do it. This time around, it seems like executives actually have an advantage.
I’ve seen stats that say something like 70-80% of executives are using AI semi-regularly versus, call it,
20% of line-level employees in the workplace, and that’s probably increased by now, that’s probably a few months old, but I think the basic idea still stands, that it seems like executives have an advantage, because AI has been treated, and I think this might actually be your term, as, like, an intelligent intern.
To support all sorts of different tasks, and executives have a better idea of how to manage, how to define those guardrails, how to define the outcomes that they want for complex processes, whereas that might not be as strong of a skill set at lower levels in the organization.Conor Grennan:
Yeah, that’s an interesting way to frame it, and I think that’s… you’re on to something there. I’ll tell you sort of, like, what we’ve seen. The skills that you’re talking about are all, I think, absolutely accurate to what we’ve seen. However, the only thing I’d probably sort of, like, insert there is, like, it’s the initial behavioral chasm.
So we see a lot of people who I know would be phenomenal at AI. But they haven’t leapt that chasm yet. They haven’t gotten their brain to understand that you can talk to this like a person. We, you know, when we measure power users, we don’t measure weekly active users. It’s a decent proxy, but it’s not a perfect one.
It’s length of conversation in every single kind of… and the reason, as you can imagine, is when people have a long conversation, that means that they are treating it like a person, because you talk to a person over and over and over, you know what I mean? Like, you go back and forth and iterate. You don’t do that with Google. So that, to us, is the unlock.
So once you get over that chasm, then all those skills come into play. One of the things that we did… we worked with JP Morgan, with their private bank, and one of the things that we were seeing was that, like.
some of the older individuals were the big money makers, the big revenue drivers, whereas the younger individuals were great at AI, but you can’t just, like, forget about them, because they’re the ones who own the book of business, they’re the ones that own, like, the… so you can’t just be like, well, they’re not good at AI.
Like, no, no, you have to get them good, so how do you get them over this behavioral gap? So one of the things that we’ve seen, and kind of like what you point to, which is, like, why are senior-level people sometimes good at it? What we actually see is, we see senior-level people very good at it, entry-level people very good at it, and then the giant middle is… tends to not be as good at it.
And the reason that we’ve seen, at least in our research, is… it’s actually not a sophistication gap, it’s an incentive gap. So, in other words, like, senior leaders know that they have to figure this out, right? They’re like, AI, it’s big, right? So, like, you know, you have your senior leadership team at Walmart or wherever being like, guys, we gotta figure this out.
Entry-level people, and I’m going to put students in this category, too, are extremely incentivized. Why are students so good at it? Because if they use AI to write a better paper on Hamlet in, you know, 5 minutes, their future prospects in life go up overnight, because they’re gonna get into a better college or something. That’s not the case with somebody 5 years into their job.
It’s not like they’re gonna do a great day of work and all of a sudden get promoted to SVP. Entry-level people, it’s the same kind of thing: what are you trying to do? You’re trying to impress the heck out of everybody, so you’re using every tool you have. You have a huge incentive.
So what we find is actually it’s not a generational thing, it’s not a demographic thing, and it’s not a hierarchy thing, it’s an incentive thing. And if you can get there… what we do with organizations is, how do you move people from encouragement to expectation of use, right?
Like, that’s why we do these big senior leadership workshops, too, because all these CEO memos from, like, Duolingo and Fiverr and Shopify, like, hey guys, we gotta use it. It’s like saying, just eat less and exercise, everybody! Come on! Guys, come on, we gotta get… start eating healthier! It doesn’t work!
We know that instinctively, but it doesn’t work with AI either, because it’s a behavioral shift, and just telling people to use it won’t work any better than saying, hey, just start eating more salads.Hyoun Park:
Yes. And… and there’s an actual… you know, there’s a limit to how much value you can get just by eating salad as well.Conor Grennan:
Yeah, it’s fair, yeah, yeah.Hyoun Park:
Yeah. So, I think we have an interesting challenge here, though, when we talk about the youngest people who are learning to use AI, of course. You know, how do we make sure that they don’t just become orchestrators? You know, you might use AI to write that Hamlet paper, but how do you make sure they also know what Hamlet means.Conor Grennan:
Yeah.Hyoun Park:
It matters before they go out into the world.Conor Grennan:
I’ll tell you what, if you figure that out, let me know, because, like, I get that question a ton, like. what is it going to do to critical thinking? Don’t you think it’ll diminish it? I’m like, I do! Yeah. I don’t know what else to say, and I’m a hu… and you know, I’m a huge optimist, all that kind of stuff, but I think it’s going to diminish critical thinking.
And I don’t want to sort of give up on it, but I also think that, you know, if we’re talking about, like, education, I think you have to start incentivizing people. Like, we’re in the MBA program, right? Like, when people are applying to go to Bain or McKinsey or BCG or something like that, what do they do? They case over and over and over again. And casing isn’t one thing.
Casey is, any question can come at you, because if you’re at McKinsey, you don’t know what industry… like, it’s… that’s why they ask questions like, how many basketballs can fit in a 747? Like, they want to see how you think, no matter if you’re working in insurance, or finance, or ops, or legal, or whatever.
I think we have to incentivize that kind of thinking, because that’s the kind of thinking that gets people sort of, like, really thinking, like, okay, I need to use this critical part of my brain, because that’s who this company is going to hire.
And I hate to sort of, like, put it so crassly, but I don’t know how else we do it, because right now, young people are incentivized by grades, because that’s the incentive structure we’ve given them. But I think you’re hitting on one of the absolute key things, and I think we have to think about, like, what incentivizes critical thinking, not just, like, fingers crossed, you know?Hyoun Park:
Yeah. You know, I did both a liberal arts degree and an MBA, and, you know, it was really important to learn pattern recognition, and what to do, what not to do, historical trends, you know, all of that understanding of context. So, as we wrap up, you know, the last question I wanted to ask you was:
You know, at the beginning, we talked about how human judgment ended up being the key differentiator in a lot of the work that you’ve done in your life.
You know, looking at the world of 2026 and beyond, where the agentic OS is there, data is everywhere, context can be whatever you put in, you know, what is the biggest judgment instinct that you think leaders are most likely to overlook, with all of the, you know, panoply and the depth and the breadth of resources that are now out there, both from an AI and agentic perspective?Conor Grennan:
Yeah, I think it’s just laziness, you know what I mean? I think they start believing that… like, again, like, if your calculator gives you a wrong answer, you throw it away, right? I mean, we’re just so used to deterministic software that if something gives you a wrong answer, you’re like, it doesn’t work.
Or you trust it completely, one or the other, right? That’s the… that’s the big thing. I think people have to get in the mindset of it’s a human. It can make mistakes, you’ve hired somebody, they’re probably gonna make very, very few, but when something’s mission critical, you have to double-check the work. And that’s the sort of new way of thinking.
It’s not… Excel, but it’s also not, like, a human. It’s something in between. That’s the big leap that I think we have to get over.Hyoun Park:
Oh, one quick question from the audience before we leave. How do you motivate students to think in this… in this world?Conor Grennan:
We think about it all the time. We do this all the time. On the side, I, like, love that. It’s kind of a passion of mine outside of working with giant corporations, because I’m, like, really deeply concerned about the next generation. I think that we have to incentivize them.
And again, I know that kind of sounds a little crass, but I think if we just say, guys, critical thinking’s important, but you really need these A’s to get into this good college, which one are they going to do? They’re obviously going to do this. So instead, we have to say, listen.
If you want to, like, get the next job that you need, what companies are looking for is not just whether you use AI, that’s table stakes, we already know. It’s, can you sort of, like, tackle some critical thinking problems without AI, or how would you bring AI in as a thought partner? You have to incentivize them with, like, how that’s going to sort of impact their role and everything else.
I think it’s the only way forward. I know, again, it’s kind of crass, it’s not perfect, it’s not liberal arts thinking, I’m a liberal arts person. But I don’t know how else we do it.Julia Nimchinski:
Phenomenal session, Conor and Hyoun. Thank you so much again, and how can our community best support you? Let’s start with you, Conor.Conor Grennan:
Oh, thank you so much! What a sweet question. Yeah, I mean, you know, we engage a ton. Like, I have a very active presence on LinkedIn. I kind of post there every day. Always love when people come and visit and engage with comments. I always reply to all comments, things like that.
So, yeah, if you want to kind of come on LinkedIn, and then obviously, like, if you’re part of an enterprise that needs help, this is what we do with some of the biggest enterprises in the world with my company, AI Mindset.Julia Nimchinski:
Thanks so much, and Hyoun, how about yourself?Hyoun Park:
Yeah, so, you know, nowadays I work at a company where we focus on telecom network and SaaS expenses. If you’re interested in digital transformation, I’d love to talk to you about what your strategy is and how we can help you with that. And if you need AI strategy help, you should talk to Conor.Julia Nimchinski:
Awesome. Thanks again.Conor Grennan:
Thanks, everybody. Thanks, Grace.Hyoun Park:
Thanks.