Julia Nimchinski:
Next up, we’re in for a real treat here: Parth Patil, Netrunner, Applied AI and Agent Systems, Office of Reid Hoffman, Technical Advisor at Blitzscaling Ventures, and he’s joined by Sandy Diao, founder and Growth Advisor of Drift International, and formerly growth at Meta, Pinterest, and Descript. How are you doing, and what’s on your agenda?
parthpatil:
Thanks for having us. I mean, pretty excited to be here, talk about AI.
Julia Nimchinski:
Awesome. Sandy, take it away.
Sandy Diao (Drift):
Awesome! Well, to briefly dive in with your question there, my agentic OS these days is Claude Code, and my favorite skill in that is the master skill of all, the skills creator, which allows you to essentially seamlessly build very precise, specialized workflows. It’s definitely something I’m continuing to use, especially given my work in growth and marketing.
So, yeah, with that, let’s dive in. Welcome, everyone, to our panel on Building Agentic Workflows. I’m Sandy. Julia just did a great job of sharing a little bit about what I’m up to these days, but formerly worked on growth at companies like Meta, Pinterest, Descript, and these days advising companies through my firm, Drift International.
I also teach growth marketing as faculty at UC Berkeley and Stanford, so definitely seeing a lot of different teams and companies talk about becoming AI native.
I found in a lot of my recent conversations, when I actually get into the details, that many of these teams are actually still using AI at the surface level. Most of that adoption, I see, continues to be browser-tab-based AI chat, like ChatGPT, and there’s nothing wrong with that, right? Because writing emails, writing documents with AI is still very, very powerful.
And maybe some of these teams are starting to use some of these point solutions for things like image generation, or a bit of lightweight video media editing, and then, of course, for software development as well. But I think one of the challenges here is that, for most of the teams that I’m seeing today, it still hasn’t yet dramatically changed the work that needs to be done.
So, in our session today, I want to dive deeper and see what it looks like to work alongside not just AI tools, but AI agents, and really understand what breaks down when teams try to restructure their companies around using AI. I honestly can’t think of a better person to unpack this with than Parth.
And maybe just as a brief intro, Parth, to my understanding, you know, you kind of have a very unique path into this space.
First data scientist, I believe, at Clubhouse, and, you know, a sort of early adopter of GPT, and you came out the other side running AI for Reid Hoffman and doing many things like building Reid’s digital twin, which I would love to hear a little bit more about as well. Given that this is such a unique career path, I would love to hear about it a little bit more directly from you, Parth.
So let me hand it over to you to share a little bit more about how you got here, and anything else you want to share.parthpatil:
Yeah, it’s been a wild, like, 3 years since ChatGPT now, but basically, I’m, like, a startup hacker, mostly. I started off in, like, finance and biz ops early in my career, and then I transitioned into data analysis, like, 8 years ago.
And so that was when, you know, high-speed databases came out, Snowflake, stuff like that, where you could answer a question about the business in a split second. And then I went and worked at Clubhouse for 2 years, and I built out a lot of their initial dashboarding and data analysis work at Clubhouse, the social audio app from the pandemic, if any of you remember.
And while I was there, ChatGPT came out. And when ChatGPT came out, I realized, like, okay, this is very much the first conversational computer, or at least now we have a conversational interface. Like, we can actually talk to the computer instead of having to click around and remember where we put things.
And… and then I realized, like, I’m gonna spend the next 10 years, like, working on these, and adjacent, and the implications of this, this kind of technology, and how it transforms, like, how we work with the computer.
And so I spent, like, 2 years just independently hacking, burning my life savings, just exploring what GPT could do, across both work and personal projects, creative projects, work-related stuff. Like, how good is it at doing data analysis? And it turns out GPT-4 was way better than me at doing data analysis. And I was like, okay, well.
That means that, like, the… as far as I can tell, the field itself is about to become subsumed by these… these models, and then we’re gonna… we’re gonna be, like, orchestrating these models to do analysis, as opposed to, like, writing queries by hand. So a lot of those learnings stacked up in my independent exploration with GPT-4. But also the models got better over time.
They got better at coding; the context windows grew 15x in just the first year after GPT-4. And so it was, like, clear to me that the capabilities coming online are growing faster than I could even track. And I spent 14 hours a day talking to language models, like, thinking about what they’re capable of.
While they’re getting more capable faster than you can even think of what they’re capable of, it’s been a very trippy kind of experience, and for me, it means that almost every 2-3 weeks I have to reevaluate my workspace, how I’m interacting with the machine, and refactor where I am. Do I need to be in the loop? Can I take a more hands-off approach?
Is there, like, missing tooling that unlocks more agentic work? And, yeah, I’ve been working with Reid for the last 2 years. We met because I started working on this project, which is like a digital twin of Reid.
Under the hood, it’s really just another AI agent powered by language models, and… since I’ve tried all the different agent form factors, it’s been fun to work on one that’s, you know, tries to emulate one of the greatest leaders in technology, so… It’s been a great journey, and now we kind of… we kind of just do the same thing.
We’re, like, exploring the capabilities of generative AI, everything from coding… my focus is coding agents, but I cover image and video models as well, just kind of playing with the technology, seeing what the implications are. Yeah.Sandy Diao (Drift):
That’s a really interesting path, and something interesting that you shared there: it sounded like a lot of your original impetus for getting hands-on and, you know, testing and learning was driven by your own personal curiosity, you kind of testing GPT to see whether or not it does the thing that you professionally did in a way that was better than you, and you had that realization.
I think, you know, kind of going back to an earlier observation that I mentioned, a lot of teams right now, they’re kind of stuck trying to figure out how to go from this traditional set of workflows and sort of skills that they’ve honed, and transitioning to become more, you know, quote-unquote, AI native, right? And a lot of teams are trying to kind of overturn that pretty quickly, right?
But what’s really hard about it is that they understand something so deeply that they don’t necessarily know how to translate all of that.
And so, maybe one thing that would be helpful for the audience here is just to understand what actually looks different when you’re working day-to-day and you have agents working for you, in comparison to being in more of that driver’s seat, and that earlier experience you were mentioning, where you were just sort of prompting or having conversations with AI. What is kind of different, how has that workflow evolved for you, and how does that parallel what companies might be seeing?
parthpatil:
Yeah. Yeah, definitely in the GPT-3, GPT-4 era, it was more, like, prompting, being very in the loop with the AI. And I think what happened is, towards the end of last year, the models, the coding agents, got very good. Halfway through last year, I realized that the core of any powerful general agent is actually an agent that can code.
Because everything starts looking like a software problem, right? Like, analytics, being able to pull information from a website, analyze it, use math to solve that problem, ends up… that turns out to be code, right?
So you would use SQL to extract information from a database, you’d use Python to… slice that data, visualize it, and seeing the language models get better at code, then I realized, actually, it’s very good at… like, because it’s good at code, it’s good at general problem solving, it’s good at using the web, it’s good at browser automation. Like.
June of last year, I was using Claude Code, and I burned, like, 600 million tokens that month, and it was my… it was the month I realized, like, oh, I can… and people were like, what are you doing with this? I was like, well, it can, like, do your expense reporting, right?
And I have so many AI subscriptions that my expense reporting takes a while, and one day I just opened my laptop, pulled up Claude Code, and was like: Claude, we are a couple months behind on expense reporting.
I need you to go into my email, extract every single receipt, categorize it, label it with the vendor, the amount, so I can easily see which, you know, which transaction is associated with which receipt. And then I spent the rest of the night just watching Netflix and dragging receipts into my old expense reporting tool. Because, look, the tool is old, which is why it’s so hard.
I wish it was all entirely automated end-to-end. But seeing how Claude built the pipeline to automate something that would have taken me probably 3 days, and it built and ran that pipeline in 30 minutes the first time. And then I was like, great, let’s set it up so that we can do this every month moving forward.
And now, for the last year, I’ve been using coding agents to do my expense reporting.
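To make the expense-reporting story concrete, here is a minimal Python sketch of the rollup step such a pipeline ends with. Everything here is a hypothetical stand-in, not Parth's actual setup: the `Receipt` fields, the vendor names, and the category map are illustrative, and a real pipeline would pull messages via an email API and use an LLM to extract the receipt fields first.

```python
from dataclasses import dataclass

@dataclass
class Receipt:
    vendor: str
    amount: float
    category: str = "uncategorized"

# Hypothetical vendor -> spend-category mapping.
VENDOR_CATEGORIES = {
    "openai": "AI subscriptions",
    "anthropic": "AI subscriptions",
    "zoom": "software",
}

def categorize(receipt: Receipt) -> Receipt:
    """Label a receipt with a spend category based on its vendor name."""
    for key, category in VENDOR_CATEGORIES.items():
        if key in receipt.vendor.lower():
            receipt.category = category
            break
    return receipt

def monthly_report(receipts: list[Receipt]) -> dict[str, float]:
    """Sum amounts per category so each transaction is easy to reconcile."""
    totals: dict[str, float] = {}
    for r in map(categorize, receipts):
        totals[r.category] = totals.get(r.category, 0.0) + r.amount
    return totals
```

The point is that once the extraction step is done, the rest of the workflow is plain, testable code.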
And that was the learning there: code is a general-purpose tool that can be used to solve a huge range of ops problems, analytics problems, browser automation, data visualization, pretty much anything having to do with the movement and transformation of information. Code is a very huge accelerator, and the better the models get at code, the better they are at helping you with those general tasks.
I think we’re kind of right now held back by the user interfaces of the last 10 years. But… But I also noticed that the AIs are getting better at navigating those user interfaces. I have a preference for working with tools that have an API, because then you can skip clicking around. But the whole mindset I have now is, like, I don’t want to click around someone else’s tool.
I actually would prefer to not even use… not even see the, like, see the interface at all, I’d prefer the work output. Like, I’d prefer the expense reports are just filed. I shouldn’t have to click around the screen to get the… to file them.
So that’s been a mindset: why do we need the UI if the agent is gonna ultimately circumvent the UI, and I would prefer to talk to the agent and get the work output? And as for how this translates to organizations: I’m lucky I get to spend all my time exploring these capabilities and applying them.
I think we need to create some space for people to apply it to their own workflows. There’s a tendency to say, oh, we have to use AI, and that comes from above, but actually, everyone knows their own job and the pain points of their own work pretty intimately.
It’s like, I know what bothers me about the work that I do, that I… the kind of stuff that I would rather delegate. And, so I think there’s more of a bottom-up approach that’s, like, that you should encourage, like, you should create an environment in your team, in your company, where people are discovering how AI does their, like, how AI works for them.
And then that… then you share that in, like, a Slack channel, or, like, a Teams kind of, team kind of environment. Oh, here’s what’s working, here’s how I solve this problem. And then you’re cross-pollinating those insights across the team.
And then you end up with a more AI-native org, one that is encouraging and cross-pollinating what’s working, figuring out where it’s not good. I do think that a lot of people are rushing to apply AI in domains where it’s not yet good enough, or where the risk of failure is a little bit high.
And so I think about starting with internal tooling, starting with internal projects, where you’re not gonna put out something that’s gonna, you know, risk your brand, or something that has production implications. Like, I wouldn’t say start by vibecoding some production app and then exposing it to a million people. That would be insane. But that doesn’t mean you don’t do it.
It means that you do it in a safe internal environment where the first users are your teammates, the people that are going to have the highest bar for quality, and then learn if AI is at the level you need to do that task.
And then kind of expose it to, you know, larger concentric circles in your org as you learn what it’s good at, what it’s bad at, where its limits are, yeah.
Sandy Diao (Drift):
Yes, that makes a lot of sense. And actually, speaking of managing risk, it sounds like for, you know, teams that are trying to figure out what some of those earliest use cases and test cases are. It sounds like, you know, maybe you’re recommending, you know, figuring out what some of the lower risk categories are, you know, for example, stuff that doesn’t touch as many customers to start with.
I think, you know, on this point, one observation is that a lot of the companies that I work with, particularly the enterprises that are sitting on years and years of private customer data, have this fear that when they start giving agents access to even, you know, the smallest bits of data, things could go terribly, terribly wrong.
And would love to actually understand your perspective here on that. You know, are there fears warranted or not, and are there any solutions for them to consider?parthpatil:
Yeah. So, okay, if I think about the opportunity before we think about the risk: one of the best applications of language models is creating structure from unstructured data.
So you have, like, a blob of information from a call, or a chain of emails, and you dump that into an LLM call and extract the biggest takeaways, the action items. So, creating structure from the unstructured. And a huge pain point is that we have a lot of unstructured data and we don’t really know how to act on it.
It’s a pretty low-risk thing to take that unstructured data, enrich it, and extract the most important insights, the most important action items, the stakeholders. And because that’s just enriching data, there’s pretty low risk there. You’re not acting on it, you’re not sending communications.
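The enrichment pattern Parth describes, structure from unstructured data, can be sketched as a single LLM call that returns JSON. The prompt wording and the `call_llm` parameter below are hypothetical; swap in whatever provider client you actually use.

```python
import json
from typing import Callable

EXTRACTION_PROMPT = """Extract from the text below, as JSON with keys
"takeaways", "action_items", and "stakeholders" (each a list of strings):

{text}"""

def extract_structure(text: str, call_llm: Callable[[str], str]) -> dict:
    """Turn an unstructured blob (call notes, an email chain) into
    structured fields via one LLM call. `call_llm` is any function that
    takes a prompt and returns the model's text response."""
    raw = call_llm(EXTRACTION_PROMPT.format(text=text))
    data = json.loads(raw)
    # Enrichment only: we read and label, we never send or act,
    # which keeps the blast radius of a bad extraction near zero.
    return {key: data.get(key, [])
            for key in ("takeaways", "action_items", "stakeholders")}
```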
While an AI can send communications, I don’t think you want it to; that’s in the category of actions, I think, where it’s high risk if you get it wrong. But I like to think about the actions as: how high risk are they? What’s the blast radius if an AI were to make a mistake on this? And understand what’s the worst-case scenario.
So, like, it’s why I don’t recommend OpenClaw for enterprise yet, because while OpenClaw is very capable, there’s a huge risk in it making a mistake.
And so I think about, like, can the AI system take an action where either the blast radius is large, in which case we shouldn’t even allow it to take that set of actions, like, that’s not even in the action space we allow it to have access to, or is the action… is a mistake irreversible? So, I think about, like, can you Command-Z? Can you undo its action if it goes down a path?
And in the problem spaces where all of its actions are reversible, then actually you can increase the agency.
If the blast radius is low and the mistakes are, like, fairly, like, reversible, like, you can basically rewind its progress and get to a stable point, then you should actually go fast, you should be more parallelized, you should be… you should take advantage of the scale of the computer and, like, attack the problem with a little bit more aggressiveness.
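The triage Parth is describing, reversibility crossed with blast radius, can be sketched as a small policy function. The tier names and fields here are illustrative, not from any particular framework:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionPolicy:
    name: str
    reversible: bool    # can you Command-Z the mistake?
    blast_radius: str   # "low" or "high"

def allowed_autonomy(policy: ActionPolicy) -> str:
    """Map an action's risk profile to how much agency the AI gets.
    High blast radius: not in the action space at all.
    Reversible and low radius: run autonomously, even in parallel.
    Irreversible but low radius: keep a human in the loop."""
    if policy.blast_radius == "high":
        return "forbidden"
    if policy.reversible:
        return "autonomous"
    return "human_approval"
```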
Whereas on the other end of the spectrum, if you’re going down a decision path where you can’t actually come back, you can’t undo that, or if making a mistake would have a huge blast radius, or irreversible damage to your brand, or your reputation, or a system could go down, in those cases you don’t even allow it to take those actions; you want low agency on the AIs. So yeah.
Sandy Diao (Drift):
Yeah, that makes sense. And maybe just to build on that a little bit, too, like, another observation is for a lot of the companies I work with. The corporate work stack, if you will, the tools and the technologies that are being used for the work and getting our professional jobs done can sometimes actually be different than someone’s, like, personal productivity stack.
Like, you may personally be using ChatGPT, but at work, you may be using Claude Code, or VS Code, or Codex, or something different. And so, here, I think one of the areas that companies are really struggling with: I’ll point to one of the sectors that my clients are in, like financial services, for example.
What they built pre-AI was this process where, you know, humans can be accountable when something goes wrong. They have people, right? Like, a neck to choke, a little bit figuratively here, of course, when something goes wrong.
And in your view, let’s say that someone’s personal productivity stack, you know, has OpenClaw, and they have an agent that sends an email on their behalf to the wrong person. Just kind of a thought exercise here, but, you know, who is responsible in this case? Is it the individual that set this up? Is it the agent?
Is it the platform, and maybe sort of more broadly, how do we… how do companies actually think about this problem?parthpatil:
Yeah. You know, I do have OpenClaw agents in my own personal life, and they have made this mistake of sending the wrong email, or sending the email prematurely before it was ready. And what happens is, it’s sending it from my account, so it looks like it’s me, so it’s a negative effect on my personal brand.
And then I go back, and I’m like, okay, clearly the AI system can’t be trusted with send. Or at least the way I’ve currently configured it, it can’t be trusted with the ability to send emails, or with the ability to delete emails, for example. So then I went and scoped the set of actions.
I was like, took every single action it has access to, and then basically categorized it as, like, high risk, low risk, high blast radius, low blast radius, irreversible versus, like, reversible actions.
And for the set of things where I felt its aptitude is not high enough to, you know, execute on this, or the risk is too high, I basically revoked its access to all those abilities.
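A sketch of that scoping step: classify every action the agent can take, then revoke rather than merely discourage the risky ones, so the action space itself shrinks. The tool names and risk labels below are hypothetical:

```python
# Hypothetical inventory of the agent's abilities after a risk review.
ALL_TOOLS = {
    "read_email":   {"risk": "low",  "reversible": True},
    "label_email":  {"risk": "low",  "reversible": True},
    "send_email":   {"risk": "high", "reversible": False},
    "delete_email": {"risk": "high", "reversible": False},
}

def allowed_tools(tools: dict) -> set:
    """Keep only low-risk, reversible actions in the agent's action space.
    Everything else is revoked outright rather than guarded by a prompt,
    so the agent cannot escalate its own permissions around a guardrail."""
    return {name for name, traits in tools.items()
            if traits["risk"] == "low" and traits["reversible"]}
```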
You know, I had to make the mistake to realize that. And this is the thing: the model might be able to do the thing, but at a high level, you kind of have to test it. So now I’m testing it. I was like, can you send this email? Then I notice what OpenClaw does.
It’s like, oh, I don’t have access to the ability to send. But then it follows up with: would you like me to escalate my own permissions for this ask? And I’m like, okay, clearly the guardrail’s not working if it can just, you know, circumvent it.
So you kind of have to test that internally. I test it with my friends; I have my friends constantly battle-testing these systems, where it’s like, look, I’m gonna send you an email, and very likely something might go wrong, but I’m just trying to understand how good it is, what the success rate of this kind of action space is, before I, you know, let it run overnight, more proactively, where I’m not paying as much attention.
So, I don’t even think that’s, like, recommended for everyone. I think it’s the mindset of: accept where it’s imperfect, know where the bounds of its abilities are, and, you know, evaluate that every couple months, whether you’re gonna make it more agentic.
If you’re gonna make it more proactive, you’re gonna wanna constantly evaluate that.Sandy Diao (Drift):
Yeah, that makes sense. Would you then say that, in this case, the neck to choke, or the person that’s accountable, is going to be the orchestrator of the agents?
parthpatil:
100%. Yeah, so AIs can’t be held accountable, right?
So, like, you can’t just, when AI makes a mistake, be like, yeah, I’m gonna yell at the AI. Really, it’s my responsibility to go into the system and make a better system, to introduce the right guardrails, the right harness around it. AIs can’t be held accountable, so it’s the person designing the AI system that should be held accountable. And especially because increasingly a lot of this might feel like a black box, the person that knows the black box best, or at least can look into the black box and start making sense of it, and installing the guardrails, the limits, that person is probably the…
The orchestrator, yeah.Sandy Diao (Drift):
Yeah, makes sense. It’s funny, I’m sure all of us have our fair share of trying to yell back to the AI, of being frustrated that it’s not doing the right thing, but yeah, agreed, fruitless, and not helpful, necessarily, so…parthpatil:
It’s actually better, instead of prompting it in a frustrated kind of mindset, to be like: what context is missing? What tools are missing, such that you wouldn’t make this kind of error ever again?
Sandy Diao (Drift):
Mmm.parthpatil:
And then have it evaluate itself from that perspective: what was missing that would have made this work, so we don’t run into this issue ever again? That prompt ends up being very effective, because instead of me, you know, prompting back and forth to fix this particular task,
I’m trying to prompt to improve the system so that this is never a problem again.
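That retrospective prompt can be captured as a small template, so a one-off failure becomes a system-improvement request rather than a one-off fix. The wording below paraphrases the conversation and is not an exact quote:

```python
# The wording below paraphrases the retrospective prompt from the
# conversation; adjust it to your own agent setup.
RETRO_PROMPT = """You just made this error:

{error}

Instead of fixing only this one task, answer:
1. What context was missing such that you would never make this error again?
2. What tools were missing?
Then propose concrete changes to your instructions or tooling."""

def build_retro_prompt(error_description: str) -> str:
    """Turn a one-off failure into a system-improvement request, so the
    fix lands in the harness rather than in a single conversation."""
    return RETRO_PROMPT.format(error=error_description)
```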
Sandy Diao (Drift):
Right, exactly. And then also, one of my other favorites is, yeah, reverse engineering the prompt, asking the agent to write the original prompt itself, right? Which, to your point, you know, it only happens if you’re being constructive and trying to actually proactively improve, you know, why the output wasn’t good in the first place, so… Exactly. Yeah, definitely agree with that.
And Parth, another thing to get your perspective on, a common question that I see many of the teams I work with really struggling with is. figuring out whether or not they should build or buy AI.
So, you know, as we alluded to earlier in the conversation, as the tools become more accessible and it becomes easier to use them for everybody, it just feels like it’s increasingly more compelling to build things. So, you know, for example, if myself or my team can vibe code a usable CRM like Salesforce in a weekend, then why would I ever pay for Salesforce again? Or why would anyone?
You know, where does that logic perhaps break down, or does it at all?parthpatil:
Yeah, the urge to vibe code Salesforce is strong. So, when to vibe code versus when to buy: the build-versus-buy debate, it’s interesting. I have a new perspective, it’s still evolving, but I think that for the first time in maybe ever,
it might be faster or more effective to build the thing that you need than to buy it. In the time you would go and evaluate all the tools out there in the market and whether they fit what you’re trying to do, you might be able to build the first version of the thing in-house.
So, like, tell Claude Code about the workflow, and it builds actually exactly what you need, instead of, like, you signing up for a subscription for something which has you know, 80% of Salesforce you’re probably not using.
And actually, it’s a one-size-fits-all solution for sales, but what you need is maybe slightly more customized, slightly more custom to your domain, your business. The problem is: you feel like you can make anything with AI.
But you still have to choose the few most important things to make with the AI. Like, you can’t just make everything, yet. You have to choose. You feel like you can do anything, but you have to choose. So I think it’s about building in-house the capabilities that are actually unique to your domain, that are going to give you an advantage.
Whereas you probably don’t want to vibe code something that has to do with, like, compliance, regulation, stuff that kind of stays the same. That kind of stuff that’s not core to your business, but important. Like, there are things that you would just pay for instead of trying to solve yourself.
The things you want to solve yourself are probably the things that are closer to your domain, your unique customer problem.
Where, if you serve that customer problem better, and you have in-house tools that you’ve vibe-coded that are, like, very much tailored to that problem space, I think that’s where I would say it’s, like, the most valuable thing to Vibe code is the thing that’s unique to your business model, your customer, your customer’s problem.
Those kinds of tools are probably best to build, rather than buy. Because when you buy something, you’re paying for something that someone else is updating, a lot of which you don’t actually need, and then when you want them to change it, some product manager at Salesforce has to go add some feature, but it’s not going to get prioritized anytime soon.
Like, you’re moving at their speed instead of your speed. So, if you want the technology you’re using to be high-speed iteration, improving as quickly as you would want your business to grow, those are the things that you would want to vibe code, or build in-house. I don’t even know if I would call it vibe coding.
Like, the things you want to build in-house, should be the things that are actually going to give you an advantage with your customer and your… an advantage against your competition in your actual problem space, rather than, like, the generic thing that you can buy off of the shelf that is… you know, like, HR software.
I don’t know if you need to vibe code that, because you can get perfectly good HR software that is one-size-fits-all. That’s kind of how I think about it. Yeah.
Sandy Diao (Drift):
That’s really great advice, and I think to apply that to maybe the broader Salesforce example we had here, I think there are a lot of sales-led growth teams, where sales is, like, one of their core competencies, and what they end up doing when they consider tools like Salesforce is it creates this, you know, 3-6 month custom implementation cycle, because the tool itself today doesn’t actually solve the needs or, you know, that sales workflow.
In that case, like, maybe vibe coding actually is a potential solution that gets them there way faster and way cheaper.
parthpatil:
Yeah, exactly.Sandy Diao (Drift):
Yeah, love that. We’re running out of time here. I would love to actually take a question from the audience, a question for you, Parth. Do you see AI prompting becoming more like classic code, in that it should be engineered behind the scenes for most future users?
Most typical employees in most companies don’t code, so are we preventing AI adoption by relying on non-technical users to code prompts and become the expert?parthpatil:
I think… I think… okay, so in my experience, I… I started off as, like, a… SQL guy, which was only good for talking to databases. Then I became a data analyst, so I picked up Python. And then it was because of language models that I branched out into every other language.
And I wouldn’t say that I’m good at every language, programming language, HTML, CSS, JavaScript, Go, all these other programming languages that were not my core in the pre-AI days. But because the language models are so good at those languages, we are using the… I am using those languages in the things that I build all the time.
And I don’t think it’s about… question about being technical versus non-technical. I think it’s about understanding, are you articulating the problem correctly, and are you using the best solution available to solve that problem? Some of those solutions might be languages that you’re not formally trained in, but you will be using them. So then, can you critique the system?
Can you make sure that the system is demonstrating that it works: asking for tests, making sure it has the right tooling, that it’s actually doing what it says it’s doing? So being able to inspect the overall system, knowing that some of the components are in languages that are not your core competency.
We will be programming in English, and under the hood we’re being abstracted away from the code layer; it’s impossible to read every line of code that AI generates. AI is generating code at 1200 tokens per second, and human reading speed is 7 tokens per second.
So, we basically have to think about the system at a more abstract level, and ask for the things that prove that the code works, rather than read and understand every single line. Like, allow the AI to test itself. Adversarial testing, like having another AI reviewing and critiquing it, multiple AIs working on the problem.
And then having it essentially prove deterministically that the thing is doing what you need it to do, versus us kind of, like, hoping it gets it right on the first try. Or even trusting that it says, you know, when it says, oh, I think it works, it’s like, okay, well, you can’t trust it, you have to… you have to prove it, right? Prove it in code.
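The "prove it in code" stance can be sketched as a spec-based verifier: instead of trusting the model's claim that its code works, run the generated function against deterministic input/output pairs. This is an illustrative pattern, not any particular tool:

```python
def verify(candidate, spec):
    """Check a generated function against a deterministic spec of
    (args, expected) pairs instead of trusting the model's own claim
    that the code works. Returns a list of failures; an empty list
    means the candidate passed every case in the spec."""
    failures = []
    for args, expected in spec:
        try:
            got = candidate(*args)
        except Exception as exc:  # a crash is also a failure
            failures.append((args, f"raised {exc!r}"))
            continue
        if got != expected:
            failures.append((args, f"expected {expected!r}, got {got!r}"))
    return failures
```

An adversarial reviewer model can then be pointed at the spec itself, proposing cases the first model is likely to get wrong.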
So more people will be working in languages that they’re not formally trained in, and that’s a new kind of skill. Yeah.
Sandy Diao (Drift):
Yeah, love your take on that, Parth. We’re just at time here, so we’re gonna wrap up. Maybe just a super quick one, Julia, if we have a second here. Parth, at the top of the conversation, you mentioned that you cashed out your savings to take some time to figure out how all this works, and for most people listening today, they probably won’t do that, but…parthpatil:
I worked through… I really went through my 401K.Sandy Diao (Drift):
Oh my gosh.parthpatil:
Don’t recommend it.Sandy Diao (Drift):
What’s the… what’s the smallest bet that someone listening could make this week that would actually teach them something real?parthpatil:
Yeah, so I would say I really encourage everyone to pick up one of the two frontier coding agents, even if you’re not a programmer: Codex or Claude Code. I highly recommend Claude Code. And then just ask it to make something, but not necessarily work-related; ask it to make something you wish existed, right?
Like, maybe it’s a project that you’re working on, maybe it’s your personal website. I think everyone should think about, you know, asking AI to help them make a personal website.
Because if you align it with your intrinsic motivations, it’s easy to put time in. And I think we shouldn’t feel like we’re forced to do this; we should feel like we get to do this, and it’s kind of exploratory. And making something for yourself, or your friends, or your family is a good place to start.
Make a game, a simple website, and use the… use Claude Code or Codex, one of the frontier language models. And you’ll be surprised at what you can do in just one afternoon interacting with these systems.Sandy Diao (Drift):
Love that. Let your personal passion push your learning agenda here. Great. Well, back to you, Julia.Julia Nimchinski:
Thank you so much for a sensational session here, Parth and Sandy. What’s the best way to support you?parthpatil:
I think, you know, connect with me on LinkedIn, I’d say, or go to my website. I have agents working on the website, so it’s gonna keep getting better: parth.club.
And mostly, I’m just interested in what people are making with AI. Hopefully I can share what I’m learning, and you can share what you’re learning, and we can see what techniques are coming online, because there is no textbook, right? So we’re all kind of figuring this out together.
So these kinds of forums, these dialogues, I really enjoy, you know, meeting people that are making things. Meeting people that have never coded that are starting to make things. So, that learning curve, making that easier, I just want to be a part of that.Julia Nimchinski:
Love it. How about yourself, Sandy?Sandy Diao (Drift):
Yeah, I write about growth and how AI is changing how companies distribute products at sandydiao.substack.com, so feel free to check it out.
Julia Nimchinski:
Thanks so much again.