Text transcript

The Autonomous Business OS — Fireside Chat with Amos Bar-Joseph & Jason Napieralski

AI Summit Held March 24–26
Disclaimer: This transcript was created using AI
  • Julia Nimchinski:
    Welcome to the Agenda Glass Summit. We’re chronicling a major shift today and over the next three days: from system of record to a context-graph-based system of agents. We’ve got some top operators, analysts, VCs, and AI-native founders. Speaking of which, welcome to the show, Amos Bar-Joseph. Welcome to the show, Jason Napieralski. Super excited to feature you, super excited to kick off day one. And we’re starting on a very special note here: what does a business look like when execution is no longer human, or human-constrained? How are you doing? What’s on your agenda? Go ahead, Jason.

    Amos Bar Joseph:
    Wow. Thank you so much for having us, Julia. I feel like, you know, it was white t-shirt day, and you didn’t follow.

    Jason Napieralski:
    Yes, sir.

    Julia Nimchinski:
    I’m gonna go off-camera, so it’s gonna be good.

    Amos Bar Joseph:
    Okay. Amazing.

    Jason Napieralski:
    So thank you, Julia, appreciate it. Amos, great to see you again.

    Amos Bar Joseph:
    You too, Jason.

    Jason Napieralski:
    So, what we’re talking about today, I’m genuinely fascinated by, and after our conversation I’ve had a ton of questions come up about what’s going on with you. But to get everybody else excited, give us a little bit of your story: your product, how you got to where you are right now, and then we can dig in on questions related to that. That would be a great way to start, if you don’t mind.

    Amos Bar Joseph:
    Yeah, amazing, I would love to. So, I’m the co-founder and CEO of Swan AI, but it’s not my first rodeo. I’ve actually built and scaled two B2B startups before, based on the old unicorn growth-at-all-costs playbook, where you raise a lot of money before you even know who you’re selling to and what you’re selling. You get to 30, 40 people before you get to your first million dollars in revenue, and at every round you try to sell the total addressable market and the vision, rather than the real core metrics of the business. Sooner or later, you realize you’ve built your entire company on a very sick foundation: it’s very hard to maneuver, you’re not agile, there’s a lot of fat you need to trim. When I came to Swan AI about 18 months ago, I said to myself: with this notion of AI agents just emerging, it’s time to reinvent the playbook of how you scale a business, how you build a company from 0 to 1 and from 1 to 10, a company that is really designed to scale with intelligence, not with headcount, and that is designed around human-AI collaboration rather than human-to-human coordination. And so Swan is a bold experiment in figuring out how you get to $10 million ARR per employee by harnessing AI agents to make every employee on the team 100x better.

    Jason Napieralski:
    Right. So, what does Swan AI do?

    Amos Bar Joseph:
    That’s a great question. So, Swan AI is like Claude Code, but for GTM. It’s basically an AI go-to-market engineer, something between a developer and RevOps, that works with sales, marketing, founders, and ops to turn any go-to-market process into an agentic workflow in seconds, from prompt to pipeline, so you can really scale that process with intelligence, not with headcount. We’re just 4 people on the team right now. We have more than 200 customers. We generated over $120,000 ARR just in the last 10 days. We’re scaling fast. I’m a single-person GTM department using Swan to show the world that you can really 100x yourself with the right systems thinking, methodologies, foundations, and technologies in place.

    Jason Napieralski:
    Awesome. And so, give me a quick customer story: a customer signs up, they use it, what do they do, where does the rubber meet the road, and what benefits do they get from it?

    Amos Bar Joseph:
    Yeah, cool. So, I would say there are two different types of stories. There’s the super early stage, and then there are the more mature GTM motions, and those come with different challenges. If you look at the more mature side of things, the biggest challenge we’re seeing right now in GTM has nothing to do with AI; it’s a regular challenge. It’s MQL to SQL, as we call it. Marketing does a lot of work and makes a lot of noise, a lot of people are coming in, but then you want to translate that noise into qualified pipeline. And that’s a handoff between two departments that care about two different things and are optimizing for two different things, and a lot just gets lost in translation there. I haven’t seen a company that comes to me and tells me, yeah, our MQL-to-SQL processes are fine, we don’t need help there, right? So usually, the way that it works is…

    Jason Napieralski:
    So, can you define that for the audience, just in case we’re not all up on the acronyms?

    Amos Bar Joseph:
    Yes, yes, thank you for that, Jason. So, an MQL is a marketing-qualified lead. It’s basically a company’s way of saying: this account is worth our attention more than all of the other accounts circulating in our orbit, because we’ve had one or several touchpoints with them, and it’s time for sales to start working on turning that MQL into a deal. And an SQL is a sales-qualified lead: the sales definition for saying, this account is worth sales attention to work towards a deal, because there’s actually a fit here, and we could potentially sell them our services. So it’s two different definitions from two different departments. The first is more about qualified attention, and the second is about qualified, real interest in the product.

    Jason Napieralski:
    Got it. So, your product is not… I don’t mean this in a bad way, but it’s not unique in the world, right? It’s solving an existing problem, using pretty well-defined processes that already existed to solve that problem. Is that correct?

    Amos Bar Joseph:
    100%.

  • Jason Napieralski:

    Okay. So, in your old world of standard B2B, my old world too, obviously, what you were selling was the potential of growth, or the potential that some big fish would eat you, right? That was really the sales pitch. You would get into the venture capital ecosystem, and they would, in my opinion, unethically sell that promise up the chain. You start with seed, you go to Series A, B, C, but they buy from each other. They already know each other, they’ve got a pipeline, that’s the sale that goes on; they could automate that process, because they don’t really look. They’re just like: if Y Combinator bought it, then we’re gonna invest too, and so on up the chain. And the moat there was getting somebody to believe your vision had future value, right? Getting somebody to believe that the net present value of your future vision was worth something. That was the moat, and the only moat. If you could get somebody to give you some money, you could compete against people doing it themselves. So this is a huge game changer, a paradigm shift, because with just a few thousand credits, you have now eliminated that particular moat. You don’t need that venture capital funding to build. You can build on your own, all day long, just sitting by yourself. So, first question: why did you go back to that venture capital well? If you could do it yourself, why not do it yourself and keep all the equity?

    Amos Bar Joseph:
    Yeah. So first, for those of you who don’t know, we raised $6 million for Swan.

    Jason Napieralski:
    Oh, that’s a great call-out, because that was a big miss on my part on the context here.

    Amos Bar Joseph:
    Yeah, yeah, yeah.

    Jason Napieralski:
    So the whole point of our conversation here, for the audience’s sake, is that we have heard that there is going to be a billion-dollar solopreneur: someone running a billion-dollar company all by themselves. We’ve heard this for a year or more, since AI started getting really scary and interesting. And while Amos isn’t there, he’s the closest I’ve seen to it, with just a very few people. So what’s very interesting is that this is happening as we drop into the trough of disillusionment… do you know the Gartner hype cycle, Amos?

    Amos Bar Joseph:
    Yeah.

    Jason Napieralski:
    So we’re now way past the peak of inflated expectations, and we’re in the trough of disillusionment. For you to raise money in the trough of disillusionment is very interesting to me, because that’s usually when VCs start to get scared and the hype is gone. So, how did you do it? And why did you do it? Those are my two main questions.

    Amos Bar Joseph:
    Yeah, definitely. So, even though we’re taking a radical approach, I’m not a person of extremes. The alternative model that we’re suggesting to the startup world is to transition from building startups on the old unicorn model to building autonomous businesses. But, and there’s a huge but here, these autonomous businesses are not solely built on AI. They’re still built on people, first and foremost, and we actually call them autonomous because they maximize the autonomy of both humans and AI agents, and we can dive into that later, Jason. To your point, what we’re actually trying to prove here is that this new model requires 10x less funding, but is still fundable, and it’s still interesting to build a business with an outsized outcome. I’m not just building it for the sake of efficiency. Because building software got a lot easier and the barrier to entry dropped, the old model doesn’t work anymore: pouring so much capital into these businesses and expecting a $10 billion exit, or an IPO. But what if there were an alternative model where VCs could put in less money, need fewer follow-on investments, and take on lower risk, because these businesses are much more resilient? They can operate, get to profitability, survive; the entire risk dynamics are different, and you can still get an outsized outcome. Maybe you don’t get to $10 billion, but maybe you get to $500 million. A VC can say: interesting, I can put in just $2 million, I don’t need follow-on investments, and I can exit at $500 million and get a 20x return. That’s interesting, I’m in.
    So it’s really about changing the entire ecosystem here: both how builders and how investors operate.

    Jason Napieralski:
    Now, Amos, are you hands-on keyboard? You’ve got Cursor running, you’ve got agents going, and are you building this too?

    Amos Bar Joseph:
    So… I am technical, but I don’t think it’s a good use of my time to spend it building those agents. The way we think autonomous businesses work is not that every person on the team builds AI agents. What happens in an autonomous business is that R&D transforms from a cost center into a profit center: R&D starts manifesting AI into the company, and the users don’t need to understand how to build the AI, they just need to understand how to interact with the AI that was given to them, which is a much simpler and much more reasonable task, right? So the people that control the logic of how the AI is built, from an engineering perspective, are R&D. But then, Jason, and this is a big, big transition, because these agents, and we can talk about how, are adaptable to your methodologies, the responsibility of context engineering actually flows down to the users. R&D doesn’t necessarily need to understand the exact processes and workflows of the GTM, support, or ops people. They just need to give them the right infrastructure, and then, because it’s easy to talk to a coding agent like Swan, you can tell it: you know what, next time you’re qualifying a lead, make sure you’re assessing their ACV size. Swan will remember that and incorporate it into the workflow. So, R&D does system engineering, and the rest of the team does context engineering. I’m responsible for a lot of the context contribution to our agentic ecosystem in the company.
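The memory loop Amos describes, tell the agent something once and every later run sees it, can be sketched in a few lines. This is a hypothetical illustration only; the file name and prompt shape are assumptions, not Swan’s actual mechanism:

```python
import pathlib

# Illustrative memory file where user-contributed context accumulates.
MEMORY = pathlib.Path("agent_memory.md")

def remember(instruction: str) -> None:
    """Persist a standing instruction, e.g. 'assess ACV size when qualifying'."""
    with MEMORY.open("a") as f:
        f.write(f"- {instruction}\n")

def build_prompt(task: str) -> str:
    """Prepend all accumulated instructions to a new task for the agent."""
    notes = MEMORY.read_text() if MEMORY.exists() else ""
    return f"Standing instructions:\n{notes}\nTask: {task}"
```

Under this sketch, `remember("Next time you qualify a lead, assess their ACV size.")` means every subsequent `build_prompt("Qualify the inbound lead from Acme.")` carries that instruction along.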

    Jason Napieralski:
    Got it. So, recently there’s been the claw ecosystem that has come out, right? There was Clawdbot, and then OpenClaw, and then different claws, microclaws, mini claws… And what really gave that oxygen was the CEO of NVIDIA saying last week, I think the quote was, every company should have a claw strategy.

    Amos Bar Joseph:
    Exactly, yeah.

    Jason Napieralski:
    Every company. And he said, as a matter of fact, we’re gonna have one too; it’s called Nemo Claw. Here’s yet another claw.

    Amos Bar Joseph:
    Yeah.

    Jason Napieralski:
    So, the claw world, to me, and I’ve been watching this really closely for a long time: Manus AI was kind of the first one that came out as this agentic computer-use machine that would work. The claw system democratizes that kind of feature. And Anthropic, obviously, just released yesterday their own, more computer-use-oriented Claude bot. So, in my simple mind, having looked at it a little, I picture your product as a claw-type system, but specialized for marketing. It’s already done the heavy lifting. I’ve personally set up about a dozen of these claws. Right now I’m on the microclaw; that one was my favorite. The open claw, as a lot of people pointed out, was obviously a security nightmare and disaster, but very powerful. But I have to spend a tremendous amount of time setting it up: which models to use, which tool calling to do, fixing problems, updating, patching, all those types of things, just to get it to do literally anything. It’s the end of the ski season, and I wanted to search for new skis I can buy for next year, so I have the claw doing that, but it took about 7 to 8 hours of setup. The reason I say all this is that, in my view, you have built something specialized and domain-based, right in the sales and marketing area, and the value of it has nothing to do with the technology. The value of what you’ve built is the years of your B2B experience: the years of knowing what a marketing-qualified lead and a sales-qualified lead are, how a pipeline works, all of that. You’ve built a claw with the knowledge gained over years
    of doing something specialized, and a user can drop it in and get value today. Whereas if you try to do a claw thing yourself, you’ve got a fun battle to fight, and you end up screaming, “I have a very unhealthy relationship with my claw at this moment.” So the point is: right now, where we sit in AI, do you think the technology is what matters, or is it the domain expertise and the understanding of how business actually gets done, how a bill becomes a law, what the process is to get a check? Which do you think is more important?

    Amos Bar Joseph:
    Yeah, amazing, love that. So many interesting points there. First of all, to your question: what’s the difference between Swan and the claws? Let me paraphrase a term that is emerging in the AI science industry, and I think it’s interesting that it’s emerging from the science side. It’s called taste. When Ilya Sutskever, the co-founder of OpenAI and now the founder of Safe Superintelligence, coined it, he said: when we try to think about what makes a good human AI researcher, what makes them better than the AI? They have good taste. They know which questions to ask; not necessarily how fast to iterate toward an answer, but which questions to ask, and that comes from a lot of knowledge, like you said, Jason, from expertise and a point of view. So when you’re working with Swan, you get, say, a claw version that has our taste. And our taste is shaped by two things, Jason: one is my previous experience building and scaling two successful B2B startups, and the other is building an AI-native business today that tries to understand what the businesses of tomorrow will look like, and then translating that backwards into Swan to help you go through that transformation as well. That’s the taste you’re getting, the opinionated product you’re getting, and that, by the way, is a lot of the moat we’re going to see in companies in the future: taste has a compounding effect.
    Taste manifests in thousands and hundreds of thousands of decisions over the lifetime of a company, and you can’t really mimic that. That will eventually separate these claws and Claude Codes from one another. And another point, sorry for going a little chronologically through what you just mentioned, because it was super interesting, about Jensen Huang. The CEO of NVIDIA said that every CEO needs to have an OpenClaw strategy, right? I actually reject that, and I think Jensen has a very strong incentive to say it. OpenClaw, for those of you who don’t know, is an open-source project, and in a world where every company adopts an open-source agent, that creates a lot of competition in the market that consumes all these tokens, which ultimately run on NVIDIA hardware. There’s no vendor lock-in; no one above NVIDIA in the supply chain can start controlling token consumption for the population, because it’s democratized. So Jensen has a very strong motivation to see a lot of the AI applications out there open-sourced, and that’s the main reason he’s saying that. What we’re actually seeing here, Jason, I think, is a fundamental shift, and that will get to your point about what’s preventing it from happening right now. I still remember your question, and sorry for not getting to it yet.

    Jason Napieralski:
    Yeah, no.

    Amos Bar Joseph:
    So, what we saw isn’t that OpenClaw is the future. What we saw is that the natural habitat for an AI agent is a coding agent with a command line interface and a file system. If you take these three components and build what’s called a harness around them, a wrapper a user can just speak natural language into, and it starts doing its own magic, then that constitutes the natural habitat of an AI agent. What does that mean? It means our first attempt at building agents was wrong. We thought we should talk to them in natural language, and that they should talk in natural language to all of their environments too, so when they wanted to work with something, they would also speak, and we would build all these abstractions on top. What OpenClaw did beautifully was abstract away all those abstractions and create a very beautiful environment for the agent to work in. And then, finally, Jason, to your point: the only thing left in an environment like that is how to do stuff properly. The only thing separating a coding agent with a file system, like OpenClaw, from executing at 100x scale is methodologies, foundations, and how you should actually work. When a company tries to pin an OpenClaw strategy on top of bad processes, they just scale bad processes. The ones that can identify exactly, we have an MQL-to-SQL challenge, and if we just change this specific bit here we’d make a 10x return, those are the ones where applying AI will actually be successful.
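The harness idea, a model plus a command line plus a file system, wrapped so a user can speak natural language into it, can be sketched minimally. Everything here is assumed for illustration (the stub model stands in for a real LLM call); it is not OpenClaw’s or Swan’s implementation:

```python
import pathlib
import subprocess

def stub_model(request: str) -> str:
    """Stand-in for an LLM: maps a natural-language request to a shell command."""
    return "ls" if "list" in request.lower() else "echo done"

class Harness:
    """Minimal agent harness: a model, a command line, and a file system."""

    def __init__(self, model, workdir: str = "workspace"):
        self.model = model
        self.workdir = pathlib.Path(workdir)
        self.workdir.mkdir(exist_ok=True)  # the agent's scoped file system

    def run(self, request: str) -> str:
        command = self.model(request)  # natural language in, a command out
        result = subprocess.run(command, shell=True, capture_output=True,
                                text=True, cwd=self.workdir)
        return result.stdout.strip()
```

The design choice the sketch highlights is the same one discussed above: the agent does not “speak” to its environment, it acts through a shell scoped to a file system, and the natural-language surface exists only at the user boundary.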

  • Jason Napieralski:
    Got it. One of the things I struggle with here is that a lot of these stories are really interesting from a small-business and founder perspective. We think a small business has a lower risk profile, so they’re willing to adopt it, versus Home Depot, for example. And really, the money to be made is in enterprises adopting this stuff, because they aren’t. They’re doing it in different ways, they’re doing it on the side; the last stat I saw was a 94% failure rate. So it’s not working, from their perspective. Learning what you’ve learned, and taking your experience, while you’re not at the top enterprise yet, what are the things you’ve done that let founders and other people who bought your software trust it? The reason I ask is that yesterday I was driving in the middle of town, and I knew there was a Chipotle nearby. I asked Gemini where the nearest Chipotle was. It told me the nearest Chipotle was at my last address in Seattle. I’m in Utah. So it messes up, and it messes up weirdly; even Opus 4.6 messes up, and they mess up big. Enterprises can’t afford to have the AI customer-service bot decide to offer a 30% discount on everything they own for a full day, right? So, in that regard: one, help our enterprise listeners understand how to take your lessons and adopt them in their business; and two, when do enterprises trust a service like yours? How are you putting in the anti-hallucination and all the other stuff that makes it so it’s not, for lack of a better term, just purely stupid?

    Amos Bar Joseph:
    Yeah, amazing. So, I think it’s important to separate the consumer use case from the enterprise use case; it’s actually helpful for this demonstration. A consumer asking Gemini something: Gemini either goes online and maybe gets the wrong information, or tries to answer from its pre-trained data. It’s very hard to control those hallucinations on the consumer side. But when you’re operating in an enterprise environment, those are not the hallucinations you’re referring to. A hallucination in an enterprise setting is more like: look, we do things this way, and the AI did them differently. For example, take a support ticket: someone asked the AI how to do something, we have a very clear answer for that in our docs, but it somehow didn’t fetch that specific information. Or, more importantly in GTM, Jason, there’s a layer called enablement. A lot of companies don’t invest in it much; enterprises usually invest in enablement fairly more, and enablement is the layer that prevents hallucinations from humans, okay? When you’re implementing AI, you need to account for two things. One: you need to build a system that is designed for adaptation, not for perfection, from day one. What does that mean? It means you’re not trying to deploy an autonomous SDR that will close all your pipeline from day one.
    You’re actually trying to refine a human-AI collaboration workflow in an iterative process, and you need the ability to iterate on the workflow itself rapidly, because you don’t know what the ideal outcome should be when you start: there are humans and AI here together, and you don’t really understand both of them yet. So you need to adapt. The second thing, Jason.

    Jason Napieralski:
    Hold on, can I pause you there? I just want to ask a question on that. Given your experience with large-scale agentic systems, is it better to have lots of dumb bots, or one uber-smart bot? It’s the million monkeys on a million typewriters analogy, right? Is it better to have the million monkeys, or is it better to have Albert Einstein?

    Amos Bar Joseph:
    Yeah, it’s a good question. The answer, first of all, is that it’s better to have one agent rather than a multi-agent architecture, which is actually in contrast to what a lot of people think. If you look at Clawdbot, the claw, it’s one agent and a file system. The way we found that it works better, for humans and for AIs simultaneously, is to have one agent that can access a file system, and you organize the enablement layer in GTM the same way you did in your Google Drive or in Notion, just inside that agent’s file system. You’ll start seeing a lot of these agents popping up, like Claude, etc. The second thing is that it’s not just easier for the agent to reconstruct the context and specialize itself for each task; it’s also easier for humans to organize information in a file system. We did that for our entire IT age: we organized information in file systems, and this mimics the same thing. So the way people work with Swan, Jason, is that they move their enablement layer into a similar file system, and the only difference is that the AI can always access it in real time; they just give it directions: pull this now, pull that later, this is how we do qualification, this is an MQL, this is how we define an SQL, and so on.
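The enablement-layer idea above, plain files the agent can always read, plus routing directions like “pull this for qualification”, might look like the following sketch. The file names, contents, and keyword routing are hypothetical examples, not Swan’s actual layout:

```python
import pathlib

# Illustrative enablement files, organized like a shared drive.
ENABLEMENT = {
    "qualification.md": (
        "MQL: marketing-qualified attention (touchpoints with us).\n"
        "SQL: sales-qualified interest (real fit, worth working a deal).\n"
        "Always assess ACV size when qualifying."
    ),
    "handoff.md": "An MQL becomes an SQL after a fit check against the ICP.",
}

def build_enablement(root: str = "enablement") -> pathlib.Path:
    """Write the enablement layer into the agent's file system."""
    base = pathlib.Path(root)
    base.mkdir(exist_ok=True)
    for name, body in ENABLEMENT.items():
        (base / name).write_text(body)
    return base

def load_context(task: str, root: str = "enablement") -> str:
    """Naive routing: pull only the files whose topics appear in the task."""
    routes = {"qualif": "qualification.md", "handoff": "handoff.md"}
    base = pathlib.Path(root)
    picked = [f for key, f in routes.items() if key in task.lower()]
    return "\n\n".join((base / f).read_text() for f in picked)
```

The point of the single-agent design is visible here: one agent reconstructs task-specific context on demand from files humans already know how to maintain, instead of coordinating many specialized bots.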

    Jason Napieralski:
    So, I think we’re out of time here; that’s a good answer. Julia, we’ve barely scratched the surface, and we need to go on to your next topic, but I’m wondering, if the audience is interested, whether we could do an HSE kind of second session in the future for those who want to go deeper and maybe a lot longer. I’m about one paragraph into the questions I wanted to ask Amos, so there’s a lot to unpack, and I think this is a very interesting case your audience might be interested in. Just wanted to offer that for the future.

    Julia Nimchinski:
    Thank you so much, what a strong opening. We got so many questions, I tried to share them in the chat. But yeah, we definitely should do a follow-up session, and for now, what’s the best way for the community to support you? Where should our people go, Jason and Amos?

    Jason Napieralski:
    Well, for me, just reach out on LinkedIn if you want to have any conversations about the things I’m talking and writing about, and look at my blogs and the things I’m writing there. I am an AI practitioner who is also a skeptic. I’m doing it, I’m deep in it, I’ve drunk the Kool-Aid, I’m in the cult, but I’m kind of standing in the corner judging everybody at the same time. So if you want that kind of perspective, I can provide it.

    Amos Bar Joseph:
    Amazing. Jason, I share the same value: it’s important to ask why, and not follow blindly. If you want to join my ride in building an autonomous business, you can follow my LinkedIn profile; I have more than 40,000 followers and share a lot of cool stuff there. I also have a newsletter called The Autonomous Age. If you want a behind-the-scenes view of how we’re building this autonomous business, the wins, the losses, the challenges, you can subscribe and have a front-row seat.

    Jason Napieralski:
    Thanks, everybody.

    Julia Nimchinski:
    Thanks again, and now let’s dive into the next trillion-dollar category: the context graph.
