Text transcript

The AI Collaboration Index 2026 — Fireside Chat with Ashley Faus & Tooba Durraze

AI Summit held on Dec 9–11
Disclaimer: This transcript was created using AI
  • Mark Organ:
    Thank you. Welcome to the show, Ashley and Tooba!
    Julia Nimchinski:
    Excited to have you. Ashley Faus is Head of Lifecycle Marketing at Atlassian, and Dr. Tooba Durraze is the founder and CEO of Amoeba. Super excited. How are you doing, and what’s your top GTM AI prediction for 2026?
    Tooba Durraze:
    Ashley, you go. I let you go first.
    Ashley Faus:
    Oh man, top… so, I’ll preview what we’re gonna talk about in our fireside chat: I think it’s gonna move beyond the focus on personal productivity and really move into team-based collaboration. Right now, everybody’s been super focused on how individuals learn to prompt and learn to build an agent, and I think we’re gonna start to see it become much more of a team-based, collaboration-based focus in 2026.
    Julia Nimchinski:
    Love it. Tooba?
    Tooba Durraze:
    Yeah, I’ll piggyback on Ashley’s prediction. A lot of the activity right now is about individual consumption, and just from where the technology is headed, and where go-to-market is headed in general, I think we’re going to see a ton more collaborative uplifting of AI use within organizations. So, as Ashley said, it’s the perfect time for this conversation, honestly.
    Julia Nimchinski:
    Awesome. Let’s get into it.
    Tooba Durraze:
    Okay, well, thanks for having us. We got a chance to do a brief intro, but as a data geek, I’ll say first off: going into your report, the AI Collaboration Index report, which we can link in the comments here, was amazing for a data junkie like me. So I have a couple of questions, and I think they’ll spark some conversation between us, but I’ll start with a data question, because certain stats really stood out to me, and I’d love your take on them. One stat I read was that only 4% of organizations achieve any kind of real AI transformation. When we at Amoeba work with go-to-market teams, we’re also finding that the barrier to adoption isn’t the technology itself, it’s that cross-functional collaboration. From your perspective and in your experience, what do you think is the root cause of this transformation gap?
    Ashley Faus:
    Yeah, that stat… so just to give a little background on this report: we have a team of researchers. They are actual scientists, they have PhDs, and I love the rigor that they bring to this research. It’s not meant to be a thinly veiled sales pitch; it’s actually trying to figure out what the sentiment is, what the usage is, and what the capabilities are. And that 4% stat also stood out to me. I was like, wow, everybody is talking about it, we’ve got this insane hype curve, but I think what you’re seeing is the gap with the adoption curve. And as I said in the predictions, it is that hyper-focus on productivity. A lot of leaders are basically mandating to their teams: you have to use AI, you have to be proficient with AI, don’t come to me and ask for new headcount unless you show me that a person can do this better than AI. And the problem with that is it puts the onus on every single person in the organization to figure out the tools, figure out how to use them, figure out their workflows. So people are off in their own little silos, not even in their teams, in their own personal workflows, trying to figure out: how do I use this so I can show value, so I’m working on the right things, so I look good when it comes time for promotions, or when it comes time to make the business case for more budget or more resources? All of that language of “you need to figure this out, you need to do this” basically reinforces that it’s on the individual. It’s not thinking about it from a system-wide, integrated approach of how the organization and teams shift into this new world. It’s still 100% on the individual to just figure it out themselves.
    Tooba Durraze:
    Yeah, and actually, you bring up a really good point, because if you take the word AI out of the equation, we went through this transformation before, where we needed to move from individual thinking to systems thinking, right? That was all the hype in the early 2000s: digital transformation was about what the system needs to do, instead of how an individual contributes to the system. So, to the folks listening in the room, this should not be new to us. We’ve done this before, but reinventing it with AI, I think, is a big thing. And by the way, to Ashley’s point about this being a really good, well-vetted report: I have a PhD in data science as well, and I thoroughly vetted the report, if that means anything. Here’s another MIT PhD saying it’s a really great report, and I don’t actually stand behind all MIT reports, so I will say this is a great one for folks to read through. Another thing in the report that stood out to me was this idea of output-driven intelligence. A lot of our interaction model with these systems right now is conversational, so everyone’s focused on what they can find answers to, etc. But more answers doesn’t necessarily mean more intelligence, or more applicable intelligence. There’s a phrase in the report, “intelligence without collaboration,” where the volume of outputs increases but the collective intelligence might decrease, because there’s not enough collaboration happening. In your experience, what are some of the early signs a company can watch out for, that they might be hitting this breaking point?
    Ashley Faus:
    So, this is kind of counterintuitive, but it actually shows up in the human interaction. You show up to a meeting, or you show up to present, let’s say, a QBR or campaign results, and you’re like, I used AI to tell me this information, or to pull this data for me. And somebody else speaks up, and they’re like, I actually asked the same question, and I got different results, right? So what you’re seeing is that either the way they’re asking the questions, or the sources their particular AI tool has access to, are siloed into a different place. We see this a lot. Again, you’ve got the data science background: data science has a lot more access to all of the raw data, whereas for most marketing teams it’s coming to us through some sort of business intelligence or visualization tool. So if, let’s say, I only have access to Tableau as the marketer, but you as the data scientist or data engineer have access to the full data lake, then when you do AI things on the data versus when I do AI things on the data, we’re not actually working from the same data set. In theory we are, right? Because Tableau should be pulling from the same underlying data. But if we both show up and our numbers don’t match, or our information doesn’t match, it likely means our systems are not connected, both from a human perspective, in terms of what I think is the source of truth, and from a data perspective. So that’s one thing. The other big thing I’ll say is the gap in power users, capabilities, and overall usage. This is something I’m really thinking about as a manager. I’ve got a couple of power users on my team, and I would actually say I am not necessarily one of those power users, right? I’ve got a couple of folks on my team who are using it all the time, constantly testing, sharing with the rest of the team, and I have a couple of other folks who don’t default to AI, don’t use it all the time, and are constantly having to ask questions of those power users. When you see that across the organization, that you’ve got folks who say, I use AI maybe once in a while, it’s not part of my workload, and then you’ve got other folks who are like, oh yeah, I’ve got 20 custom agents, and we’ve got automation, and I have all of these excellent chat prompts, that gap also starts to show up as: okay, you have not actually done a great job connecting the people from a training perspective, from an empowerment and experimentation perspective. So, those would be the two things I would say. If you’ve got people showing up saying, well, AI told me this, and getting different answers, or you show up and you’ve got this really big spread across the organization between consistent power users and inconsistent, less effective usage, that, I think, is a gap that shows you’ve got intelligence, but you don’t have that collaboration.
    Tooba Durraze:
    Yeah, no, absolutely. And again, for folks listening in the room: trust but verify when it comes to AI outputs. If you are a software vendor building and selling AI products, this should be part of your product thinking and product design as well, because for us at Amoeba, we sit on top of a ton of data, and we’re bridging that gap between data scientists and marketers who don’t have baby data scientists on their teams, but the biggest piece still remains that the source of truth, the unified truth, needs to be the same, or agreed upon, I should say. You touched on something I think a lot of folks in this room might be interested in hearing about, which is: in this transformation, what is the role of a manager, and basically of any leader in the organization, in making this better? In your report, there’s a stat that says if your leaders are using AI, or at least thinking about using AI in the right way and bringing up those conversations, there is a 4X multiplier effect in the behaviors your associates model. So AI ends up becoming a team sport instead of, I’m an individual building 50 agents on the side because I want my life to be super productive. How have you thought about that as a manager, as a leader, embedding that into the natural flow of you and your team?
    Ashley Faus:
    So there are three things I’ll say. One is around modeling, and I’ll talk about that first. The second is around timelines, and how you empower folks to actually test and learn. And the third is around transparency. So, first, modeling it as a manager: we have a really strong internal blogging culture and internal Loom recording culture at Atlassian. It’s very common for folks to go to a conference, speak at a conference, whatever, and do a little write-up of that as a blog, or, if you’ve built an agent or tested a workflow, to make a little Loom video and share that across Slack, Confluence, etc. And so I make time to waste time testing tools, right? And I think that permission to fail, where I say, I tested these five tools for this use case, this tool completely failed, and the reality is, oh, I probably shouldn’t be using that tool to try to
    Tooba Durraze:
    Yeah.
    Ashley Faus:
    solve this problem, right? I’ve actually learned that I chose the wrong tool to solve this problem. So I have a whole write-up that I did. It’s been a couple of months now, but I was testing slide generation tools: I tested Gamma, I was testing the integration with Canva, and I even tried getting ChatGPT to do it. ChatGPT is completely the wrong tool for that, but I was early enough in my journey that I didn’t quite realize it, and once I did the research, I was like, call-out: maybe pick the right tools first, right? Don’t pick the wrong tools. And so me being willing to say, I wasted two or three hours playing around with these tools, I wasted an hour writing up what I found, I admitted that I didn’t know enough to choose the right tool the first time, and as part of this research I learned something about the overall space, that signals to my team: okay, my manager is testing, she doesn’t know everything, she found this or that, she took the time to write it up. That means I can now take the time to test, I can write things up, and I don’t have to have all the answers. So that’s the first thing. The second thing is being incremental with your timelines, building in that buffer. What frequently happens is companies hire a transformation expert, an AI expert, to come in and look at every single workflow and replace every single thing, right? And the problem with that is, A, you learn as you go, and you might find that the old ways of working don’t make any sense with AI. The goal is not to document every single thing we’re currently doing and just try to AI-ify it. That’s not how you want to think about it.
    Tooba Durraze:
    That’s true.
    Ashley Faus:
    The question is: what’s the right way to work, given these new tools? And so, what I try to do with my team is build in extra time, and I’ll ask them, I’ll say: it feels like AI could help us with this. Our tool at Atlassian is called Rovo, so it’s, it feels like Rovo could help with this. Why don’t you take, maybe, a couple of hours to play around with Rovo Chat and Rovo Agents?
    Tooba Durraze:
    Yeah.
    Ashley Faus:
    Play around with it, send me the output, tell me your assessment of that output. If you’re like, hey, this is great, but I need an additional day to play with it, or if you’re like, listen, it’s gonna take me a month to get this figured out, it turns out I have to build an agent, and it turns out I need different access, right? Okay, let’s talk about that from a timeline perspective, and then we can make the go or no-go decision: yep, take the time to play with it with AI, or, the timeline’s too tight, you’ve spent some time playing around with it, let’s go ahead and use your traditional skills to get this done. And then let’s run a little mini retro where you say, here’s why it didn’t work the first time, or here’s what I would have done differently, so that the rest of the team can learn. Building in those timelines incrementally gives my team the confidence to say, I want to test this thing, but I need a couple of extra days on the deadline. Can I do that? And for me, obviously, as a manager, 100%, or I guess I should say 98% of the time, I’m like, yes. It’s unreasonable to ask your team, in the same timeline you would normally give them to do something with skills they already have, to find the right tool, learn that tool, do the thing in the tool, fix whatever the output is, as you said, trust but verify, and then also teach the team why it worked. That’s five extra steps in there that you wouldn’t normally have.
    Tooba Durraze:
    Yeah.
    Ashley Faus:
    So build in that timeline. And then the last thing is transparency. This is a super easy thing that I do when I use Rovo to create things: I put a little box at the top of the Confluence page, or I’ll say it as part of the script in a Loom video, and I’ll say, I used Rovo to help me generate this. I used it for the initial draft, and then I edited these things. Or, I used Rovo to help me format this page; here are the resources I gave to Rovo and asked it to summarize for me, and here’s the stuff I added that’s new. That just tells anybody who lands on the page: AI was involved in this, and here’s how it was involved. That way you know with confidence, okay, a human reviewed this, or, hey, this is a draft generated exclusively by AI, you need to use more of your human critical thinking as you read it, versus, this is a final draft where I, as the human, paired with AI. That transparency, in conversations and on your pages, is really helpful for the rest of the team to be like, okay, she’s using AI, and here’s how she’s using it. That also gets them in the mindset of: it’s a thing we’re doing, we’ve got that space to play.
    Tooba Durraze:
    No, absolutely. The two themes I picked up from what you were saying are: one, documentation is important and necessary, but don’t stop at documentation; and two, giving room to experiment and room to fail as a leader is really important, because a lot of the stuff we’re going to try is probably going to fail in the first iteration. So the gains we’re asking about, the efficiency gains we’re thinking about, probably come after you’ve tried and tested a bunch of different approaches, and that needs to be a collective effort rather than an individual one. So, this idea of experimentation versus just documentation, because obviously people also think in very different ways. I love that, by the way, and as a note for the other leaders in the room: give your folks permission to fail. We don’t give our folks enough permission to fail, to experiment, to try, and humans are wired that way. When we experiment, when we try, we learn better and we implement things better. Okay, so we’ve talked a lot about tools, and about this collective mindset creating a transformation in the company. The one other thing that keeps coming up, and it comes up for us at Amoeba as well, is this idea that your outputs are only as good as your inputs. If your knowledge is fragmented, you brought up Tableau and connecting into warehouses, and your definitions of metrics are not consistent, a lot of what you’re trying to get as outputs will fail. Your report, the index, calls that out directly, so I’d love your thoughts on connected knowledge: what it looks like when it’s done right, and how companies should think about redefining their knowledge architecture so that you’re starting from the right spot, instead of just applying tools on top of a bunch of fragmented sources.

  • Ashley Faus:
    One handy thing in this whole shift to having connected knowledge, with all of your systems integrated: we can actually learn a lot from the shift to remote work and global teams. One of the big issues when that initial shift happened, and obviously it was happening before COVID, but 2020 accelerated it, was this gap between the work that happened online and the work that happened in person. For knowledge transfer, you can basically track everything that’s happening digitally, but you can’t track everything that’s happening in the hallway, or in the kitchen, or in a conference room, right? And as we shifted to remote work, more global teams, and asynchronous work, we also shifted our practices to be more digital-first. So, for example, at Atlassian, a super practical thing we all do: we invite our Loom note-takers to every single meeting, just by default. The meeting is recorded, and there are Loom note-takers, which means every conversation is automatically getting fed back into the data lake. Same thing with Slack: Rovo has an app connected in Slack, so it can get all of that context from those conversations. The other big thing is having rigor around your goal setting, your project posters, and your project kickoffs. So for us,
    Tooba Durraze:
    Yeah.
    Ashley Faus:
    all of that happens in Confluence and Jira, and everything is specifically labeled and called out. You’ve got a specific goal, and that goal in Jira is included in the project poster on Confluence. And we have good discipline around labeling things as draft, in progress, decision or change logs, final draft, retro, etc. That helps if you then go to Rovo and say, hey, find me the latest information about pricing, or, what did we decide about pricing? It knows to go to the changelog for this tool, and literally to the section that says pricing changelog: whether we’ve decided this is going to be free, or standard, or premium, whether it’s going to be license-based or consumption-based, whether it’s going to be a feature gate versus a user gate, right? All of that information is captured in the changelog. Instead of all of that being decided and relying on a person to circulate the notes via email later, it all happens on digital whiteboards in Confluence, on our pages, in goals in Jira, in our Slack channels, with our Loom note-takers. Just naturally having everything recorded digitally means it’s not on the humans to make sure the data gets put into the lake, right? We’re not having to dump everything in the lake; the lake just exists, and the right information gets put in there. And then really good rituals around culling and archiving pages, you know, sunsetting things.
    Tooba Durraze:
    I was gonna bring that up. Archiving is a really big one.
    Ashley Faus:
    Yes.
    Tooba Durraze:
    Archive, because… the worst example I’ve seen is people with chatbots on their sites where older blogs or older pricing weren’t archived, and the bot is just referencing all of that stale information. So I can’t stress that enough: archive your older materials.
    Ashley Faus:
    The governance, I think, when you think about the life cycle of your content, your data, your assets, that’s a huge piece of it. And then you mentioned earlier the shared definitions and shared terms. We see this a lot, right? There’s a really handy tool in Confluence, powered by Rovo, called AI definitions: you can basically just highlight something, and it’ll give a little pop-up like, do you want to leave a comment? And you can just click Define. But if the acronyms are not defined somewhere already, it’s going to be really hard for the AI to pull that, or if people are using terms in different ways, it’s really hard for it to have the context, even if it’s pulling from a bunch of different places, right? So, that full life cycle: it’s not just what goes into it, it’s also how long things stay, making sure things are up to date, and having that tagging of, hey, this is just a draft, it’s not approved, this is now approved, here’s the changelog, etc. That’s how we think of it. It really is that shift to making it easy for everything to get in, and not having these gaps where decisions are made offline, or made in the dark, and never brought to light for AI to then continuously shine that light on those decisions, that information, etc.
    Tooba Durraze:
    Absolutely. The one thing I’ll pick up from what you spoke about: we used to always think about the semantic layer on top of data lakes and ontologies, but there is a newer thing at play, which is basically your symbolic contract, your business contract: the definitions of how you, as a business, define certain things, which sits outside of standard data definitions. And again, everything needs a process; none of these things are one and done. You need a process for revisiting those definitions, etc., especially when you get into the world of automation and orchestration. These things need to be in place so everyone has a shared theme. I know we’re coming up on time, but there’s one piece you speak about that’s super interesting to bring up, which is taste being a differentiator in the AI era, when we’re creating all this information, all this noise. How should teams think about dividing the work between AI and humans, so that taste, judgment, and strategy are essentially not lost, or not over-indexed by the AI?
    Ashley Faus:
    I think the biggest thing when we talk about taste, discernment, etc., is actually being willing to have the discussion about what work should be done by humans and what work should be delegated to AI. Right now there’s still a little bit of fear that if you talk too much about AI, you’re somehow cutting out the humans, and I don’t think that’s the case. Having that transparent conversation about here’s where the humans shine, here’s where AI, or automation, or machine learning can shine, let’s split up the work, and thinking about delegating to it the exact same way you would delegate to any other teammate, that transparency helps you think about it: the humans decide what to build, why we should build it, what outcomes we’re trying to drive, and then we say, okay, AI might help with the more repetitive tasks, or the admin work. It might help us with the research, the copywriting, the grammar editing, right? There are tools that are great at that. And then the human comes in and makes sure it doesn’t sound too stilted, and that everything is actually aligned to the goals that were originally set. So it’s on the humans to decide what we’re building and what outcomes we’re driving, and what gets delegated to AI. That’s how you keep that taste and discernment: by having the humans in the full loop, but being transparent and honest about what makes sense to delegate to AI.
    Tooba Durraze:
    No, great. I see Amanda’s face keep popping up, Amanda Kohler. I know she’s on the panel after this, but one of the things she brings up a lot is superhumans: the idea that humans still need to steer. You get in a car, you still need to tell the car where to go. The outcome you want to drive still needs to come from the human, and then superhuman AI, or super AI, is essentially a pathway to getting us there. Okay, last question: if companies change just one thing in Q1 to keep up with this acceleration curve and drive this transformation within the organization, what should it be?
    Ashley Faus:
    I know I’ve said it multiple times, but I will reiterate, because it is that important: shift the language you use from “you must all use AI” to “how do we as a company, we as a team, think holistically about how we use AI?” That shift from personal productivity, hyper-focused on the individual, to systemic thinking, a holistic approach, building it into the workflows incrementally, that is the big shift. So, in terms of the actual change: shift your language to reflect the shift in mindset, goals, and ways of working, and underlying that is the shift in foundations that needs to happen over time.
    Tooba Durraze:
    Yeah, great. So you heard it here: the conversation is not “I,” it’s “we.” And for the leaders in the room, you’re leading a transformation, right? You’re not just going out and buying a hammer. The hammer is a tool, but if you have nothing to build, what are you using the hammer for? So think about things in terms of what you’re trying to drive and what systems you’re trying to change. Thanks so much, Ashley, this was a great conversation.
    Ashley Faus:
    Yeah, thanks so much for chatting with me, and for bringing your PhD-level knowledge and validation to the report as well. It was great to chat.
    Julia Nimchinski:
    What a phenomenal session. Thank you so much, Ashley. Thank you, Tooba. How can our community best support you?
    Tooba Durraze:
    Find me, connect with me on LinkedIn, and then, yeah, if you want PhD data-science-level insights on your data, book a demo with us. We’re always happy to show you how the world’s first neuro-symbolic AI platform can sit on top of your data and get you all the best things out of it.
    Julia Nimchinski:
    Awesome. Ashley, how about yourself?
    Ashley Faus:
    So, highly biased, as everyone else here is: connect with me on LinkedIn. I also have a book out called Human-Centered Marketing: How to Connect with Audiences in the Age of AI, so if you want to dig more into the human side, I have that. And then, obviously, check out the free report. It’s ungated, you can get it with no friction at all: the Atlassian AI Collaboration Report.
    Julia Nimchinski:
    Love it.
