
Context Graphs: The Next Trillion-Dollar AI Category — Fireside Chat with Daniel Davis & Kelly Hopping

AI Summit Held March 24–26
Disclaimer: This transcript was created using AI
  • Julia Nimchinski:
    Kelly Hopping joins us to lead the discussion: CMO of 6sense, previously Demandbase, and one of the sharpest operators in B2B marketing transformation. And she’s joined by Daniel Davis, co-founder of TrustGraph and the author of the Context Graph Manifesto. Pleasure to host you. Can’t help but ask you: what’s in your agentic OS, Daniel and Kelly?

    Daniel Davis:
    What’s in my agentic OS?

    Julia Nimchinski:
    Yep. What’s your favorite tool lately?

    Daniel Davis:
    Well, TrustGraph, of course.

    Julia Nimchinski:
    I knew that. Kelly, how about yourself?

    Kelly Hopping:
    Well, you know, I’m not super creative. I mean, I’m digging in more and more with Claude. My company uses Copilot, and I’m immediately using my, like, secret phone versions of Claude and everything else so that I can feed those in. But I’m not doing a lot in the world of building agents. We just did a workshop the other day for an hour and a half to figure out how to build agents for all of our common workflows. So, we’re in the middle of that.

    Julia Nimchinski:
    Awesome. Great to have you. Excited to dive in. Kelly, take it away.

    Kelly Hopping:
    Awesome! Well, thank you, Julia. I appreciate you having me. It’s fun to see Amos here; I think I hosted him on the last one a few months ago, so it’s fun to see him back. But yes, I am excited to host today’s conversation with Daniel Davis. As Julia said, he’s the author of the Context Graph Manifesto, which is really a deep exploration of why the next evolution in AI depends not on bigger models, but on better context. So, I’m excited to dive in with him. We previewed a lot of this, and I got excited going through all of it, so I’m looking forward to seeing how our full story comes to life today. So let’s set the stage, starting at the beginning. What’s the origin? You write that TrustGraph emerged after more than two years of work, grounded in the idea that graph structures are essential to unlocking LLM potential. What first led you to recognize graphs as the missing piece?

    Daniel Davis:
    Well, I think a lot of this comes from our experience with graph technologies, and knowing that this was the vision going back 60 years. You can actually see the first papers on semantic networks and semantic graphs written in the 1960s, with a lot of this work iterated on in the 70s, 80s, and 90s. And weirdly, in the 2000s, it seemed to taper off. My theory is that as software got more mature and more capable, and machine learning started to take off, people just focused on the building aspect, the mechanics of writing and shipping software. Those ideas of graph structures and semantic networks, which really fall under the neurosymbolic AI camp, started to get pushed to the wayside. With the Geoffrey Hinton deep learning camp winning the first battle in the LLM wars, if you want to call it that, neurosymbolic AI and graphs really got pushed aside. But because of our experience in various, hmm, how shall I say, government endeavors, we know the potential for structuring information in these richer structures. So we were always looking at this going, you know, we’re strong proponents of the neurosymbolic AI approach, and we think it’s going to come full circle, and that’s exactly what we’re seeing right now. We’re now seeing people like Yann LeCun very vehemently saying: LLMs, this is it, this is what you get, this is their full capability. The theory was, you throw mountains of data at these neural nets, and if it’s enough data, they will just magically work out what is fact and fiction, what the right response is, what the next token should be. Those who followed the neurosymbolic AI approach always disagreed with that. And now that we’re starting to see the limitations of LLMs as people try more and more complex tasks, people are coming around to: hmm, wait a minute, maybe there’s a combination of approaches needed here, and the neurosymbolic AI approach does have value. So much of that happened at the end of 2025 that all of a sudden the hive mind engaged, seemingly simultaneously. I’ve had so many people from so many organizations say that around November and December, all of a sudden, the word context was everywhere: we need more context, we need better context. And over the holidays, the term context graph went viral. It actually had a viral moment, and the history of that term in itself is also very interesting.

    Kelly Hopping:
    Yeah, well, I mean, I think there’s a lot to it, and that’s the pivot we’re in right now; that reinvention seems to be happening like crazy. So your manifesto actually pushes back hard on how the industry frames this space. You said you’re not particularly fond of terms like RAG or GraphRAG, because they oversimplify the challenge. What do you think these labels do to limit how teams are thinking about AI context? Is there a limitation? Is RAG not enough? What is the term that should be there, that actually, accurately represents it? Is it the context graph? Is that the term you think fully gets there, or do you think we’re only part of the way there?

    Daniel Davis:
    I think we’re only part of the way there. My argument has kind of evolved to say that this falls under context engineering, which to me really is a parallel approach. I saw a great chart not too long ago that I think Bessemer, the venture fund, had created, called the Data 3.0 era, the lakehouse era, and it had all the categories and all the products that enterprises use today. It’s an absolute eyesore: a full page with hundreds of different technologies stacked into categories that are ill-fitting. What it makes you realize is that almost all enterprises not only have one of those products in each category, sometimes they have two or three, and these things are all stitched together in ways that even the data teams themselves don’t understand. And, most importantly, all those technologies are built on row-columnar data. Now we’re talking about semantic structures, not just key-value pairs and numbers and a label, and those structures don’t really fit anywhere in that eyesore of a chart. It’s square peg, round hole, and that’s why I think we’re going to be looking at recreating potentially all those different categories, because all those categories of products are there for a reason, in a more context-focused approach designed to work specifically with AI. And I do believe it’s going to happen in parallel, because if you’ve ever tried to sell a data product into enterprises today, it isn’t plug and play; it doesn’t just magically connect to all the other products in their estate. It’s pretty much an impossible sell.

    Kelly Hopping:
    Yeah. So I don’t know how folks on the call are feeling, if this feels sort of overwhelming. To me, this goes super deep and technical, and I think: okay, but what is it we’re trying to solve? So, let’s shift a little bit from abstract…

    Daniel Davis:
    It’s overwhelming.

    Kelly Hopping:
    It is!

    Daniel Davis:
    And I think there’s some intentional overwhelming of the topic. That’s why we’ve tried to create a lot of content to level set and to simplify what context graphs even are. For that term, I’ve tried to create a very simple definition, and that is: graph structures that are optimized for usage with AI technologies. That is a gross simplification, but that’s the way we think about it, in that it still falls under the umbrella of knowledge graphs, which is a graph structure. There are going to be some people who disagree with that, but when you look at it mathematically, when you look at it structurally, these things are all graph structures, whether it’s knowledge graphs or DAGs, you know, directed acyclic graphs, which the data people work with all day long. These are just graph structures. They’re sets, sets of information; it’s how those sets are connected, it’s which way the arrows point. They’re all really talking about the same thing. And unfortunately, when we go back to that eyesore of a data chart that Bessemer created, you have a lot of products that have been very opinionated about how they do what they do, and it seems like they’ve all done the same thing in different ways, which does make it very, very complicated.
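
    To make that point concrete, here is a minimal Python sketch: a knowledge graph with labeled edges and a DAG both reduce to sets of nodes and directed arrows. The example data is invented for illustration and is not TrustGraph’s data model.

    ```python
    # "It's all graph structures": a knowledge graph and a DAG are both just
    # sets of directed edges; only the labels and constraints differ.
    # All names here are invented for illustration.

    # Knowledge graph: edges carry semantic labels (subject, predicate, object).
    knowledge_graph = {
        ("The Crown", "is_a", "pub"),
        ("The Crown", "serves", "craft beer"),
    }

    # DAG, e.g. a data pipeline: plain directed edges, "which way the arrows point".
    dag = {
        ("extract", "transform"),
        ("transform", "load"),
    }

    def nodes(edges):
        """Every endpoint mentioned in an edge set, regardless of edge arity."""
        return {end for edge in edges for end in (edge[0], edge[-1])}

    print(nodes(knowledge_graph))  # {'The Crown', 'pub', 'craft beer'}
    print(nodes(dag))              # {'extract', 'transform', 'load'}
    ```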

  • Kelly Hopping:
    Yeah, yeah, for sure. Well, let’s think about the benefits. Let’s dumb it down to the simplest version of what this means. So you highlight that knowledge graphs often overwhelm LLMs: they’ve got excess data, verbosity, irrelevant information, whereas these context graphs actually bring relevance, relevance scoring, conciseness, semantic clarity. You brought in some of these. In layman’s terms, what are the really practical benefits that stand out when moving from the overwhelming knowledge graphs to these context graphs specifically?

    Daniel Davis:
    Well, this is, again, where I probably have a different opinion than many other people in the space, in that we actually advocate for what we call the monograph: a single graph structure. That’s really the power of graphs: when you’re able to get lots of information into the graph, you’re able to find these unique relationships that you didn’t know existed, that discovery process. At query time, you’re able to extract subgraphs, you know, smaller sets of the graph. Those are ephemeral, so some people say those are context graphs. We don’t agree with that; we would say the graph system as a whole is a context graph. Again, we’re trying to take a simpler approach to this. To answer your question, though, there’s still a lot of work to be done in this space. One of the reasons we started going down this road over two years ago is that we noticed that if you give an LLM information in a graph structure with triples, triple notation, whether it’s Cypher or RDF, the responses change dramatically from if you just give it a sentence. We actually did a lot of experiments with this. We would extract information from the graph and ask: how do we structure it? Do we try to rewrite it as a paragraph? As a sentence? Do we do bullet points? Arrows, and all sorts of things like that? And we found that actually putting it in the graph structure, the code graph structure, generated the best responses. We started really diving into this, and we realized: to an LLM, there’s information encoded in that graph structure, because that’s machine language, and LLMs speak machine language, you know, code, just as well as they speak human language. It interprets that and sees it as: well, this is RDF, or this is Cypher, and this structure itself has meaning. It means these are nodes and edges in a graph; it means these things are related, whereas a human wouldn’t necessarily read it that way. So there’s still so much work to be done on the optimal ways to structure the context for the LLMs when we actually extract it from the graph itself.
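
    A rough illustration of those formatting experiments: the same extracted facts handed to an LLM as prose versus as raw triple notation. The facts, namespace, and prompt wording below are invented for illustration; TrustGraph’s actual extraction and notation may differ.

    ```python
    # The same extracted facts, formatted two ways before being placed in an
    # LLM prompt. Everything here is illustrative, not TrustGraph's pipeline.

    facts = [
        ("TheCrown", "isA", "Pub"),
        ("TheCrown", "serves", "CraftBeer"),
        ("TheCrown", "locatedIn", "London"),
    ]

    # Option 1: rewrite the subgraph as prose.
    as_prose = "The Crown is a pub in London that serves craft beer."

    # Option 2: keep Turtle-style triple notation, whose structure itself
    # signals "these are nodes and edges" to a model that has seen RDF.
    as_triples = "\n".join(f"ex:{s} ex:{p} ex:{o} ." for s, p, o in facts)

    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{as_triples}\n\n"
        "Question: Where can I drink craft beer?"
    )
    print(prompt)
    ```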

    Kelly Hopping:
    Gotcha. Yeah, I mean, I think that’s where a lot of this confusion lies, right? It’s really: how do you adapt? You talked about trying different formats, trying different things. Like, it’s been open source for, what, 18 months or something?

    Daniel Davis:
    Oh, I guess we open-sourced TrustGraph in 2024, I believe it was?

    Kelly Hopping:
    Yeah, okay.

    Daniel Davis:
    Twice.

    Kelly Hopping:
    Yeah, a year and a half, maybe? So what have you learned from some of that? Because I think we have the balance of how users are applying this in real-life deployments, but then also, how do enterprises adopt these kinds of different data structures? Is there resistance on the enterprise side? We want to democratize it through open source, but we also have to drive enterprise change management. What are your thoughts on some of that: mindset, company investment, how we should use this?

    Daniel Davis:
    Right now, there’s a lot of resistance in the enterprise to a little bit of everything. One of the things that happened over the last five to seven years is most enterprises did very large data transformation projects. Remember that whole lakehouse era, Data 3.0? Well, they’ve spent the last five to seven years getting there. And now, here come all the people in AI saying: whoa, to get this magical result from AI, we need to transform your data again. And the enterprise is going: nope, nope, nope, I’m out, not gonna do it, not gonna spend any more money on this. So that’s point of resistance number one. Point of resistance number two for any enterprise, as anybody who’s worked with enterprise data products knows: who are the people that are really the gatekeepers? It’s not going to be the product team, it’s not going to be the data team, it’s going to be those pesky lawyers and compliance people with all those data privacy requirements, and data retention, and data cataloging, and GDPR, and CCPA. And all of a sudden, when you just stitch LLMs together, they’re going: we can’t comply with any of this. And now we’re going to have to do graph structures? Do we have products for that? Are our data cataloging products going to work with this? And the answer to that is: sort of. And this is kind of my point: almost all the existing data products in the enterprise data estate are designed around row-columnar data. So when you start introducing graph structures, there are a lot of other things that have to come with them. You’re probably thinking: but wait a minute, I know enterprises use graphs; why don’t they have all these other products? Well, graphs tend to be used in very siloed applications in most enterprises. What are the big use cases? Fraud detection, KYC, AML: those are your big graph use cases, and those teams tend to be very siloed in what they do. The other big use case is cybersecurity: a lot of security event detection, anomaly detection. Again, very siloed organizations in the enterprise that aren’t necessarily dealing with customer data and PII in a direct way. So these products have existed in silos that didn’t necessarily have to adhere to all the other data rules. But now, for us to roll out AI in an enterprise across the entire data estate, that eyesore of a chart with all those categories of products applies.

  • Kelly Hopping:
    Yeah, yeah, for sure. And I’m sure this is going to continue, but I guess it’s the evolution AI has been through all along: consumers adopt it first, and then it goes into the enterprise. So, it’ll be interesting to see how this continues to evolve. One of the other things you said in your manifesto is that the biggest concern for AI teams today is reliability, especially hallucinations, and I love that term, hallucinations. I know that context graphs use mechanisms like token efficiency, provenance tracking, and relevance scoring to reduce these hallucinations, but which of these have actually had real-world impact? Where have you actually seen the needle move on reducing hallucinations that way?

    Daniel Davis:
    I think provenance tracking is the one most people are focusing on right now. There have been a lot of people working on AI reliability and explainability for many years, so a lot of people have come before us on this as well. But because people were still just getting the ball rolling, they didn’t really focus on that part so much then. Now that people have run into these problems, they’re thinking: oh wait, this reliability and explainability part, maybe that’s what we’re looking for. So these ideas have come full circle in a lot of ways, and that’s the one we’re seeing a lot of people really excited about: using ontologies to track how we came to a response. You know, I asked a question; how did I get this response? We actually just debuted a video on YouTube this weekend called Context Graphs in Action, with the TrustGraph co-founder, Mark Adams. We loaded some data into a context graph for pubs and event spaces in the London area, and we asked two really simple questions. First: where can we drink craft beer? And then: what pub serves craft beer? To us, you know, if I were standing beside Mark and asked those questions, that’s the same question in my mind. I’m asking the same thing, and I would expect the same answer. But when you really dissect it semantically, they’re very different questions. In our demo, we show that the question “where can we drink craft beer?” flags about 18 different concepts, whereas if I ask what pub serves craft beer, it only flags about 2, because I’ve dramatically limited it to just pubs. If I don’t limit it to pubs, all of a sudden the graph starts extracting things like beer festivals, beer gardens, random events, restaurants, hotel bars, all sorts of things, whereas…

    Kelly Hopping:
    Yeah.

    Daniel Davis:
    Right, things that I wouldn’t even have thought about. But when I look at it that way, with the explainability, I go: well, that’s true, those are valid. I just didn’t think about it that way. So these two seemingly innocuous, similar questions actually have very different explainability paths when you really start dissecting the hard, cold semantics of the words I used.
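
    A hedged sketch of why the broad question fans out wider than the narrow one when resolved against a graph. The schema and data below are hypothetical stand-ins, not the actual ontology from the demo; the point is that the word “pub” adds a type constraint to the query.

    ```python
    # Hypothetical mini-graph of London venues; the ex: schema is invented.
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
        @prefix ex: <http://example.org/> .
        ex:TheCrown    a ex:Pub ;          ex:serves ex:CraftBeer .
        ex:HopFestival a ex:BeerFestival ; ex:serves ex:CraftBeer .
        ex:LobbyBar    a ex:HotelBar ;     ex:serves ex:CraftBeer .
    """, format="turtle")

    # "Where can we drink craft beer?" -- any venue connected to craft beer.
    broad = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?venue WHERE { ?venue ex:serves ex:CraftBeer . }
    """)

    # "What pub serves craft beer?" -- "pub" adds a type constraint.
    narrow = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?venue WHERE { ?venue a ex:Pub ; ex:serves ex:CraftBeer . }
    """)

    print(len(broad), "matches for the broad question")   # 3
    print(len(narrow), "match for the narrow question")   # 1
    ```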

    Kelly Hopping:
    Yeah, yeah, that makes sense. I mean, again, back to even just core AI and its evolution: you realize the more granular you get, the context you set makes a big difference. So, you’ve said that you think this is an inflection point for context graphs in general, but one where the mainstream founders, investors, the market, are starting to notice. I think you even referenced that it could be as much as a trillion-dollar industry. What makes you believe that the market today is actually ready for this, that the average investor out there could capitalize on it?

    Daniel Davis:
    Well, because we built an open-source technology that does that? Is that too crass of an answer? But we do! We built this as open source for this very reason. We built this open-source technology, which, in a lot of ways, is its own operating system; it has all the tools that you need to do this. So I guess, as the old expression goes, the proof is in the pudding. We made TrustGraph open source exactly to answer that question. Anybody who wants to implement and use context graphs and see explainability: it’s there, anybody can take it, it’s production grade. It can be deployed in any environment: any cloud, bare metal, NVIDIA, AMD, even Intel Gaudi. We’ve designed this and open-sourced it to be as transparent as possible, and to answer these questions for people. This is why we do our YouTube videos, which I’d be happy to share the link to; anybody can ask me for it. We want to show people how anybody can use it, and really try to make it as simple as possible, because, honestly, we’ve worked with graphs for a long time, and we will be the first people to tell you: graphs are not easy to work with. That’s one of the reasons we’ve been working on this for two years, to solve a lot of these problems that we had ourselves. In a lot of ways, we’ve been very selfish in how we’ve designed TrustGraph, because we’ve just been trying to solve our own problems of what it takes to work with these systems. We really have. Most everything of value in TrustGraph has come from me complaining about something. I mean, you can ask Mark; he’ll probably tell you this is why he gets asked questions about pubs and things, because of all my complaining. Because I run into something and go: well, this is hard! This should be easier. Why do we have to do it this way? And I complain and complain and complain, and we go: well, maybe there is a solution. That’s been the last two years of developing TrustGraph: looking at all the ways that working with graphs has been hard in the past and, as you said, democratizing it, making it simple and seamless, so that you could deploy an entire agentic system with TrustGraph and not really know that it’s a context graph system under the hood. You could just be wowed by the magical outputs.

    Kelly Hopping:
    I mean, that’s probably the best way to build something: to actually ask, what could it solve for me? Because if I’m having this challenge, so are probably a thousand other people. These are not unique problems you’re trying to solve, and I think that’s what makes a better open-source product. So I think that’s great. There are some questions in the chat from people wanting to find out more. How exactly do they work with it? Are there additional resources? Are there great books out there to read, other than your manifesto, that you would recommend? You were just recommended to write a book called TrustGraph for Dummies so that people can learn it, but is there a TrustGraph book somewhere, or something that people…

    Daniel Davis:
    No, no, no, no.

    Kelly Hopping:
    …can reference?

    Daniel Davis:
    No, I think Mark would say that he should write TrustGraph for Dummies as a letter to me. But anyway, I digress. Learning about graphs is not easy. If I didn’t have Mark, I would not have been able to learn the topic. There is so much to graph theory, graph structures, semantics; it’s a very tribal discipline. If you don’t happen to know somebody who has that experience and expertise, it’s a very hard domain to learn. It’s hard to visualize, it’s hard to do demos. The Context Graphs in Action video, I can promise you, was months in the making to get it to a point where it presents something in a way that’s quickly digestible and understandable. So that’s actually one of the things we do a lot with TrustGraph: YouTube videos, the guides on our website. We’ve tried to democratize this and be educational. Matter of fact, years ago, I was really struggling to understand RDF, which our graphs are based on; that’s the semantic web version, kind of the counterpart to Cypher. And Mark actually wrote this really silly guide using his cat, Fred. I’m gonna make Fred famous, Mark. I’m going to. It’s three statements: Fred is a cat; Fred lives with Hope, who’s another cat, by the way; and Fred has four legs. And it’s pages and pages of how you take those three statements and they become a graph structure. If it weren’t for that guide, I never would have understood this stuff. We actually have it published on our documentation site for anybody else who’s interested, because if you just go to the W3C’s page on RDF and start reading, you’re just gonna go: what? And not only that, there are so many different styles of RDF: there’s RDF/XML, there’s N-Triples, there’s JSON-LD, there are about seven or eight others, and then there’s the most human-readable one, called Turtle. Yes, Turtle, as in a turtle. And that one is actually almost impossible to find good sources on for the correct syntax. So it’s challenging to learn. We’ve tried to publish as much as we can to help people understand these topics, because, as we said, context graphs are going to be instrumental in realizing the promise of neurosymbolic AI to complement deep learning, and we want people to understand these topics well and realize that they are complicated, but maybe not as complicated as some people make them out to be.
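
    For the curious, here is roughly what those three Fred statements look like as Turtle triples, parsed with Python’s rdflib. The ex: namespace is made up for illustration; the actual guide on the TrustGraph documentation site may model them differently.

    ```python
    # The three "Fred" statements as Turtle, parsed into a graph with rdflib.
    # The ex: namespace is invented; the real guide may differ.
    from rdflib import Graph

    turtle_data = """
    @prefix ex: <http://example.org/> .

    ex:Fred a ex:Cat ;           # Fred is a cat ("a" is shorthand for rdf:type)
        ex:livesWith ex:Hope ;   # Fred lives with Hope
        ex:numberOfLegs 4 .      # Fred has four legs
    """

    g = Graph()
    g.parse(data=turtle_data, format="turtle")

    # Each statement is now a (subject, predicate, object) triple: an edge.
    for subject, predicate, obj in g:
        print(subject, predicate, obj)
    ```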

    Kelly Hopping:
    Gotcha. Well, a few thoughts on that. One, when we’re done, I would love for you to put the YouTube link you referenced earlier into the chat, and I think Julia can get it over to everyone in the audience; there have been a few people asking for that. And then, I think there’s a great idea there; I couldn’t tell if you were saying yes, but that Fred the Cat scenario... I’m a marketer, right? So I like a good architecture infographic, anything that dumbs it down into pictures. I think that would make a great infographic or animated video on how to understand these graphs in a really simple way. You could actually show those three sentences becoming a graph, and how that informs your AI. So I think that’s a great one. But as we transition…

    Daniel Davis:
    Well, I think we can actually get… I’ll get Mark to get Fred to actually be in the video.

    Kelly Hopping:
    There you go!

    Daniel Davis:
    Fred, actually, if you’ve ever been on a call with Mark, loves to jump on his keyboard in the middle of meetings.

    Kelly Hopping:
    Awesome. I love it. Okay, one last question; I know we’re coming up on the end here. If you were to revisit this manifesto two years from now, what do you hope will have evolved, whether it’s tooling, industry understanding, context, whatever? In 30 seconds or less, what does that look like in two years?

    Daniel Davis:
    I would hope it’s the understanding that it’s all graphs. Whether it’s DAGs, whether it’s knowledge graphs, whether it’s context graphs, these are all just graph structures, and they’re all very similar in their own ways. We don’t need to implement them 27 different ways; we can make them work together much more harmoniously.

    Kelly Hopping:
    Awesome. Thank you so much, Daniel. I loved the conversation. You set my mind reeling in so many different directions that I didn’t even know I needed to go, so I appreciate that. Thank you for your research and your build here, and I’ll hand it back to Julia.

    Julia Nimchinski:
    Thank you so much, Kelly. Thank you, Daniel. We’re receiving a lot of messages from the community. What’s the best way to support you? Lots of people are asking about your website, Daniel. Where should they go to learn more?

    Daniel Davis:
    Well, the home for everything is trustgraph.ai. All the links should be there. There’ll be links to our GitHub page; everything TrustGraph is open source. There are links to our YouTube channel, which is just TrustGraph AI on YouTube, where I have a video called What is a Context Graph?, and then we have our latest video, Context Graphs in Action, which I made sure delivered: the first two minutes are literally just showing you a graph, and how we’re actually querying it and showing the graph structure. So it delivers on the promise.

    Julia Nimchinski:
    Perfect. Kelly, what’s the latest and greatest with 6sense? What’s your agentic offering?

    Kelly Hopping:
    Yeah, I mean, everything that we’re doing these days is really about automating the data and insights to help customers be able to find the right people faster, and then building agentic workflows to help them automate their go-to-market so that it becomes less button-pushing and more outcome-driven, so that’s the goal.

    Julia Nimchinski:
    Awesome. Thank you so much again.

    Kelly Hopping:
    And… Awesome.
