“The technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people.” –Marshall Kirkpatrick

About Marshall Kirkpatrick

Marshall Kirkpatrick is the founder of the sustainability consultancy Earth Catalyst and the AI thinking tool What’s Up With That. His many previous roles include founder of the influence network analysis tool Little Bird, which was acquired by Sprinklr, where he most recently served as Vice President of Market Research.

Website: whatsupwiththat.app
LinkedIn Profile: Marshall Kirkpatrick

What you will learn

- How generative AI transforms cognitive tools and lowers barriers to advanced thinking
- Techniques to combine human and AI-powered sensemaking for richer insights
- Practical strategies for filtering and extracting value from infinite information
- The importance and application of diverse mental models in modern decision-making
- Methods to balance manual cognitive work with AI assistance for optimal outcomes
- The role of adaptive interfaces in enhancing individual cognitive capacity
- Metacognitive approaches to networks and how AI can foster organizational awareness
- Ethical and societal implications of democratizing access to AI-powered cognitive enhancements

Episode Resources

Transcript

Ross Dawson: Marshall, it is awesome to have you back on the show.

Marshall Kirkpatrick: Oh, thank you, Ross. It’s such a pleasure to be reconnecting with you here. Thanks for having me on.

Ross Dawson: So you were on very, very early in the podcast, back when it was Thriving on Overload and the interviews fed into the book, and some of the wonderful things you were doing got incorporated into Thriving on Overload. So I think today, in this world of generative AI, which has transformed everything, including the way in which we think, the Thriving on Overload themes are still super, super relevant, and in a way, we need to be talking about them more.
That theme at the time was finite cognition, infinite information. How do we work well with it? I don’t know if our cognition has become more finite, but the information has become more infinite, and there’s just more and more. But also, it cuts two ways, as in, what is the source of all the information? AI is also a tool. So anyway, let’s segue from some of your cognitive thinking tools, technology-enabled cognitive thinking tools and so on, which we looked at. So how do you—where are we? 2026, what do you think about human cognition in our current universe?

Marshall Kirkpatrick: Well, especially when you frame it up in Thriving on Overload terms. I mean, those were four, five long years ago that we last spoke, and the book that came out of it was just fantastic. I think it has some timeless qualities, and I think that the technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people. That’s what I hope. I think that, yeah, between individuals and organizations, there’s so much that, historically, someone like you or me or the people closest in our networks were willing and able to do and excited to do, that many other people said, “That sounds like a lot of work.” The bar is lower now, because a lot of just the raw cognitive processing can be outsourced into a technology that serves as a lever.

Ross Dawson: Well, I mean, that idea of levers for these cognitive tools is interesting. I guess, the very crude way of saying it is, we’ve got inputs into our human brain, and then we are processing information. I’m just thinking out loud a bit here, but it’s like, okay, we have tools to be able to filter, to present, to find what is most relevant, to present it to us in the ways which are most useful—very obvious, like summarization, visualization.
Then as we are processing it ourselves, we have dialog, or we can have interlocutors who we can engage with and be able to refine and help our thinking. Does that sort of make sense, or how would you flesh that out?

Marshall Kirkpatrick: Yeah, I mean, when you put it that way, it makes me think about Harold Jarche and his Seek, Sense, Share model, right? I think that AI, especially when connected to things like search and syndication and other traditional technologies, can impact all three of those stages. It can hypercharge our search. I think the archetypal example of that, on some level, feels like the combinatorial drug research being done, where just an otherwise cognitively uncontainable quantity of combinatorial possibilities between molecules can be sought out and experimented with for a desirable reaction. And then that sensing, or the pattern recognition that AI is so good at, is something that we do as humans—some of us better than others—and it’s a lifelong muscle to build and what have you. But the AI is really, really good at it, and so it’s a ladder to climb up in some of that sensing. And then the sharing component becomes so much easier with the rewriting capabilities—turn A into B, reformat something into a summary or a set of bullet points, or ideas and words into code. AI is just so excellent for that translation that makes new levels of sharing possible.

Ross Dawson: That’s fantastic. Yeah, I had Harold on the show back in the Thriving on Overload days. But you’re right, that’s extremely relevant. Let’s dig into that. I love that you brought up that combinatorial search, which is so important. As opposed to going into Perplexity to do a search, it’s far more interesting to find the uncovered connections between things, which are relevant to what you’re doing. And that’s—

Marshall Kirkpatrick: Absolutely. I remember reading, years ago, Dan Pink’s book “A Whole New Mind,” which preceded the generative AI era.
But he said, if your kind of work is something that’s easily reproducible by computers, good luck to you. You really are going to need uniquely human practices in the future, and what exactly those are, I’m not sure, because the one that he identified, I don’t think has proven to be uniquely human. But I really appreciated learning about it from him, and that was what he called symphonic thinking, or the ability to draw connections between seemingly unconnected phenomena. So for many years, I have been doing a personal exercise with pen and paper that I call triangle thinking, where I’ll take three different phenomena—maybe that’s the owl outside my window, one of the notes that I’ve taken on paper, and something I come upon on the internet, or maybe it’s three very deliberately related things. I label them A, B, and C, and I ask, what might A have to say about B? What might B offer to A, and vice versa? I write out the six unidirectional connections between those things. And without fail, one, two, or three of those end up being real keepers, where I say, “Aha, that’s a really interesting idea. I’m going to take action on that.” And now, by the time I’ve got the letter B written out, an AI has done that ten times over. I like to do it both ways—still both AI and with my naked brain—but that combinatorial ideation, the generative combinatorial ideation, is, yeah. I’m curious what your thoughts and experience and hope for that might be.

Ross Dawson: Well, there’s a prompt I use called “Apply Diverse Thinking,” where it generates extremely diverse perspectives on a topic—who might those very unusual people to think about something be, and then what would they think about this particular situation? Of course, there are a whole array of different thinking tools. There’s Marshall McLuhan’s tetrad, which is a little bit similar to your thing where, again, you can and should do it—well, not manually. What’s the manual equivalent of brain?
Marshall Kirkpatrick: Thoughtfully, perhaps. Yeah, good one—deliberately, manually. I mean, Azeem Azhar over at Exponential View uses a fountain pen and paper and will sometimes have his team come online and they’ll do two-hour thinking sessions with no AI allowed. They just get on, I believe, Zoom, and just think through things with pen and paper, individually and together. And then they’ll kick off OpenAI or what have you, and use all the tools afterwards.

Ross Dawson: Yeah, well, a couple of things. Actually, research has shown that in brainstorming, it is better for everyone to ideate individually before doing it collectively. And of course, that’s unaided. I think there are analogs there where—actually, one of the frameworks I just released last week was basically to say, think it through for yourself before you ask the AI, because then you have a reference point. If not, you don’t have a reference point to say, “Well, what am I expecting it to do? Let me think it through for myself,” even if it’s just a little bit, as opposed to just going in blank—”All right, give me an answer.” Just that simple thing of thinking through for yourself first is enormous. What it does is, obviously, give you a reference point for that. And I’m going on a lot about appropriate trust at the moment—as in, trust the AI enough, but not too much, which I think is an absolutely critical capability. And part of it is being able to say, “Well, this is what I think it should be giving me.” Now you have a reference point for what it give
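The six unidirectional connections in Marshall’s triangle-thinking exercise are simply the ordered pairs of three items, and can be sketched in a few lines of Python. This is a minimal illustration only; the function name and prompt wording are our own, not something from the episode or Marshall’s actual practice.

```python
from itertools import permutations

def triangle_thinking(a, b, c):
    """Enumerate the six unidirectional connections of the exercise:
    for every ordered pair (X, Y) of the three items, ask what X
    might have to say about Y."""
    items = {"A": a, "B": b, "C": c}
    return [
        f"What might {x} ({items[x]}) have to say about {y} ({items[y]})?"
        # permutations of 3 items taken 2 at a time = 6 ordered pairs:
        # AB, AC, BA, BC, CA, CB
        for x, y in permutations(items, 2)
    ]

# Example with the three phenomena Marshall mentions
prompts = triangle_thinking(
    "the owl outside the window",
    "a note taken on paper",
    "something found on the internet",
)
for p in prompts:
    print(p)
```

Each of the six prompts could then be answered by hand, pen on paper, or handed to an AI for the rapid combinatorial ideation Marshall describes.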