Humans + AI

Ross Dawson

Exploring and unlocking the potential of AI for individuals, organizations, and humanity

  1. Michael Gebert on designing freedom, human self-determination, cognitive sovereignty, and systems of agency

    9 HR AGO

    Michael Gebert on designing freedom, human self-determination, cognitive sovereignty, and systems of agency

    “Freedom no longer exists outside the systems, and it depends on the design. Coming back to the design, it’s about understanding that we need to distinguish between intelligent systems and agency.” –Dr Michael Gebert

    About Dr Michael Gebert
    Dr Michael Gebert is Chairman of the European Blockchain Association and co-founder of AI Expert Forum. He works at the intersection of artificial intelligence, digital sovereignty, and institutional responsibility. His book 2079 – Designing Freedom is just out.

    Website: 2079.life
    LinkedIn Profile: Dr Michael Gebert

    What you will learn
    - How the concept of freedom extends beyond politics and economics to personal agency in an AI-driven world
    - Why cognitive sovereignty is essential for maintaining individual responsibility and accountability as intelligent systems become more pervasive
    - The shift from making decisions ourselves to designing the frameworks and conditions for decision-making with AI involvement
    - How to distinguish optimization from true human empowerment when integrating AI tools into personal and organizational life
    - Practical routines and metacognitive strategies for individuals to retain agency when collaborating with large language models and intelligent systems
    - Why organizational leaders must prioritize cognitive sovereignty and human potential early in AI deployment, not just technical efficiency
    - Insights into the challenges and importance of embedding frameworks for freedom and cognitive sovereignty within corporate, governmental, and policy structures
    - The critical need for ambassadors of freedom within institutions to promote reflection, ongoing discussion, and the integration of responsible AI practices across all levels

    Episode Resources

    Transcript
    Ross Dawson: Michael. It is awesome to have you on the show.
    Michael Gebert: Hey, great to be on the show. Thanks for having me.
Ross Dawson: So we connected first, probably around 15 years ago, and we were both involved in crowds, creating value from many people. And I think, you know, there’s one of the interesting points now is, I guess, you know, we still live in a world of many people. We’re trying to create collective value. AI is laid over that. So it’s interesting to see that journey from where we’ve come to where we are today. Michael Gebert: Absolutely, and I really remember visually when we first had contact about this very exciting topic of crowdsourcing and empowerment of the crowd, and really making people believe, not only in themselves, but really in communities. And therefore, not only strengths in terms of crowdfunding, crowd investing, their financial gains, but also being empowered in what they do. And this is a very fundamental, I would say, even a right for humanity to reflect on and do that. I think the methodology and technology back then helped a lot. And to be honest, I’m still partly involved in some of those efforts. Even the big crowdfunding platforms, also here in Europe and in Germany, are vital and really active. Of course, not in that dramatic media shift hype that we experienced, but they’re still there, and it proves that it’s a concept that should stay. Ross Dawson: Yep, absolutely. You know, there’s obviously collective intelligence, amongst other facets. But this goes to, I think, the frame of your new book, 2079, Designing Freedom. So freedom is an interesting word, and something which I hope we all aspire to. Michael Gebert: Yeah, you know, freedom, of course, is one of those very multifaceted words, right? It could be translated in a political context. It could be translated in an economic concept, meaning monetary-wise. 
It could be translated—and this is my translation—in a very personal, one-to-one reflection about how do I as a human being see myself in that surrounding, bombarded not only by information but by intelligent systems, basically AI as we describe them, and all that is behind those systems. Ross Dawson: So there’s a few things I want to dig into here. And I guess there’s another word there: designing. Obviously, at a societal infrastructure layer, we want to be able to design the systems whereby we can all individually have that freedom of choice in how we live our lives. Michael Gebert: Yeah, and not always, I would say, looking at the world geopolitically, of course, there is sometimes no choice. And if you are able to generate those choices, first of all by understanding how to design them, that’s a very good first step. So when I wrote the book, the prior part was basically a research paper I did, a small research paper also on ResearchGate. This is the foundation where I started thinking and reflecting. Basically, the core there is about a question that I think is becoming unavoidable now and for the future. The question is: if more and more cognition or judgment and action are delegated to intelligent systems, what has to be true for human beings in order to remain genuinely free? So the book is really about freedom, agency, responsibility, and at the end, about belonging in a world of increasingly disruptive intelligence. Ross Dawson: Yeah, yeah. So the word agency is obviously very much of the moment, in lots of ways. But I think human agency is absolutely critical. One of the central things you lay out in the paper, which I think is really, as you were saying a moment ago, is on everyone’s minds. You’re saying this idea of agency used to be about making decisions, whereas now, as you describe it, agency is shifting to authoring the conditions for decision making. 
So we’re not necessarily making the decisions ourselves, but we do control and guide the conditions, the context, or the structures for decisions so that we retain responsibility and accountability, and those decisions are the ones we would want. So how do we do that? Michael Gebert: Yeah, you know, the question before asking how is really to understand under what conditions do human beings remain authors of their lives when more and more of those decisions are shaped by, as you say, agency systems or whatever name they go by, whether fancy, new, or already existent. So the how—and it’s not about lifting a secret—is about going back to cognition and having that cognitive intelligence and cognitive roots, which are in us, but which, over the years—and you reflected on the last 15 years, especially the generation after 2008, meaning after the iPhone—have lost large parts of that ability, which is very human. So it’s not really a reshaping or something new. It’s also not a book advising how to; it is really a finger going up and saying, people, please remember that the deeper question is under what conditions do human beings remain genuinely free when more and more cognition, judgment, and action is to be owned back and not delegated to the systems. This is, of course, very formal in the need and in the demand, but especially, as you mentioned, when laying it out into organizations or government structures, it is hardcore policy and hardcore principle. You can write a lot of things in your genuine AI policies, but what I see right now is that in reality, first of all, nobody’s really reading them in depth. Secondly, there is really no reflection point on this cognition, judgment, and delegation. Therefore, this is really prior before any interest in how-to in terms of technology and what LLM to choose. This is really prior—it’s day zero—when you think about what’s going on, and when you think about how to position yourself, your company, and your team in there. 
Then this is the next step of thinking. Ross Dawson: So I want to come back to that, but I think one of the phrases you use is cognitive sovereignty, and this is in a context where one of the most shared papers recently is around cognitive surrender. Cognitive sovereignty is the opposite of cognitive surrender. But the reality is that in interacting with LLMs, it does change our cognition. Michael Gebert: As long as we, yeah, as long as we delegate cognition, basically. The auto effect is— Ross Dawson: Conversation with a human changes our cognition too, and I think we need to recognize that. So it’s not just conversing with LLMs. Conversing with a human changes the way we think, which is a good thing because we’re getting more diverse opinions. But obviously, LLMs are not humans, and while possibly that interaction could enhance our thinking, if we get some great ideas and different perspectives from an LLM, then we’re still retaining cognitive sovereignty. So let’s frame this: how do we as individuals get to cognitive sovereignty? What does that look like? Michael Gebert: Yeah. So first of all, I think we need to understand that when we delegate cognition to an AI, we redesign responsibility. This is indisputably non-negotiable. This is a fact. When you compare it to a human interaction, there is no default responsibility redesign necessary. It’s a reflection point, it’s a discussion. If it’s a good conversation, it’s uplifting for both ends. You go out of this conversation and you have, yeah, uplifted cognition. Surrendering cognition, as you said, is a very factual statement that brings a lot of views, but it’s basically raising the white flag and saying, I surrender. What I say is, no, it’s not time to surrender. It’s time to appreciate, and it is time to understand that freedom n

    38 min
  2. Marshall Kirkpatrick on cognitive levers, combinatorial possibilities, symphonic thinking, and compound learning

    8 APR

    Marshall Kirkpatrick on cognitive levers, combinatorial possibilities, symphonic thinking, and compound learning

    “The technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people.” –Marshall Kirkpatrick

    About Marshall Kirkpatrick
    Marshall Kirkpatrick is founder of sustainability consultancy Earth Catalyst and AI thinking tool What’s Up With That. His many previous roles include founder of influence network analysis tool Little Bird, which was acquired by Sprinklr, where he was last Vice President Market Research.

    Website: whatsupwiththat.app
    LinkedIn Profile: Marshall Kirkpatrick

    What you will learn
    - How generative AI transforms cognitive tools and lowers barriers to advanced thinking
    - Techniques to combine human and AI-powered sensemaking for richer insights
    - Practical strategies for filtering and extracting value from infinite information
    - The importance and application of diverse mental models in modern decision-making
    - Methods to balance manual cognitive work with AI assistance for optimal outcomes
    - The role of adaptive interfaces in enhancing individual cognitive capacity
    - Metacognitive approaches to networks and how AI can foster organizational awareness
    - Ethical and societal implications of democratizing access to AI-powered cognitive enhancements

    Episode Resources

    Transcript
    Ross Dawson: Marshall, it is awesome to have you back on the show.
    Marshall Kirkpatrick: Oh, thank you, Ross. It’s such a pleasure to be reconnecting with you here. Thanks for having me on.
    Ross Dawson: So you were on very, very early in the podcast, back when it was Thriving on Overload and it was interviews for the book, and some of the wonderful things you were doing got incorporated into Thriving on Overload. So I think today, in this world of generative AI, which has transformed everything, including the way in which we think, the Thriving on Overload themes are still super, super relevant, and in a way, we need to be talking about them more.
That theme at the time was finite cognition, infinite information. How do we work well with it? I don’t know if our cognition has become more finite, but the information has become more infinite, and there’s just more and more. But also, it cuts two ways, as in, what is the source of all the information? AI is also a tool. So anyway, let’s segue from some of your cognitive thinking tools, technology-enabled cognitive thinking tools and so on, which we looked at. So how do you—where are we? 2026, what do you think about human cognition in our current universe? Marshall Kirkpatrick: Well, especially when you frame it up in Thriving on Overload terms. I mean, those were four, five long years ago that we last spoke, and the book that came out of it was just fantastic. I think it has some timeless qualities, and I think that the technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people. That’s what I hope. I think that, yeah, between individuals and organizations, there’s so much that, historically, someone like you or me or the people closest in our networks were willing and able to do and excited to do, that many other people said, “That sounds like a lot of work.” The bar is lower now, because a lot of just the raw cognitive processing can be outsourced into a technology that serves as a lever. Ross Dawson: Well, I mean, that idea of levers for these cognitive tools is interesting. I guess, the very crude way of saying it is, we’ve got inputs into our human brain, and then we are processing information. I’m just thinking out loud a bit here, but it’s like, okay, we have tools to be able to filter, to present, to find what is most relevant, to present it to us in the ways which are most useful—very obvious, like summarization, visualization. 
Then as we are processing it ourselves, we have dialog, or we can have interlocutors who we can engage with and be able to refine and help our thinking. Does that sort of make sense, or how would you flesh that out? Marshall Kirkpatrick: Yeah, I mean, when you put it that way, it makes me think about Harold Jarche and his Seek, Sense, Share model, right? I think that AI, especially when connected to things like search and syndication and other traditional technologies, can impact all three of those stages. It can hypercharge our search. I think the archetypal example of that, on some level, feels like the combinatorial drug research being done, where just an otherwise cognitively uncontainable quantity of combinatorial possibilities between molecules can be sought out and experimented with for a desirable reaction. And then that sensing, or the pattern recognition that AI is so good at, is something that we do as humans—some of us better than others—and it’s a lifelong muscle to build and what have you. But the AI is really, really good at it, and so it’s a ladder to climb up in some of that sensing. And then the sharing component becomes so much easier with the rewriting capabilities—turn A into B, reformat something into a summary or a set of bullet points, or ideas and words into code. AI is just so excellent for that translation that makes new levels of sharing possible. Ross Dawson: That’s fantastic. Yeah, I had Harold on the show again in the Thriving on Overload days. But you’re right, that’s extremely relevant. Let’s dig into that. I love that you brought up that combinatorial search, which is so important. As opposed to going into Perplexity to do a search, it’s far more interesting to find the uncovered connections between things, which are relevant to what you’re doing. And that’s— Marshall Kirkpatrick: Absolutely. I remember reading, years ago, Dan Pink’s book “A Whole New Mind,” which preceded the generative AI era. 
But he said, if your kind of work is something that’s easily reproducible by computers, good luck to you. You really are going to need uniquely human practices in the future, and what exactly those are, I’m not sure, because the one that he identified, I don’t think has proven to be uniquely human. But I really appreciated learning about it from him, and that was what he called symphonic thinking, or the ability to draw connections between seemingly unconnected phenomena. So for many years, I have been doing a personal exercise with pen and paper that I call triangle thinking, where I’ll take three different phenomena—maybe that’s the owl outside my window, one of the notes that I’ve taken on paper, and something I come upon on the internet, or maybe it’s three very deliberately related things. I label them A, B, and C, and I ask, what might A have to say about B? What might B offer to A, and vice versa? I write out the six unidirectional connections between those things. And without fail, one, two, or three of those end up being real keepers, where I say, “Aha, that’s a really interesting idea. I’m going to take action on that.” And now, by the time I’ve got the letter B written out, an AI has done that ten times over. I like to do it both ways—still both AI and with my naked brain—but that combinatorial ideation, the generative combinatorial ideation, is, yeah. I’m curious what your thoughts and experience and hope for that might be. Ross Dawson: Well, there’s a prompt I use called “Apply Diverse Thinking,” where it generates extremely diverse perspectives on a topic—who might those very unusual people to think about something be, and then what would they think about this particular situation? Of course, there are a whole array of different thinking tools. There’s Marshall McLuhan’s tetrad, which is a little bit similar to your thing where, again, you can and should do it—well, not manually. What’s the manual equivalent of brain? 
Marshall Kirkpatrick: Thoughtfully, perhaps. Yeah, good one—deliberately, manually. I mean, Azeem Azhar over at Exponential View uses a fountain pen and paper and will sometimes have his team come online and they’ll do two-hour thinking sessions with no AI allowed. They just get on, I believe, Zoom, and just think through things with pen and paper, individually and together. And then they’ll kick off OpenAI or what have you, and use all the tools afterwards. Ross Dawson: Yeah, well, a couple of things. Actually, research has shown that in brainstorming, it is better for everyone to ideate individually before doing it collectively. And of course, that’s unaided. I think there are analogs there where—actually, one of the frameworks I just released last week was basically to say, think it through for yourself before you ask the AI, because then you have a reference point. If not, you don’t have a reference point to say, “Well, what am I expecting it to do? Let me think it through for myself,” even if it’s just a little bit, as opposed to just going in blank—”All right, give me an answer.” Just that simple thing of thinking through for yourself first is enormous. What it does is, obviously, give you a reference point for that. And I’m going on a lot about appropriate trust at the moment—as in, trust the AI enough, but not too much, which I think is absolutely critical capability. And part of it is being able to say, “Well, this is what I think it should be giving me.” Now you have a reference point for what it give
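Marshall’s triangle-thinking exercise above has a simple combinatorial core: three phenomena yield exactly six ordered (unidirectional) pairings, and each pairing becomes a question. As a rough sketch of how an AI could run the same exercise at scale, this enumerates those questions; the prompt wording and the idea of feeding the strings to a language model are illustrative assumptions, not Marshall’s actual tooling:

```python
from itertools import permutations

def triangle_prompts(a, b, c):
    """Enumerate the six unidirectional connections among three phenomena,
    phrased as questions to pose to a language model (or to yourself)."""
    return [f"What might {x} have to say about {y}?"
            for x, y in permutations((a, b, c), 2)]

# Three deliberately unrelated phenomena, as in the pen-and-paper version
prompts = triangle_prompts("the owl outside my window",
                           "a note taken on paper",
                           "an article found on the internet")

print(len(prompts))  # 6 ordered pairs, since P(3, 2) = 6
```

With three items this is trivial to do by hand, which is the point of the pen-and-paper version; run the same enumeration over dozens of notes and the question count grows quadratically, which is the kind of combinatorial ideation Marshall describes an AI doing "ten times over" before the letter B is written out.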

    40 min
  3. Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension

    1 APR

    Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension

    “Fiction has this unprecedented power in tech spaces. The more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer.” –Nina Begus

    About Nina Begus
    Nina Begus is a researcher at the University of California, Berkeley, leading a research group on artificial humanities, and the founder of InterpretAI. She is author of Artificial Humanities: A Fictional Perspective on Language in AI, which received an Artificiality Institute Award, and First Encounters with AI.

    Website: ninabegus.com
    LinkedIn Profile: Nina Begus
    Book: Artificial Humanities

    What you will learn
    - How ancient myths and archetypes influence our understanding and design of AI
    - Why the humanities—literature, philosophy, and the arts—are crucial for developing more thoughtful and innovative AI systems
    - The dangers of limiting AI concepts to human-centered metaphors and the need for new, more expansive imaginaries
    - How metaphors shape our interactions with AI products and the user experiences companies choose to enable
    - The challenges and possibilities of imagining forms of machine intelligence and language beyond human templates
    - Why collaboration between technical experts and humanists opens new frontiers for creativity and responsible technology
    - What makes writing and artistic creation uniquely human, and how AI amplifies—not replaces—these impulses
    - Practical ways artists, engineers, and thinkers can work together to explore new relationships and futures with AI

    Episode Resources

    Transcript
    Ross Dawson: Nina, it is wonderful to have you on the show.
    Nina Begus: Thank you for having me.
    Ross Dawson: You’ve written this very interesting book, Artificial Humanities, and I think there’s a lot to dig into. But what does that mean? What do you mean by artificial humanities?
Nina Begus: Well, this was really a new framework that I’ve developed while I was working on the relationship between AI and fiction, and I started working on this about 15 years ago when I realized that fiction has this unprecedented power in tech spaces. So this is how it all started, but then the more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer in this collaborative, generative approach that I’ve developed. I would say that now, as the field stands, it’s really a way to explore and demonstrate how humanities—as broad as science and technology studies, literary studies, film, philosophy, rhetoric, history of technology—how all of these fields can help us address the most pressing issues in AI development and use. And it’s been important to me that this approach uses traditional humanistic methods, theory, conceptual work, history, ethical approaches, but also that it’s collaborative and exploratory and experimental in this way that you can look back into the past and at the present to make a more informed choice about the future. You can speculate about different possibilities with it. Ross Dawson: Well, art is an expression of the human psyche, or even more, it is the fullest expression of humanity, and that’s what art tries to do. Also, I’m a deep believer in archetypes, human archetypes, and things which are intrinsic to who we are, and that’s something which you can only really uncover through the arts. Now we have arguably seen all these archetypes play out in real time, these modern myths being created right now in the stories being told of how AI is being created. So I think it’s extraordinarily relevant to look back at how we have depicted machines through our history and our relationship to them. 
Nina Begus: Yes, this is the reason why I started exploring this topic, actually, because there were so many ancient myths, these archetypal narratives that I’ve seen at the same time, both in technological products that were coming to the market and in the way technologists were thinking about it, and also in fictional products and films and novels in the way we imagined AI. I framed my book around the Pygmalion myth, but there are many, many other myths—Prometheus, Narcissus, the Big Brother narrative, and so on—that are very much doing work in the AI space. The reason why I chose the Pygmalion myth is because it’s so bizarre in many ways: you have this myth where a man creates an artificial woman, and then in the process of creation, falls in love with her. So there’s the creation of the human-like, and there’s also this relationality with the human-like. You would think this would not be a common myth, but quite the opposite—I found it everywhere I looked. It wasn’t called the Pygmalion myth, but the motif was there. I found it on the Silk Road, in ancient folk tales, in Native American folk tales, North Africa, and so on. So I think this kind of story is actually telling us a lot about how humans are not rational, how we have some very deeply embedded behaviors in us, and one of them is that we anthropomorphize everything, including machines. So I think this was a really important takeaway that we got already from the early days of AI with the first chatbot, Eliza. We’ve learned that that will be a feature of us relating to machines. Ross Dawson: So Joseph Campbell called the hero’s journey the monomyth, as in, there is a single myth. And I guess what you are doing here is—well, if you agree with that, which I’d be interested in—is that there are facets. The classic hero’s journey is quite simple, but there are facets of that monomyth, or something intrinsic to who we are, that is around this creation.
And in this case, as you say, this relation we have with what we have created. Would you relate that at all to Joseph Campbell’s work? Nina Begus: I haven’t thought about it in this way, because I thought about myths more or less as a storytelling issue, which here is definitely happening—the hero goes on a task, returns back changed, and maybe changes something in the community. The myths that I was looking into and the metaphors that I was exploring, primarily this huge metaphor of AI as a human mind, as an artificial reason—I think it works differently. It’s less of a narrative; it’s more of an imaginary of how or towards what we are building. I think this is a big problem, actually, because the imaginary around AI is very poor. What you get is mostly imagining machine intelligence on human terms, and a lot of people are bothered by that in the AI discourse—right, when you say the machine thinks, or the machine learns, or it has a mind, and some people go as far as to say it has consciousness. I think this kind of debate is actually not that productive. I think it’s more important to see how all these different AI products that we’ve created—and mostly when we talk about AI, people think of language models now—are very much designed as a sort of character, almost as an artificial human that, in literature, authors have been creating for a long time. So I think in that case, we can get back to a hero’s journey. But I think what I was looking at was actually more on the surface level of what kind of shortcuts we are using with these metaphors that we’re employing when building and using AI. I think the book makes a really good case showing that, yes, this is actually a very cultural technology. It’s very much informed by our imaginaries. One surprising part of it was really how hard it was to break out of this human mold. It was pretty much impossible to find examples of machines that are not exclusively human-like.
I think Stanislaw Lem is one of the rare writers who can consistently deliver this kind of imaginary. Even looking at more recent works, like popular films such as Hollywood’s Ex Machina or Her, you can see how the technologists themselves would say, “Oh, we were influenced by this film,” in a way that it affirmed their product development trajectory. You can see it now, at this moment, with OpenAI launching companionship. So in many ways, not a lot has changed. Ross Dawson: Yeah, there’s a lot to dig into there. I just want to go back—in a sense, Pygmalion is a metaphor, but it’s also a myth. It is a story: he creates a woman, and then falls in love with her, and then whatever happens from there. There is this, something happens, and then something else happens. That’s what a story is. I think that can impact the implicit metaphor, but coming back to the metaphor—so George Lakoff wrote the beautiful book Metaphors We Live By. I think the way the brain works is in metaphors and analogies to a very large degree. Some of those are enabling metaphors, and some of those are not very useful metaphors. I think part of your point is that some of the metaphors that we have for thinking about AI and machines are not useful. There may be, or we could create, some metaphors that are more useful. So, what are some of the most disabling metaphors, and what are some of the ones which could be more constructive? Nina Begus: Yes. So I think this main metaphor that I’ve mentioned—of AI as a human mind—is very limiting. I think it really limits the machinic potential to actually do something good with it. The fact that we’re still using the criteria that were made for humans, like different criteria developed on human language—the T

    35 min
  4. Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution

    25 MAR

    Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution

    “The center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment.” –Henrik von Scheel

    About Henrik von Scheel
    Henrik von Scheel is Co-Founder of advisory firm Strategic Intelligence, Chairman of the Climate Asset Trust, Vice Chairman of Regulatory Intelligence Committee, and Professor of Strategy, Arthur Lok Jack School of Business, among other roles. He is best known as originator of Industry 4.0, with many awards and extensive global recognition of his work.

    Website: von-scheel.com
    LinkedIn Profile: Henrik von Scheel

    What you will learn
    - Why human-centered AI is crucial for widespread societal prosperity
    - The impact of AI hype cycles, media narratives, and the realities of technology adoption
    - How equitable wealth distribution and capital allocation in AI can shape economic outcomes
    - Risks around data ownership, privacy, and the importance of controlling your own data in the AI era
    - Divergent approaches to AI regulation in the US, EU, and China, and the implications for global AI leadership
    - The importance of trust calibration and intentional human-AI collaboration in practical applications
    - How education and lifelong learning can be reshaped by AI to support individualized growth and mistake-enabled reasoning
    - Opportunities for AI to amplify individual talents, address educational gaps, and enable more specialized and innovative skills

    Episode Resources

    Transcript
    Ross Dawson: Henrik, it is wonderful to have you on the show.
    Henrik von Scheel: Thank you very much for having me, Ross.
    Ross Dawson: So I think we’re pretty aligned in believing that we need to approach AI from a human-centered perspective and how it can bring us prosperity. So I’d just love to start with, how do you think about how we should be thinking about AI?
    Henrik von Scheel: Well, I think, like every technology that comes into play, it brings a lot of changes to us.
But I think the center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment. So technology is something that we apply, but it’s the strategy on how we adapt with it that makes a difference. It’s never the technology itself. So I’m excited. It’s one of the most exciting periods for the industry and for us as people. Ross Dawson: There’s a phrase which I’ve heard you say more than once around AI should make us smarter, healthier, and wealthier. So if that’s the case, how do we frame it? How do we start to get on that journey? Henrik von Scheel: So I think what people experience today in AI is that they experience a lot of media hype—large language models, ChatGPT, and all of this—and they consume it from the media. So there’s a big hype around it, and I believe that AI is about to crash fundamentally, but crashing in technology is not bad, right? There are a lot of promises and then an inability to deliver, and then it crashes. What you hear in the media today is very much driven by a story of them raising funds because it’s so expensive, and so they are promising the world of everything and nothing, and the reality looks a little bit better. The world that they are presenting is that you will be replaced, and you will be happy, and you’ll be served by everything else. And somehow it will work out. We don’t know how, but it will work out. And that’s not a future that is really a real future. The future must include that everybody gets smarter, wealthier, and healthier. And when I say everybody, I mean not only the guys that have money, that they become more rich, or the middle class. It’s like everybody in society should get smarter from AI. That means part of the things that they need to learn or how human evolution works should be better, and it should make us healthier people and wealthier people.
So it should not only be that we sacrifice our convenience with our freedom, with our privacy, with our environment, or any other things that we put on the table to get convenience back. That exchange we have done a couple of times, and it’s not working really well for humans, and it’s not a good trade for us, right? Ross Dawson: Yeah, I love that. And since it’s quite simple, you know, you can say it, it’s clear, it sounds good, and it is a really clear direction. But you’re actually pointing in a couple of ways there to capital allocation. So obviously, if you’re looking at the AI economic story, this is around this diversion of capital from other places to AI model development, data centers, deployment, and so on. But also, when you’re saying wealth here, this is around the distribution of wealth—where we’re allocating capital to AI development, but also from the way in which AI is developed, there will be creation of wealth. There is the real potential for productivity improvement. But then it’s about finding, how do we have the mechanisms for allocation of wealth or capital from that which is allocated? Let’s call it equitably. Henrik von Scheel: I’m a firm believer that this year, 35 to 45% of the money invested in AI will evaporate. Companies that have invested—they’re the early adopters—they have this format, so they’re rushing to it. From a company perspective, you always adapt the best practices. When it goes beyond the hype, and the performance curve and adoption curve is low. For example, for AI, the simple version is there. You heard that Deloitte and McKinsey talked 10 years ago about robotic process automation like God’s gift to mankind in AI. Today, you don’t hear them talking about it, because you can download it for free—for HR, for forecasting, planning, budgeting, and so on, you can save 20 or 30%, and as an organization, you can do it yourself. You download two, three models, you test it, and you run it. 
Good, okay, so that’s when you apply best practices. Then you have industry practices, like AI agents. So when you have AI agents for manufacturing, for industrial sectors, for energy sectors, they are nothing other than workflow optimization. You use robotic process automation, you do a visualization on it, so it’s far more practical, because you use the data organizations already have on the process flow, on safety, on security—it’s very much down at the level where they can apply it and use it. So this version of large language models, where you have this magic powder you spread over the organization and it’s totally working—it’s not really there. And then there’s the third leg that companies are quite aware of. It’s called Shadow AI, right? Shadow AI is because AI is the biggest infringement on intellectual capital within organizations. The reason why normal people are not allowed to look at pornography at their work is because of cybersecurity. It’s not that your boss doesn’t like you to look at pornography; it’s because of cybersecurity. It’s the same reason with AI—you should not be allowed to use the latest version of Copilot or large language models as a CFO or as a worker, because you’re exporting your own information outside. Copilot takes, every five seconds, a screenshot for the large language models’ learning. So from a corporate point of view, that’s the first thing—you should actually protect your own data so you can monetize your data in the future. From an economic point of view, if you go two, three steps behind this, you ask, okay, what is it that makes sense in this? There’s something really, really strange in this. Australia was built by building railways—they take 100 years to build, they also last 100 years. It’s infrastructure that lasts. So there’s a return on investment. You build streets, you build education systems—everything we build as humans, as society, has a lasting element to it. 
Now, we build data centers that last three or six months until the chips need to be replaced. So it makes no sense that we are building data centers around the world where we capture all data. It has a volume of hundreds of trillions of dollars, and we need to exchange them at a rate of between three and six months to maintain the data. And then you say, wow. And you do that via license models of large language models—the data can never, in its entire life cycle, be worth that much. So there’s a very strange element, because most of the entrepreneurs that go to large language models and build their solutions on Gemini and ChatGPT and so on, you say, okay, you are building your solution on large language models, but you don’t own the model. You don’t own the data. You don’t own your own data. So what are you doing? Ross Dawson: You have architectural choices, to a point, as to— Henrik von Scheel: Those are architectural choices, but you are limiting yourself. So the first thing you always say: if your value is customizing a solution, your value is actually the data. So you must have a way to keep and maintain the data yourself. We can take another call to say how you apply AI and what the future of AI looks like, because AI today is very much focused on language models, and language models are the most limited version of AI science of all. It has the least data, but i

    47 min
  5. Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation

    18 MAR

    Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation

    “Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through.” –Joanna Michalska About Dr Joanna Michalska Dr Joanna Michalska is Founder of Ethica Group Ltd., Co-Founder of The Strategic Centre, and an advisor to boards on AI risk, ethics, and governance. She holds a PhD in Strategic Enterprise Risk Management and has twenty years’ experience leading enterprise risk, strategy and transformation at J.P. Morgan and HSBC. Website: ethicagroup.ai LinkedIn Profile: Dr Joanna Michalska   What you will learn How boards and executives can rethink governance and accountability in the age of AI The importance of embedding governance into organizational ecosystems for agile, responsible AI adoption How to map and assign human accountability for both automated and hybrid AI-human decisions The decision architecture needed for scalable oversight, intervention, and escalation pathways Practical examples of effective AI oversight in areas like fraud detection and exception handling Steps for complying with new regulations like the EU AI Act, including inventorying AI systems and risk tiering Why human qualities like emotional intelligence, psychological safety, and honest communication are critical in AI-driven organizations How leaders can foster organizational resilience and help teams adapt by building AI literacy, retraining, and supporting personal growth Episode Resources Transcript Ross Dawson: Joanna, it’s a delight to have you on the show. Joanna Michalska: Well, thank you for having me, Ross. Ross Dawson: So, AI is wonderful, but it also brings us into a whole lot of new territory where we have to be careful in various ways. I’d love to just hear, first of all, the big framing around how boards and executive teams need to be thinking about governance and accountability as AI is incorporated more and more into work and organizations. 
Joanna Michalska: I think we’re all very excited about the capability that exists today to help us enhance our performance and the way we think about strategic execution for our organizations. It has multidimensional consequences for how we adapt it. What’s very important right now is, as executives and boards think about accelerating their ambitions and growth plans, there needs to be awareness of two components. First, how do we as leaders, as humans, need to adapt to that new environment? There are new conditions, or perhaps existing conditions that really need to be enhanced. They’re very important to exist in order to be able to adapt and to scale. Second, do we actually have the right systems in place to enable that scale? I think it’s important to recognize that, yes, governance has always existed, but the way it existed was more as external supporting scaffolding, rather than being built into an organizational ecosystem. We also need to have the right leadership in place to ensure that decisions are made in the right way and the organization is designed in a much more robust, agile way. These two conditions are critical for not only increasing adoption, but also doing so in a safe and responsible way, especially as we expand our ambitions for the future. It’s exciting, but there’s also a lot of caution and a lot of questions being asked by executives at this time. Ross Dawson: Yes and I guess the more we can address those concerns upfront, the more it enables us to do. I have this idea of minimum viable governance—at least having some governance in place so we don’t go too badly astray. But I always think of governance for transformation as: how do you set governance not as a brake to slow you, but in fact to accelerate you, because you have confidence in how you’re going about it? Joanna Michalska: Absolutely! 
I think the mindset shift is very important, because governance, to your point, has always been seen as a compliance-driven thing that we must do because regulators require us to, and we need to demonstrate we have these policies and procedures in place and the right people in the right positions. Now, what the new environment is requiring of us—as executives, even board members—is a different set of responsibilities that really cannot be assumed as pre-existing. In this accelerated environment—let’s call it that, rather than just “AI,” because it’s so overused and can mean so many different things—where the automation rate is fast and overtaking everything, governance needs to change. It can’t be an afterthought or something we designed at one point in the past and now just try to fit into what’s happening. It really needs to become a well-designed, living organism. It needs to organically evolve. It needs to have the right people with the right accountability that is well understood. Accountability that was designed in the past needs to be looked at, discussed, and understood by all executives and across the organization, cross-functionally, to really work. Another important thing is to make sure executives have the right level of ownership and responsibility to ensure the conditions exist to enable that system to work. That’s a very difficult thing to do, because now you’re talking about having designed human oversight that doesn’t just become a “human in the loop,” but the right human in the right loop. By “right,” I mean: does this person, or these people, understand exactly what the output of the automated system is? How has this decision been made? Is there the right level of executive oversight when that decision is already made? How confident are we that we can say, with a level of certainty, “I’m comfortable with this, and this is not going to create negative consequences I’m not willing to accept”? 
That’s not an easy thing to do—to create those conditions of trust and safety. Ross Dawson: Particularly when there are so many decisions and outputs throughout the organization. Let’s go into decision making. I’ve built a little framework around going from humans-only through to AI-only decisions. Hopefully, there are no purely human decisions anymore; at least you can ask an AI, “Am I crazy or not?” even if it’s a human decision. Some decisions are already fully automated, but they still need oversight. You can bring in exceptions, conditional things, humans in the loop for approval, humans in the process, or build an explainability layer. There’s a whole array of different things. For every decision, you need to create the right way to implement it. In an organization with that profusion of different decisions and possible approaches, how can you actually make that happen? Joanna Michalska: Yeah, it’s a great question. Decisions are at the center of everything, and the quality of those decisions—and the whole architecture, how it’s designed for decisions to be made—is really important. It doesn’t stay static; it evolves as the organizational structure evolves. Questions like accountability—what does it look like, and what is the governance around accountability—are critical. Intervention capability is also very important, because with this level of automation, the whole design of how automated decisions are made raises multiple questions. Are these decisions made by old algorithms that are very simple, where the risk is determined by a set of rules? Is there clarity around who actually has the decision intervention rights in the organization, and how does that roll up to an executive layer? Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through. 
The quality of human decision-making, and determining when a human is able to review decisions made by complex systems—whether agentic or whatever structure the organization has—is critical at any level, whether it’s middle management, executive management, or board. There are different layers of how the architecture requires design and measurement. Escalation pathways are another one. People will not naturally escalate if they fear negative consequences, retaliation, or any type of fear created because there isn’t psychological safety or trust within the organization. Even if there is an escalation protocol in place within the decision architecture, how do we know that people will raise the problem? Ross Dawson: The accountability. Of course, only humans are accountable. Ultimately, the board and their executives are accountable. But what you’re suggesting, it sounds like, is that for every decision, there is somebody where you can say, “That person is accountable.” Obviously, it cascades up to who they’re reporting to, but there is human accountability for every decision made, even if it’s a thousand decisions where somebody has oversight and responsibility that those are the right decisions. I want to talk about escalation and how that might happen, but perhaps we can ground this with a couple of examples. What are some examples of decisions made in organizations—hopefully well-designed, or perhaps not so well-designed and haven’t worked out? Joanna Michalska: Yes, I have a couple of good examples where an automated system allows review of multiple false positives, where a hum

    34 min
  6. Cornelia Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone

    12 MAR

    Cornelia Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone

    “You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.” –Cornelia C. Walther About Cornelia C. Walther Cornelia C. Walther is Senior Fellow at Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is author of many books, with her latest book, Artificial Intelligence for Inspired Action (AI4IA), due out shortly. She was previously a humanitarian leader working for over 20 years at the United Nations driving social change globally. Website: pozebeingchange LinkedIn Profile: Cornelia C. Walther University Profile: knowledge.wharton What you will learn How the ‘hybrid tipping zone’ between humans and AI shapes society’s future The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence What defines ‘pro social AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’ Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of pro social AI initiatives Episode Resources Transcript Ross Dawson: Cornelia, it is fantastic to have you on the show Cornelia Walther: Thank you for having me Ross. Ross: So your work is very wonderfully humans plus AI, in being able to look at humans and humanity and how we can amplify the best as possible. 
That’s one really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is? Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening. At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI. We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves. At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there are no medium or long-term evidences about how the consequences will play out. Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry. And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damages. Now, you have these four phenomena happening in parallel, simultaneously, and mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now. You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map? 
Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But for us as human beings, I would argue, it’s a very dangerous luxury to have. Ross: I just want to dig down quite a lot in there, but I want to come back to this. So, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI, so humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way. So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome? Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way it goes. Right now, we consider AI like this big phenomenon that is happening to us. It is not—it is happening with, amongst, and because of us. I think that is the big change that needs to happen in our minds, which is that AI is neutral at the end of the day. It’s a means to an end, not an end in itself. We have an opportunity to shift from the old saying—which I think still holds true—garbage in, garbage out, towards values in, values out. But for that, we need to start offline and think: what are the values that we stand for? What is the world that we want to live in and leave behind? As you know, I’m a big defender of prosocial AI, which refers to AI systems that are deliberately tailored, trained, tested, and targeted to bring out the best in and for people and planet. Ross: So again, lots of angles to dig into, but I just want to come back to that agency decay. 
I created a framework around the cognitive impact of AI, going from, at the bottom, cognitive corruption and cognitive erosion, through to neutral aspects, to the potential for cognitive augmentation. There are some individuals, of course, who are getting their thinking corrupted or eroded, as you’ve suggested; others are using it well and in ways which are potentially enhancing their cognition. So, there is what individuals can do to be able to do that. There’s also what institutions, including education and employers, can do to provide the conditions where people are more likely to have a positive impact on cognition. But more broadly, the question is, again, how can we tip that more in the positive direction? Because absolutely, not just the potential, but the reality of cognitive erosion—or agency decay, as you describe it, which I think is a great phrase. So are there things we can do to move away from the widespread agency decay, which we are in danger of? Cornelia: Yeah, I think maybe we could marry our two frameworks, because the scale of agency decay that I have developed looks at experience, experimentation, integration, reliance, and addiction. I would say we have now passed the stage of experimentation, and most of us are very deeply into the field of integration. That means we’re just half a step away from reliance, where all of a sudden it becomes nearly unthinkable to write that email yourself, to do that calendar scheduling yourself, or to write that report from scratch. But that means we’re just one step away from full-blown addiction. At least now, we still have the possibility to compare the before and after, which comes back to us as an analog generation. Now is the time to invest in what I would call double literacy—a holistic understanding of our NI, our natural intelligence, but also our algorithmic, our AI. 
That requires a double literacy—not just AI literacy or digital literacy, but the complementarity of these two intelligences and their mutual influence, because none of them happens in a vacuum anymore. Ross: Absolutely. So what you described—experimentation, integration, reliance, addiction—sounds like a slippery slope. So, what are the things we can do to mitigate or push back against that, to use AI without being over-reliant, and where that experimentation leads to integration in a positive way? What can we do, either as individuals or as employers or institutions, to stop that negative slide and potentially push back to a more positive use and frame? Cornelia: A very useful tool that I have found resonates with many people is the A frame, which looks at awareness, appreciation, acceptance, and accountability. I have an alliteration affinity, as you can see. The awareness stage looks at the mindset itself and really disciplines us not to slip down that slope, but to be aware of the steps we’re taking. The appreciation is about what makes us, in our own NI, unique, and the appreciation of where, in combination with certain external tools, it can be better. We all have gaps, we all have weaknesses, and that’s what we have to accept. The human being, even though now it’s sometimes put in opposition to AI as the better one, is not perfect either. You and most of the listeners have probably read Thinking, Fast and Slow by Daniel Kahneman and many others—there are libraries about human heuristics, human fallacies, our inability for actual rational thinking. But the fact that you have read a book does not mean that you are immune to that. We need to accept that this is part of our modus operandi, and in the same way as w

    36 min
  7. Ross Dawson on Humans + AI agentic systems

    4 MAR

    Ross Dawson on Humans + AI agentic systems

    “Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on.” –Ross Dawson About Ross Dawson Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload. Website: Collaborating with AI Agents Intelligent AI Delegation Agentic Interactions LinkedIn Profile: Ross Dawson What you will learn How human-AI teams outperform human-only teams in productivity and efficiency The crucial role of understanding AI strengths and limitations when designing collaborative workflows Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents How user intent and ‘machine fluency’ impact the effectiveness of AI agents in economic and organizational contexts The emergence of an ‘agentic economy’ and its implications for fairness, capability gaps, and representation Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction Episode Resources Transcript Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I’m going to just share a bit of an update and then share insights from three recent research papers that dig into something which I think is exceptionally important, which is how humans work with AI agentic systems. 
And we’ll look at a few different layers of that, from how small humans plus agent teams work, through to how we can delegate decisions to AI, through to some of the broader implications. But first, a bit of an update. 2026 seems to be moving exceptionally fast. It’s a very interesting time to be alive, and I think it’s even pretty hard to see what the end of this year is going to look like. So for me, I am doing my client work as usual. So I’ve got keynotes around the world, usually on various things related to AI, the future of AI, humans plus AI, and so on. A few industry-specific ones in financial services and so on. And also doing some work as an advisor on AI transformation programs, so helping organizations and their leaders to frame the pathways, drawing on my AI roadmap framework: how you look at the phases, mapping those out, working out the issues, and being able to guide and coach the leaders to do that effectively. But the rest of my time is focused on three ventures, and I’ll share some more about these later on. But these are fairly evidently tied to my core interests. Fractious is our AI for strategy app. So this was really building a way in which we can capture the detailed nuance of the strategic thinking of leaders of the organization, to disambiguate it, to clarify it, and enable that to then be built into strategic options, strategic hypotheses, and to be able to evolve effectively. So that’ll be in beta soon. Please reach out if you’re interested in being part of the beta program, and then that’ll go to market. So I’m deeply involved in that. We also have our Thought Weaver software, rebuilding previous software which was built on AI-augmented thinking workflows. That’s more of an individual tool, and it will be going into beta in the next weeks. So again, go to Thought Weaver. Actually, don’t—the website isn’t updated yet—but I’ll let you know when it’s out, so stay posted for updates on that. 
And also building an enterprise course on humans plus AI teaming. It’s my fundamental belief that we’ve kind of been through the phase of augmentation of individuals, and we still need to work hard at doing that better. But the next phase for organizations is to focus on teams. How do you work with teams where we have both human members and AI agentic members? And it creates a whole different series of dynamics and new skills and capabilities. It really calls for learning how to participate in humans plus AI teams and how to lead humans plus AI teams. And that is again going into the first few test organizations in the next month or so. So again, just let me know if you’re interested. So today what we’re going to look at is this theme: teams of humans working with AI agents. So not individual AI as in chat, but where we have a lot of agents with various degrees of autonomy, but also agentic systems where these agents are interacting with each other as well as with humans. So there are three papers which I want to just talk about, just give you a quick overview, and please go and check out the papers in more detail if you’re interested. There’ll be links in the show notes. First is Collaborating with AI Agents: A Field Experiment on Teamwork, Productivity and Performance, by Harang Ju at Johns Hopkins and Sinan Aral at MIT. This was an experiment with over 2,300 participants who were working on creating advertisements. And they had a whole array of human-human and human-AI teams, quite small or just in duos and so on, creating advertisements which were then assessed in terms of quality and how the teams worked. So a few particularly interesting findings from that. Just having a human-AI team enhanced performance significantly compared to human-only teams. And so they were able to move faster and to complete more of their tasks, and the quality was strong. 
But there’s a phrase which is commonly used around the jagged frontier of capability of AI, and it was quite clear that there were some domains where AI does very well and others where it didn’t. So the design of the tasks, the design of the human-AI systems, and also the understanding by the human users of what AI is good at or not are fundamental. In some domains, such as image quality, using AI actually decreased quality. So we need to understand where and how to apply AI across this jagged frontier and design the systems around that. This changes the role of the humans, of course. Humans then tend to delegate more. One of the things they tested for is how you behave differently if you know your teammate is an AI, as opposed to not knowing whether it is a human or an AI. And it changes. So they become more task-oriented. They use social cues less in their interactions, and they essentially become more efficient. But some of those social cues which are valuable in human-human collaboration started to disappear. And this automation process meant that there was not, in the end, as much creative diversity. Now I’ve often pointed to the role of AI in creativity tasks. It depends fundamentally on the architecture—whether the AI provides initial ideas which are then sorted and filtered by humans, or where else it sits in that process. But in this particular structure, they found that humans plus AI teams started to create more and more similar-type outputs. So this homogenization of outputs in these human-AI teams was very notable and significant. And so this again creates a design factor for how it is that we build human-AI systems which do not lead to homogeneous output, making sure that human diversity is maintained. 
Often that can be done by having humans produce outputs first, before AI blunts or narrows the breadth of their creative output. The second paper I’d like to point to is called Intelligent AI Delegation, from a team at Google DeepMind. This addresses the point that we now have not just single AI agents to delegate decisions or problems to, but systems of AI. And that creates a different challenge. The key point, as I see it, is that delegating a task is more than just deciding which agent gets it. You have to understand responsibility: where does accountability reside, and who is responsible? You need clarity around the roles of the agents, the boundaries of what they can and cannot do, clarity of intent and how it is communicated and cascaded through the agents, and the critical role of trust and appropriate degrees of trust in the systems. So this means we have to define the different characteristics of the task, and the paper goes through quite a few. One of the critical ones was the degree of uncertainty around the task. Obviously, if a task is very clear it can be appropriately delegated, but many tasks and problems are uncertain, and that creates a different dynamic. Another is whether it is verifiable, where you know you have high-quality information, or whether that’s the degre
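The delegation factors listed above (accountability, role boundaries, intent, trust, task uncertainty, verifiability) can be thought of as a structured checklist applied before handing a task to an agent. The following is a hypothetical sketch of such a record, not the DeepMind paper’s actual framework or API; every name and threshold is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class DelegationDecision:
    """Illustrative checklist for delegating a task to an AI agent."""
    task: str
    accountable_human: str          # accountability stays with a named person
    agent_role: str                 # what the agent is for
    allowed_actions: list = field(default_factory=list)  # boundaries of action
    intent: str = ""                # the goal, communicated explicitly
    uncertainty: float = 0.0        # 0 = fully specified, 1 = highly uncertain
    verifiable: bool = False        # can the output be checked?
    trust: float = 0.0              # calibrated trust in the agent, 0..1

    def safe_to_delegate(self, max_uncertainty=0.5, min_trust=0.6):
        """A simple gate: delegate only clear, verifiable tasks to trusted agents."""
        return (self.uncertainty <= max_uncertainty
                and self.verifiable
                and self.trust >= min_trust)

# Usage: a clear, checkable task with a trusted agent passes the gate.
decision = DelegationDecision(
    task="draft ad copy",
    accountable_human="R. Dawson",
    agent_role="copywriter",
    allowed_actions=["draft", "revise"],
    intent="produce three headline variants",
    uncertainty=0.2,
    verifiable=True,
    trust=0.8,
)
```

The design choice worth noting is that accountability is a required field: in this sketch you cannot construct a delegation at all without naming the responsible human.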

  8. Davide Dell’Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics

    25 FEB

“In this sense, human and AI means a synergy where teams of humans and AI together lead to outcomes superior to either the human or the AI operating in isolation.” – Davide Dell’Anna About Davide Dell’Anna Davide Dell’Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space. Website: davidedellanna.com LinkedIn Profile: Davide Dell’Anna University Profile: Davide Dell’Anna What you will learn The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses How lessons from human-human and human-animal teams inform better design of human-AI collaboration Key differences between humans and AI in teams, such as accountability, replaceability, and identity The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration Episode Resources Transcript Ross Dawson: Hi Davide. It’s wonderful to have you on the show. Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me. Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence? Davide: Well, thank you so much for the question.
Hybrid intelligence is a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, scientifically and societally I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to outcomes superior to either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence. Ross: That’s fantastic. And so extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the term hybrid intelligence, but humans plus AI to say the same thing. We want to dive into human-AI teaming specifically in a moment. But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, these approaches are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation? Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities.
When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The type of things you might say also change based on the specific situation.
While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that. Ross: Yes, yes. That’s what I call the framing. The framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But a team means you can have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants? What are some of the things that are the same, and what are some of the things that are different? Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Centre, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Centre is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the things we’ve done recently is to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams.
It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we did was study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of team configurations—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal teams, another example that’s interesting in the context of hybrid intelligence. You very often see in human-animal interaction—basically two different species, two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One o

