Humans + AI

Ross Dawson

Exploring and unlocking the potential of AI for individuals, organizations, and humanity

  1. David Vivancos on the end of knowledge, cognitive flourishing, resilient societies, and artificial democracy

    22 HR AGO


    “Delegating knowledge is not the same as delegating wisdom. You learn by experience, and if you don’t have any experiences…you will get cognitive atrophy.” –David Vivancos

    About David Vivancos
    David Vivancos is an AI, data, and neuroscience serial entrepreneur, having cofounded five startups since 1995. He is a frequent keynote speaker and is the author of six books, including the Artificiology series.
    Website: vivancos.com
    LinkedIn Profile: David Vivancos

    What you will learn
    Why embracing advanced AI is crucial for human progress
    How shifting from digitization to automation and datification redefines value
    The evolving distinction between human-acquired and AI-generated knowledge
    How to avoid cognitive atrophy and actively exercise your mind alongside AI
    What cognitive flourishing means in a world of widespread AI augmentation
    Ways AI can transform and personalize education across all levels
    The importance of coexistence training as we prepare for AGI’s societal integration
    Why rethinking human identity, humility, and social structures is essential for a future with machine citizens

    Episode Resources

    Transcript

    Ross Dawson: David, it is wonderful to have you on the show.

    David Vivancos: Thank you very much, Ross. Glad to be here.

    Ross: So you have a more developed, or some would say, extreme view of the relative role of humans plus AI. I’d love to dig into where you think things are going, and how we can best respond. Perhaps the starting point is, you say that we should not be resisting or pushing back. We should fully embrace the shift towards very high levels of AI capability, or at some point, AGI.

    David: Yeah, that’s fully my point. I think we are in a moment in history where we are really building this technology that one day is not going to be a technology anymore. So, the sooner we start to embrace it, to teach it, and to be really in sync with what we are creating day by day, the better off we will be.
    So yes, my point of view is that we should embrace it. We should start building as soon as possible. We should fix most of the problems that humans have had over the last millennia, and some of these problems could be solved by using AI. So basically, our “fourth brain”—we have the three-part brain, but in reality, there’s only one brain—this fourth brain, AI, will help us solve all of these issues. So yes, it’s an opportunity.

    Ross: Yes. I mean, I think there’s always two sides—as in, every opportunity has a challenge, every challenge has an opportunity. So I always think we need to acknowledge challenges and focus on opportunities. I think we’ll get onto that in discussing some of the cognitive implications. You have a series of books which have really told the story over time around this. One of them was “Automate or Be Automated.” This idea of saying, well, there are things which machines, in the broader sense, can do in automating things. So, how would you frame that now, in terms of what it is that can be automated, and how do we position ourselves relative to that? Where do machines start to do what humans have done?

    David: Yep. I’ve been in this business of trying to build the impossible for the last 30-plus years. “Automate or Be Automated,” the book you mentioned, is from about six years ago. When I started creating and building technology, also about VR and many other things, about 30 years ago, the first companies were internet companies. Back then, what we did is what people now call digitization. But over the last 20–25 years, what we’ve mostly been doing is datification—gathering data and using that data for companies to grow and to understand what happens in the world. But over the last maybe 10 or 11 years, what I call the new golden age of AI, we are starting to build the capabilities to use that data to really build algorithms. Once we have that, we can start to automate, and with this automation, basically what we regain is time.
    I think time is our most precious asset, along with health and the people we love. Being able to stop doing these repetitive things over and over and put a machine to do that is a fundamental trait for humans. That book, six years ago, was about building a methodology of what can be automated in the digital world, but also in the physical world. That has changed over the last year and a half with the physicality of AI—humanoid robots. I was invited last year to attend the first humanoid Olympiad in Greece, in Olympia, the place where 2,800 years ago, humans started to compete. We’ve just seen this week the explosion of the new race, for example, of the half marathon in China, where robots already beat the human mark. So yes, with automation, you need to see what you are doing, and if you are repeating anything, you can try to see if that can be automated by using an agent, by using the cloud, by using a robot—whatever. So yes, we should regain our time and automate, or be automated. It’s all about that.

    Ross: Yeah. I think people understand the automation thesis. It’s obviously not new—we’ve been automating things in various ways for centuries, at an increasing pace. Your following book was “The End of Knowledge.” This is an interesting framework, starting to get to cognition. The idea is that knowledge is built on experience of whatever kind, whether that’s just in data or otherwise. Obviously, humans use data just as much as machines. But where this starts to become a distinction, as well as a complementarity, is between AI-embedded knowledge and human knowledge. So why is it “the end of knowledge”?

    David: Yeah, that’s a really great question. It came as an epiphany for me. That book is from about three years ago. I’ve also been involved, of course, in building AI and AGI algorithms over the last 20 years.
    We started using GPT models before they became widely known, but the GPT moment, a year before that book, really marked the difference—when we started to be able to use AI in a very seamless way to generate and process knowledge. That book, “The End of Knowledge,” came from the realization that we are starting to delegate the production and understanding of knowledge to machines. That’s a critical shift in human history, because through history, humans have needed and used knowledge a lot. Knowledge is power. The more knowledge you have that others don’t, the more advantages you have to do whatever you want. That started to change back then. Now, what people call the “dead internet theory” is basically some of the things I expressed in that book earlier, because we are starting to generate more knowledge. In fact, we’ve already passed the point where most of the human-written knowledge since the printing press has been surpassed by the amount of knowledge we can create using AI.

    Myself, for example, I started learning to code when I was young. I’ve coded in more than 25 languages and written over a million lines of code in my life. That same number of lines of code, I might now write in a couple of weeks. So as you can see, you have 40-plus years of your own life in a week. That’s why “the end of knowledge” means that the human capability to gather knowledge and to be knowledgeable about whatever you want can now be delegated to machines.

    That book marked the difference and started a new field that I now call artificiality. I didn’t know that when I started writing it, but I started this path of trying to see what happens when you delegate some of the main capabilities of your mind to a machine.

    Ross: Yeah, and I’d like to come back later to the themes of artificiality, machine citizenship, and the societal value we attribute to machines. But I want to start digging into the cognitive piece here.
    One of the points you make is that we do need to avoid cognitive atrophy. You say we need to have cognitive exercise in order to avoid cognitive atrophy—obviously, a strong analog to the physical world. We need to collaborate with others and with machines to do that. I’d love to get more specific around that. What is the nature of cognitive exercise that will avoid cognitive atrophy, which will enable us to keep our cognition refined and even improving?

    David: Yeah, that’s a fundamental piece. When we start to delegate all these things to machines, the easy thing to do—and probably the oldest human brain capability—is to not do it yourself. You just delegate everything, and you basically become like in the movie “Idiocracy,” which played out quite well what could happen if we do that. The thing is, with the current AIs—even with the latest releases, like DeepSeek and GPT-5.5—everything is changing quite fast. But even with those AIs, you still need to be in the loop. It’s good if you stay in the loop. I think it’s fundamental. Use the technologies—the AIs, I always call them in plural because there are many—and use as many as you can, but you should still be in the loop, at least for now. Maybe for a couple of years or months, I don’t know exactly, but for a while, you still need to have your hands on the wheel. If you use most of them and get all the information from all these AIs, as a human you need to understand the bias, because all AIs are going to be biased. We all know humans are biased; there are no unbiased humans. The same happens with AIs. But if you

    36 min
  2. Jon Husband on wirearchy, web weaving, the relational economy, and drift diving

    29 APR


    “What I’m really interested in and fascinated about is that, as AI penetrates and spreads throughout the workplace and gets placed into or integrated into workflows, the first thing that happens is that people in the mix are going to have to learn how to use AI and learn why to use AI when they do.” –Jon Husband

    About Jon Husband
    Jon Husband is the Founder and Principal of Wirearchy, a creative research and experimentation laboratory exploring the crossroads of AI and networked workplaces and society. He works as a coach, consultant, speaker and writer, and has co-authored three books, including Wirearchy.
    Website: wirearchy.com
    LinkedIn Profile: Jon Husband

    What you will learn
    The origins and evolution of wirearchy as a response to traditional organizational hierarchies
    How AI integration is reshaping knowledge work, workflows, and tacit knowledge within organizations
    The persistence of Taylorist job evaluation and why traditional work design remains resistant to change
    The rise of the relational economy and the increasing value of human judgment, trust, and relationships beyond financial exchange
    New approaches and tools for surfacing and mapping intangible or non-financial value exchanges in organizations
    The concept of emergence and the need to foster conditions for positive outcomes in complex adaptive systems
    Challenges and opportunities as organizations shift from rigid, control-based management to adaptive, networked, feedback-driven models
    Why coaching, facilitation, and skills like listening and allowing for emergence will be critical in navigating AI-augmented workplaces

    Episode Resources

    Transcript

    Ross Dawson: Jon, it is wonderful to have you on the show.

    Jon: Thank you very much, Ross, it’s good to see you again.

    Ross Dawson: We’ve known of each other and each other’s work for a very, very long time now from, I suppose, the roots of—yeah, I suppose you can crudely say—the intersection of knowledge and networks.
    So, as many of us who have come from that background, we are now thinking about humans and their relative role to AI. Some people will know of your wirearchy and a lot of your work of the past; others will not. So I’d love to just start off with: what is the concept of wirearchy? And then, how is that morphing or evolving, or are you building on that in how you’re thinking now? We’ll dig in and explore that.

    Jon: Okay, well, I started paying attention to knowledge work and work in organizations and so on as I changed careers in my early 30s, moving from banking, where I was in management, into management consulting. I ended up working for a large global HR consulting firm that, like all the major consulting firms that address organizational issues, has services where they do what’s called job evaluation. What job evaluation does is put a size or a measure or a weight to a job, which then basically places it on the organization chart. I spent quite a few years writing thousands of job descriptions and helping streamline workflows and so on and so forth.

    So, when the internet came along: I had always been an avid reader, and I suppose a wannabe futurist—a wannabe Ross Dawson, if you will. I was reading all sorts of books back then. Instead of dating, because I was single in my mid-30s, I was spending Friday nights reading books about organizations, like “The Living Company” by Arie de Geus, the Tofflers’ work, “Powershift,” certainly Peter Drucker’s work. I was reading all of these books, and all of the books were about the coming Information Age. The Information Age had not arrived yet; this was roughly late ’80s, early ’90s. All of a sudden, we hit 1994. I’m sitting in London, and I was just told by my team leader in my consulting firm that I was going to be proposed as one of the next global partners.
    Three weeks later, I quit my job in the consulting firm because I had begun to feel very uneasy about the work I was doing. If I was made a partner, my job would basically become selling larger projects to keep the younger consultants employed. I realized that I would be selling methods that I had come to not believe in anymore, and the reason for that is that all of the job evaluation methods sold by all the major consulting companies are versions of generic Taylorism. They have semantic statements that you pick to figure out the level of a job on a number of different factors.

    This is one of the things I’ve talked and written quite a bit about in wirearchy: this generic Taylorism is still deeply at the core of most of the work of most organizations. It’s how the work is designed. There has been now, what, 15 or 20 years—how far back does Enterprise 2.0 go?—of talk about collaboration and cooperation and better knowledge management and sharing and transfer of knowledge, and so on and so forth. If you know these semantic statements, which are burned into my brain from this method—the Hay method—you realize that no amount of talking about doing things differently is going to make much difference. It’s not going to change much. And the remuneration—the way people get paid—for every single person in every single company, is tied to all of that. It’s tied to your job size, it’s tied to the compensation practice, it’s tied to your performance management, it’s tied to your career plans, if an organization is still doing career planning. Frankly, it has not been touched in 75 years now.

    Ross Dawson: I used to describe a job as a box.

    Jon: Well, sure, and that’s where that term “think outside the box” comes from. I wrote an article about this at one point in time—I can’t remember the title, so it doesn’t matter—about the semantic statements essentially becoming semantic straitjackets, because they put limits around what you do.
    They’re a graded level of permissions, basically, or amounts of influence and authority, and that’s the codified, official organizational chart. So anyway, I was working with this all the time, and I realized if I was going to be made a big-time partner, I’d have to be selling these tools all the time. The internet had come along, so I quit, and I didn’t know what to do after that. I had to move from the UK because I was on a work permit, and had to go back to Canada. When I went back to Canada, all the companies I tried to approach to work as an independent consultant didn’t want to engage me, because all of the work I’d been doing in the UK was with really large multinationals, and according to them, too sophisticated for what they were doing in Vancouver.

    But at the same time, I was still reading all the time—reading Charles Handy’s work, reading Gerard Fairtlough’s work on heterarchy, and so on. I came to believe very strongly that the ongoing sharing of information—which we were starting even 20 years ago to build into constant, incessant flows of information carried via hyperlinks—was going to inevitably begin to affect, I’m going to use the word affect, the traditional top-down power of hierarchy. That comes from the Francis Bacon “knowledge is power” kind of perspective.

    Now, that was 25 years ago. What we’ve seen since is, of course, what you know—one umbrella term I could apply to much of what’s going on outside of organizations is the “enshittification” of the web. The same thing applies in a lot of ways, I think, to people doing work, sitting behind screens in organizations. Now, a whole host of things have happened in the past 10 or 15 years: there were armies of developers sitting in office spaces, all of them with their headphones on behind screens coding. There were all sorts of people beginning to understand how to use the internet.
    There were many failed attempts at effective knowledge management because of the idea that it’s still just good search, find documents, retrieval, without really paying any attention to the connections between people and how they work together, and so on.

    Ross Dawson: So, the frame there is the wirearchy being the organization as essentially a network. Obviously, there’s more richness to that as you describe the organization as a network, as opposed to the rigid structures, which are still very much rampant. But fast-forwarding to today, what we’ve overlaid is, whilst the old rigid structure is in place, organizations are effectively a lot more loosened up by Enterprise 2.0 and other types of frames, and essentially more peer communication. Now AI is playing a fundamental role, in many ways being a participant in those workflows, in the creation of value. So where does that take us today, in this humans-plus—essentially wirearchy—pulled into where AI plays a role within those networks?

    Jon: Well, it’s a fascinating question for which I don’t have an answer. I have some responses, I suppose. The notion of wirearchy came, as you pointed out, out of everybody being wired, everybody being networked—the organization as a network. What I’m really interested in and fascinated about is that, as AI penetrates and spreads throughout the workplace and gets placed into or integrated into workflows, the first thing that happens is that people in the mix are going to have to learn how to use AI and learn why to use AI when they do.

    38 min
  3. Michael Gebert on designing freedom, human self-determination, cognitive sovereignty, and systems of agency

    22 APR


    “Freedom no longer exists outside the systems, and it depends on the design. Coming back to the design, it’s about understanding that we need to distinguish between intelligent systems and agency.” –Dr Michael Gebert

    About Dr Michael Gebert
    Dr Michael Gebert is Chairman of the European Blockchain Association and co-founder of AI Expert Forum. He works at the intersection of artificial intelligence, digital sovereignty, and institutional responsibility. His book 2079 – Designing Freedom is just out.
    Website: 2079.life
    LinkedIn Profile: Dr Michael Gebert

    What you will learn
    How the concept of freedom extends beyond politics and economics to personal agency in an AI-driven world
    Why cognitive sovereignty is essential for maintaining individual responsibility and accountability as intelligent systems become more pervasive
    The shift from making decisions ourselves to designing the frameworks and conditions for decision-making with AI involvement
    How to distinguish optimization from true human empowerment when integrating AI tools into personal and organizational life
    Practical routines and metacognitive strategies for individuals to retain agency when collaborating with large language models and intelligent systems
    Why organizational leaders must prioritize cognitive sovereignty and human potential early in AI deployment, not just technical efficiency
    Insights into the challenges and importance of embedding frameworks for freedom and cognitive sovereignty within corporate, governmental, and policy structures
    The critical need for ambassadors of freedom within institutions to promote reflection, ongoing discussion, and the integration of responsible AI practices across all levels

    Episode Resources

    Transcript

    Ross Dawson: Michael. It is awesome to have you on the show.

    Michael Gebert: Hey, great to be on the show. Thanks for having me.
    Ross Dawson: So we connected first, probably around 15 years ago, and we were both involved in crowds, creating value from many people. And I think one of the interesting points now is that we still live in a world of many people. We’re trying to create collective value. AI is laid over that. So it’s interesting to see that journey from where we’ve come to where we are today.

    Michael Gebert: Absolutely, and I really remember vividly when we first had contact about this very exciting topic of crowdsourcing and empowerment of the crowd, and really making people believe, not only in themselves, but really in communities. And therefore, not only strengthening them in terms of crowdfunding, crowd investing, their financial gains, but also empowering them in what they do. And this is very fundamental, I would say even a right for humanity, to reflect on and do that. I think the methodology and technology back then helped a lot. And to be honest, I’m still partly involved in some of those efforts. Even the big crowdfunding platforms, also here in Europe and in Germany, are vital and really active. Of course, not in that dramatic media hype that we experienced, but they’re still there, and it proves that it’s a concept that should stay.

    Ross Dawson: Yep, absolutely. You know, there’s obviously collective intelligence, amongst other facets. But this goes to, I think, the frame of your new book, 2079 – Designing Freedom. So freedom is an interesting word, and something which I hope we all aspire to.

    Michael Gebert: Yeah, you know, freedom, of course, is one of those very multifaceted words, right? It could be translated in a political context. It could be translated in an economic context, meaning monetary-wise.
    It could be translated—and this is my translation—in a very personal, one-to-one reflection about how I as a human being see myself in these surroundings, bombarded not only by information but by intelligent systems, basically AI as we describe them, and all that is behind those systems.

    Ross Dawson: So there’s a few things I want to dig into here. And I guess there’s another word there: designing. Obviously, at a societal infrastructure layer, we want to be able to design the systems whereby we can all individually have that freedom of choice in how we live our lives.

    Michael Gebert: Yeah, and not always, I would say. Looking at the world geopolitically, of course, there is sometimes no choice. And if you are able to generate those choices, first of all by understanding how to design them, that’s a very good first step. So when I wrote the book, the prior part was basically a research paper I did, a small research paper also on ResearchGate. This is the foundation where I started thinking and reflecting. Basically, the core there is about a question that I think is becoming unavoidable now and for the future. The question is: if more and more cognition or judgment and action are delegated to intelligent systems, what has to be true for human beings in order to remain genuinely free? So the book is really about freedom, agency, responsibility, and in the end, about belonging in a world of increasingly disruptive intelligence.

    Ross Dawson: Yeah, yeah. So the word agency is obviously very much of the moment, in lots of ways. But I think human agency is absolutely critical. One of the central things you lay out in the paper, which I think is really, as you were saying a moment ago, on everyone’s minds: you’re saying this idea of agency used to be about making decisions, whereas now, as you describe it, agency is shifting to authoring the conditions for decision making.
    So we’re not necessarily making the decisions ourselves, but we do control and guide the conditions, the context, or the structures for decisions so that we retain responsibility and accountability, and those decisions are the ones we would want. So how do we do that?

    Michael Gebert: Yeah, you know, the question before asking how is really to understand under what conditions human beings remain authors of their lives when more and more of those decisions are shaped by, as you say, agency systems or whatever name they go by, whether fancy, new, or already existent. So the how—and it’s not about lifting a secret—is about going back to cognition and having that cognitive intelligence and those cognitive roots, which are in us, but which, over the years—and you reflected on the last 15 years, especially the generation after 2008, meaning after the iPhone—have lost large parts of that ability, which is very human.

    So it’s not really a reshaping or something new. It’s also not a book advising how to; it is really a finger going up and saying, people, please remember that the deeper question is under what conditions human beings remain genuinely free when more and more cognition, judgment, and action is to be owned back and not delegated to the systems. This is, of course, very formal in the need and in the demand, but especially, as you mentioned, when laying it out into organizations or government structures, it is hardcore policy and hardcore principle. You can write a lot of things in your genuine AI policies, but what I see right now is that in reality, first of all, nobody’s really reading them in depth. Secondly, there is really no reflection point on this cognition, judgment, and delegation. Therefore, this comes before any interest in how-to in terms of technology and what LLM to choose. This is really prior—it’s day zero—when you think about what’s going on, and when you think about how to position yourself, your company, and your team in there.
    Then this is the next step of thinking.

    Ross Dawson: So I want to come back to that, but I think one of the phrases you use is cognitive sovereignty, and this is in a context where one of the most shared papers recently is around cognitive surrender. Cognitive sovereignty is the opposite of cognitive surrender. But the reality is that interacting with LLMs does change our cognition.

    Michael Gebert: As long as we, yeah, as long as we delegate cognition, basically. The auto effect is—

    Ross Dawson: Conversation with a human changes our cognition too, and I think we need to recognize that. So it’s not just conversing with LLMs. Conversing with a human changes the way we think, which is a good thing because we’re getting more diverse opinions. But obviously, LLMs are not humans, and while possibly that interaction could enhance our thinking, if we get some great ideas and different perspectives from an LLM, then we’re still retaining cognitive sovereignty. So let’s frame this: how do we as individuals get to cognitive sovereignty? What does that look like?

    Michael Gebert: Yeah. So first of all, I think we need to understand that when we delegate cognition to an AI, we redesign responsibility. This is indisputably non-negotiable. This is a fact. When you compare it to a human interaction, there is no default responsibility redesign necessary. It’s a reflection point, it’s a discussion. If it’s a good conversation, it’s uplifting for both ends. You go out of this conversation and you have, yeah, uplifted cognition. Surrendering cognition, as you said, is a very factual statement that brings a lot of views, but it’s basically raising the white flag and saying, I surrender. What I say is, no, it’s not time to surrender. It’s time to appreciate, and it is time to understand that freedom no longer exists outside the systems, and it depends on the design.

    38 min
  4. Marshall Kirkpatrick on cognitive levers, combinatorial possibilities, symphonic thinking, and compound learning

    8 APR


    “The technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people.” –Marshall Kirkpatrick

    About Marshall Kirkpatrick
    Marshall Kirkpatrick is founder of sustainability consultancy Earth Catalyst and AI thinking tool What’s Up With That. His many previous roles include founder of influence network analysis tool Little Bird, which was acquired by Sprinklr, where he was most recently Vice President of Market Research.
    Website: whatsupwiththat.app
    LinkedIn Profile: Marshall Kirkpatrick

    What you will learn
    How generative AI transforms cognitive tools and lowers barriers to advanced thinking
    Techniques to combine human and AI-powered sensemaking for richer insights
    Practical strategies for filtering and extracting value from infinite information
    The importance and application of diverse mental models in modern decision-making
    Methods to balance manual cognitive work with AI assistance for optimal outcomes
    The role of adaptive interfaces in enhancing individual cognitive capacity
    Metacognitive approaches to networks and how AI can foster organizational awareness
    Ethical and societal implications of democratizing access to AI-powered cognitive enhancements

    Episode Resources

    Transcript

    Ross Dawson: Marshall, it is awesome to have you back on the show.

    Marshall Kirkpatrick: Oh, thank you, Ross. It’s such a pleasure to be reconnecting with you here. Thanks for having me on.

    Ross Dawson: So you were on very, very early in the podcast, when it was Thriving on Overload and the interviews fed into the book, and some of the wonderful things you were doing got incorporated into Thriving on Overload. So I think today, in this world of generative AI, which has transformed everything, including the way in which we think, the Thriving on Overload themes are still super, super relevant, and in a way, we need to be talking about them more.
    That theme at the time was finite cognition, infinite information. How do we work well with it? I don’t know if our cognition has become more finite, but the information has become more infinite, and there’s just more and more. But also, it cuts two ways, as in, what is the source of all the information? AI is also a tool. So anyway, let’s segue from some of your cognitive thinking tools, technology-enabled cognitive thinking tools and so on, which we looked at. So where are we? In 2026, what do you think about human cognition in our current universe?

    Marshall Kirkpatrick: Well, especially when you frame it up in Thriving on Overload terms. I mean, it was four or five long years ago that we last spoke, and the book that came out of it was just fantastic. I think it has some timeless qualities, and I think that the technology we’re working with today really makes a lot of those best practices and mental models and the whole toolkit more accessible than ever to more people. That’s what I hope. I think that, yeah, between individuals and organizations, there’s so much that, historically, someone like you or me or the people closest in our networks were willing and able to do and excited to do, that many other people said, “That sounds like a lot of work.” The bar is lower now, because a lot of just the raw cognitive processing can be outsourced into a technology that serves as a lever.

    Ross Dawson: Well, I mean, that idea of levers for these cognitive tools is interesting. I guess, the very crude way of saying it is, we’ve got inputs into our human brain, and then we are processing information. I’m just thinking out loud a bit here, but it’s like, okay, we have tools to be able to filter, to present, to find what is most relevant, to present it to us in the ways which are most useful—very obvious, like summarization, visualization.
Then as we are processing it ourselves, we have dialog, or we can have interlocutors who we can engage with and be able to refine and help our thinking. Does that sort of make sense, or how would you flesh that out? Marshall Kirkpatrick: Yeah, I mean, when you put it that way, it makes me think about Harold Jarche and his Seek, Sense, Share model, right? I think that AI, especially when connected to things like search and syndication and other traditional technologies, can impact all three of those stages. It can hypercharge our search. I think the archetypal example of that, on some level, feels like the combinatorial drug research being done, where just an otherwise cognitively uncontainable quantity of combinatorial possibilities between molecules can be sought out and experimented with for a desirable reaction. And then that sensing, or the pattern recognition that AI is so good at, is something that we do as humans—some of us better than others—and it’s a lifelong muscle to build and what have you. But the AI is really, really good at it, and so it’s a ladder to climb up in some of that sensing. And then the sharing component becomes so much easier with the rewriting capabilities—turn A into B, reformat something into a summary or a set of bullet points, or ideas and words into code. AI is just so excellent for that translation that makes new levels of sharing possible. Ross Dawson: That’s fantastic. Yeah, I had Harold on the show again in the Thriving on Overload days. But you’re right, that’s extremely relevant. Let’s dig into that. I love that you brought up that combinatorial search, which is so important. As opposed to going into Perplexity to do a search, it’s far more interesting to find the uncovered connections between things, which are relevant to what you’re doing. And that’s— Marshall Kirkpatrick: Absolutely. I remember reading, years ago, Dan Pink’s book “A Whole New Mind,” which preceded the generative AI era. 
But he said, if your kind of work is something that’s easily reproducible by computers, good luck to you. You really are going to need uniquely human practices in the future, and what exactly those are, I’m not sure, because the one that he identified, I don’t think has proven to be uniquely human. But I really appreciated learning about it from him, and that was what he called symphonic thinking, or the ability to draw connections between seemingly unconnected phenomena. So for many years, I have been doing a personal exercise with pen and paper that I call triangle thinking, where I’ll take three different phenomena—maybe that’s the owl outside my window, one of the notes that I’ve taken on paper, and something I come upon on the internet, or maybe it’s three very deliberately related things. I label them A, B, and C, and I ask, what might A have to say about B? What might B offer to A, and vice versa? I write out the six unidirectional connections between those things. And without fail, one, two, or three of those end up being real keepers, where I say, “Aha, that’s a really interesting idea. I’m going to take action on that.” And now, by the time I’ve got the letter B written out, an AI has done that ten times over. I like to do it both ways—still, both with AI and with my naked brain—but that combinatorial ideation, the generative combinatorial ideation, is, yeah. I’m curious what your thoughts and experience and hope for that might be. Ross Dawson: Well, there’s a prompt I use called “Apply Diverse Thinking,” where it generates extremely diverse perspectives on a topic—who might those very unusual people to think about something be, and then what would they think about this particular situation? Of course, there are a whole array of different thinking tools. There’s Marshall McLuhan’s tetrad, which is a little bit similar to your thing where, again, you can and should do it—well, not manually. What’s the manual equivalent of brain?
Marshall Kirkpatrick: Thoughtfully, perhaps. Yeah, good one—deliberately, manually. I mean, Azeem Azhar over at Exponential View uses a fountain pen and paper and will sometimes have his team come online and they’ll do two-hour thinking sessions with no AI allowed. They just get on, I believe, Zoom, and just think through things with pen and paper, individually and together. And then they’ll kick off OpenAI or what have you, and use all the tools afterwards. Ross Dawson: Yeah, well, a couple of things. Actually, research has shown that in brainstorming, it is better for everyone to ideate individually before doing it collectively. And of course, that’s unaided. I think there are analogs there where—actually, one of the frameworks I just released last week was basically to say, think it through for yourself before you ask the AI, because then you have a reference point. If not, you don’t have a reference point to say, “Well, what am I expecting it to do? Let me think it through for myself,” even if it’s just a little bit, as opposed to just going in blank—“All right, give me an answer.” Just that simple thing of thinking through for yourself first is enormous. What it does is, obviously, give you a reference point for that. And I’m going on a lot about appropriate trust at the moment—as in, trust the AI enough, but not too much, which I think is an absolutely critical capability. And part of it is being able to say, “Well, this is what I think it should be giving me.” Now you have a reference point for what it gives you.

    40 min
  5. Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension

    1 APR

    Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension

    “Fiction has this unprecedented power in tech spaces. The more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer.” –Nina Begus About Nina Begus Nina Begus is a researcher at the University of California, Berkeley, leading a research group on artificial humanities, and the founder of InterpretAI. She is author of Artificial Humanities: A Fictional Perspective on Language in AI, which received an Artificiality Institute Award, and First Encounters with AI. Website: ninabegus.com LinkedIn Profile: Nina Begus  Book: Artificial Humanities   What you will learn How ancient myths and archetypes influence our understanding and design of AI Why the humanities—literature, philosophy, and the arts—are crucial for developing more thoughtful and innovative AI systems The dangers of limiting AI concepts to human-centered metaphors and the need for new, more expansive imaginaries How metaphors shape our interactions with AI products and the user experiences companies choose to enable The challenges and possibilities of imagining forms of machine intelligence and language beyond human templates Why collaboration between technical experts and humanists opens new frontiers for creativity and responsible technology What makes writing and artistic creation uniquely human, and how AI amplifies—not replaces—these impulses Practical ways artists, engineers, and thinkers can work together to explore new relationships and futures with AI Episode Resources Transcript Ross Dawson: Nina, it is wonderful to have you on the show. Nina Begus: Thank you for having me. Ross Dawson: You’ve written this very interesting book, Artificial Humanities, and I think there’s a lot to dig into. But what does that mean? What do you mean by artificial humanities? 
Nina Begus: Well, this was really a new framework that I’ve developed while I was working on the relationship between AI and fiction, and I started working on this about 15 years ago when I realized that fiction has this unprecedented power in tech spaces. So this is how it all started, but then the more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer in this collaborative, generative approach that I’ve developed. I would say that now, as the field stands, it’s really a way to explore and demonstrate how humanities—as broad as science and technology studies, literary studies, film, philosophy, rhetoric, history of technology—how all of these fields can help us address the most pressing issues in AI development and use. And it’s been important to me that this approach uses traditional humanistic methods, theory, conceptual work, history, ethical approaches, but also that it’s collaborative and exploratory and experimental in this way that you can look back into the past and at the present to make a more informed choice about the future. You can speculate about different possibilities with it. Ross Dawson: Well, art is an expression of the human psyche, or even more, it is the fullest expression of humanity, and that’s what art tries to do. Also, I’m a deep believer in archetypes, human archetypes, and things which are intrinsic to who we are, and that’s something which you can only really uncover through the arts. Now we have arguably seen all these archetypes play out in real time, these modern myths being created right now in the stories being told of how AI is being created. So I think it’s extraordinarily relevant to look back at how we have depicted machines through our history and our relationship to them. 
Nina Begus: Yes, this is the reason why I started exploring this topic, actually, because there were so many ancient myths, these archetypal narratives that I’ve seen at the same time, both in technological products that were coming to the market and in the way technologists were thinking about it, and also in fictional products and films and novels in the way we imagined AI. I framed my book around the Pygmalion myth, but there are many, many other myths—Prometheus, Narcissus, the Big Brother narrative, and so on—that are very much doing work in the AI space. The reason why I chose the Pygmalion myth is because it’s so bizarre in many ways: you have this myth where a man creates an artificial woman, and then in the process of creation, falls in love with her. So there’s the creation of the human-like, and there’s also this relationality with the human-like. You would think this would not be a common myth, but quite the opposite—I found it everywhere I looked. It wasn’t called the Pygmalion myth, but the motif was there. I found it on the Silk Road, in ancient folk tales, in Native American folk tales, North Africa, and so on. So I think this kind of story is actually telling us a lot about how humans are not rational, how we have some very deeply embedded behaviors in us, and one of them is that we anthropomorphize everything, including machines. So I think this was a really important takeaway that we got already from the early days of AI with the first chatbot, Eliza. We’ve learned that that will be a feature of us relating to machines. Ross Dawson: So Joseph Campbell called the hero’s journey the monomyth, as in, there is a single myth. And I guess what you are doing here is—well, if you agree with that, which I’d be interested in—is that there are facets. The classic hero’s journey is quite simple, but there are facets of that monomyth, or something intrinsic to who we are, that is around this creation.
And in this case, as you say, this relation we have with what we have created. Would you relate that at all to Joseph Campbell’s work? Nina Begus: I haven’t thought about it in this way, because I thought about myths more or less as a storytelling issue, which here is definitely happening—the hero goes on a task, returns back changed, and maybe changes something in the community. The myths that I was looking into and the metaphors that I was exploring, primarily this huge metaphor of AI as a human mind, as an artificial reason—I think it works differently. It’s less of a narrative; it’s more of an imaginary of how or towards what we are building. I think this is a big problem, actually, because the imaginary around AI is very poor. What you get is mostly imagining machine intelligence on human terms, and a lot of people are bothered by that in the AI discourse—right, when you say the machine thinks, or the machine learns, or it has a mind, and some people go as far as to say it has consciousness. I think this kind of debate is actually not that productive. I think it’s more important to see how all these different AI products that we’ve created—and mostly when we talk about AI, people think of language models now—are very much designed as a sort of character, almost as an artificial human that, in literature, authors have been creating for a long time. So I think in that case, we can get back to a hero’s journey. But I think what I was looking at was actually more on the surface level of what kind of shortcuts we are using with these metaphors that we’re employing when building and using AI. I think the book makes a really good case showing that, yes, this is actually a very cultural technology. It’s very much informed by our imaginaries. One surprising part of it was really how hard it was to break out of this human mold. It was pretty much impossible to find examples of machines that are not exclusively human-like.
I think Stanislaw Lem is one of the rare writers who can consistently deliver this kind of imaginary. Even looking at more recent works, like popular films such as Hollywood’s Ex Machina or Her, you can see how the technologists themselves would say, “Oh, we were influenced by this film,” in a way that it affirmed their product development trajectory. You can see it now, at this moment, with OpenAI launching companionship. So in many ways, not a lot has changed. Ross Dawson: Yeah, there’s a lot to dig into there. I just want to go back—in a sense, Pygmalion is a metaphor, but it’s also a myth. It is a story: he creates a woman, and then falls in love with her, and then whatever happens from there. There is this, something happens, and then something else happens. That’s what a story is. I think that can impact the implicit metaphor, but coming back to the metaphor—so George Lakoff wrote the beautiful book Metaphors We Live By. I think the way the brain works is in metaphors and analogies to a very large degree. Some of those are enabling metaphors, and some of those are not very useful metaphors. I think part of your point is that some of the metaphors that we have for thinking about AI and machines are not useful. There may be, or we could create, some metaphors that are more useful. So, what are some of the most disabling metaphors, and what are some of the ones which could be more constructive? Nina Begus: Yes, so I think this main metaphor that I’ve mentioned—of AI as a human mind—is very limiting. I think it really limits the machinic potential to actually do something good with it. The fact that we’re still using the criteria that were made for humans, like different criteria developed on human language—the T

    35 min
  6. Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution

    25 MAR

    Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution

    “The center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment.” –Henrik von Scheel About Henrik von Scheel Henrik von Scheel is Co-Founder of advisory firm Strategic Intelligence, Chairman of the Climate Asset Trust, Vice Chairman of Regulatory Intelligence Committee, and Professor of Strategy, Arthur Lok Jack School of Business, among other roles. He is best known as originator of Industry 4.0, with many awards and extensive global recognition of his work. Website: von-scheel.com LinkedIn Profile: Henrik von Scheel   What you will learn Why human-centered AI is crucial for widespread societal prosperity The impact of AI hype cycles, media narratives, and the realities of technology adoption How equitable wealth distribution and capital allocation in AI can shape economic outcomes Risks around data ownership, privacy, and the importance of controlling your own data in the AI era Divergent approaches to AI regulation in the US, EU, and China, and the implications for global AI leadership The importance of trust calibration and intentional human-AI collaboration in practical applications How education and lifelong learning can be reshaped by AI to support individualized growth and mistake-enabled reasoning Opportunities for AI to amplify individual talents, address educational gaps, and enable more specialized and innovative skills Episode Resources Transcript Ross Dawson: Henrik, it is wonderful to have you on the show. Henrik von Scheel: Thank you very much for having me, Ross. Ross Dawson: So I think we’re pretty aligned in believing that we need to approach AI from a human-centered perspective and how it can bring us prosperity. So I’d just love to start with, how do you think about how we should be thinking about AI? Henrik von Scheel: Well, I think, like every technology that comes into play, it brings a lot of changes to us. 
But I think the center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment. So technology is something that we apply, but it’s the strategy on how we adapt with it that makes a difference. It’s never the technology itself. So I’m excited. It’s one of the most exciting periods for the industry and for us as people. Ross Dawson: There’s a phrase which I’ve heard you say more than once, around AI making us smarter, healthier, and wealthier. So if that’s the case, how do we frame it? How do we start to get on that journey? Henrik von Scheel: So I think what people experience today in AI is that they experience a lot of media hype—large language models, ChatGPT, and all of this—and they consume it from the media. So there’s a big hype around it, and I believe that AI is about to crash fundamentally, but crashing in technology is not bad, right? There are a lot of promises and then an inability to deliver, and then it crashes. What you hear in the media today is very much driven by a story of them raising funds because it’s so expensive, and so they are promising the world of everything and nothing, and the reality looks a little bit better. The world that they are presenting is that you will be replaced, and you will be happy, and you’ll be served by everything else. And somehow it will work out. We don’t know how, but it will work out. And that’s not a future that is really a real future. The future must include that everybody gets smarter, wealthier, and healthier. And when I say everybody, I mean not only the guys that have money, that they become more rich, or the middle class. It’s like everybody in society should get smarter from AI. That means part of the things that they need to learn or how human evolution works should be better, and it should make us healthier people and wealthier people.
So it should not be that we sacrifice our freedom, our privacy, our environment, or any other things that we put on the table just to get convenience back. That exchange we have done a couple of times, and it’s not working really well for humans, and it’s not a good trade for us, right? Ross Dawson: Yeah, I love that. And since it’s quite simple, you know, you can say it, it’s clear, it sounds good, and it is a really clear direction. But you’re actually pointing in a couple of ways there to capital allocation. So obviously, if you’re looking at the AI economic story, this is around this diversion of capital from other places to AI model development, data centers, deployment, and so on. But also, when you’re saying wealth here, this is around the distribution of wealth—where we’re allocating capital to AI development, but also from the way in which AI is developed, there will be creation of wealth. There is the real potential for productivity improvement. But then it’s about finding, how do we have the mechanisms for allocation of wealth or capital from that which is allocated? Let’s call it equitably. Henrik von Scheel: I’m a firm believer that this year, 35 to 45% of the money invested in AI will evaporate. Companies that have invested—they’re the early adopters—they have this format, so they’re rushing to it. From a company perspective, you always adopt the best practices. When it goes beyond the hype, the performance curve and adoption curve are low. For example, for AI, the simple version is there. You heard that Deloitte and McKinsey talked 10 years ago about robotic process automation like God’s gift to mankind in AI. Today, you don’t hear them talking about it, because you can download it for free—for HR, for forecasting, planning, budgeting, and so on, you can save 20 or 30%, and as an organization, you can do it yourself. You download two, three models, you test it, and you run it.
Good, okay, so that’s when you apply best practices. Then you have industry practices, like AI agents. So when you have AI agents for manufacturing, for industrial sectors, for energy sectors, they are nothing else than workflow optimization. You use robotic process optimization, you do a visualization on it, so it’s far more practical at a level, because you use the data they already have in the organizations under a simple line on the process flow, on the safety, security—it’s very much down at the level where they can apply it and use it. So this version of large language models, where you have this magic powder you spread over the organization and it’s totally working—it’s not really there. And then there’s the third leg that companies are quite aware of. It’s called Shadow AI, right? Shadow AI is because AI is the biggest infringement on intellectual capital within organizations. The reason why normal people are not allowed to look at pornography at their work is because of cybersecurity. It’s not that your boss doesn’t like you to look at pornography; it’s because of cybersecurity. It’s the same reason with AI—you should not be allowed to use the latest version of Copilot or large language models as a CFO or as a worker, because you’re exporting your own information outside. Copilot takes, every five seconds, a screenshot for the large language models’ learning. So from a corporate point of view, that’s the first thing—you should actually protect your own data so you can monetize your data in the future. From an economic point of view, if you go two, three steps behind this, you ask, okay, what is it that makes sense in this? There’s something really, really strange in this. Australia was built by building railways—they take 100 years to build, they also last 100 years. The infrastructure that lasts. So there’s a return on investment. You build streets, you build education systems—everything we build as humans, as society, has a lasting element to it.
Now, we build data centers that last three months until the chips need to be returned, or six months. So there’s no sense in that we are building data centers around the world where we capture all data. It has a volume of hundreds of trillions of dollars, and we need to exchange them at a rate between three to six months to maintain the data. And then you say, wow. And you do that via license models of large language models—the data can never, in its entire life cycle, be worth that much. So there’s a very strange element, because most of the entrepreneurs that go to large language models and use their solutions on Gemini and ChatGPT and so on, you say, okay, you are building your solution on large language models, but you don’t own the model. You don’t own the data. You don’t own your own data. So what are you doing? Ross Dawson: You have architectural choices, to a point, as to— Henrik von Scheel: Those are architectural choices, but you are limiting yourself. So the first element you always say, if my value is customizing a solution, your value is actually the data. So you must have a way to keep and maintain the data yourself. We can take another call to say how you apply AI and what the future of AI looks like, because AI today is very much focused on language models, and language models are the most limited version of AI science of all. It has the least data, but i

    47 min
  7. Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation

    18 MAR

    Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation

    “Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through.” –Joanna Michalska About Dr Joanna Michalska Dr Joanna Michalska is Founder of Ethica Group Ltd., Co-Founder of The Strategic Centre, and an advisor to boards on AI risk, ethics, and governance. She holds a PhD in Strategic Enterprise Risk Management and has twenty years’ experience leading enterprise risk, strategy and transformation at J.P. Morgan and HSBC. Website: ethicagroup.ai LinkedIn Profile: Dr Joanna Michalska   What you will learn How boards and executives can rethink governance and accountability in the age of AI The importance of embedding governance into organizational ecosystems for agile, responsible AI adoption How to map and assign human accountability for both automated and hybrid AI-human decisions The decision architecture needed for scalable oversight, intervention, and escalation pathways Practical examples of effective AI oversight in areas like fraud detection and exception handling Steps for complying with new regulations like the EU AI Act, including inventorying AI systems and risk tiering Why human qualities like emotional intelligence, psychological safety, and honest communication are critical in AI-driven organizations How leaders can foster organizational resilience and help teams adapt by building AI literacy, retraining, and supporting personal growth Episode Resources Transcript Ross Dawson: Joanna, it’s a delight to have you on the show. Joanna Michalska: Well, thank you for having me, Ross. Ross Dawson: So, AI is wonderful, but it also brings us into a whole lot of new territory where we have to be careful in various ways. I’d love to just hear, first of all, the big framing around how boards and executive teams need to be thinking about governance and accountability as AI is incorporated more and more into work and organizations. 
Joanna Michalska: I think we’re all very excited about the capability that exists today to help us enhance our performance and the way we think about strategic execution for our organizations. It has multidimensional consequences for how we adapt it. What’s very important right now is, as executives and boards think about accelerating their ambitions and growth plans, there needs to be awareness of two components. First, how do we as leaders, as humans, need to adapt to that new environment? There are new conditions, or perhaps existing conditions that really need to be enhanced. They’re very important to exist in order to be able to adapt and to scale. Second, do we actually have the right systems in place to enable that scale? I think it’s important to recognize that, yes, governance has always existed, but the way it existed was more as external supporting scaffolding, rather than being built into an organizational ecosystem. We also need to have the right leadership in place to ensure that decisions are made in the right way and the organization is designed in a much more robust, agile way. These two conditions are critical for not only increasing adoption, but also doing so in a safe and responsible way, especially as we expand our ambitions for the future. It’s exciting, but there’s also a lot of caution and a lot of questions being asked by executives at this time. Ross Dawson: Yes, and I guess the more we can address those concerns upfront, the more it enables us to do. I have this idea of minimum viable governance—at least having some governance in place so we don’t go too badly astray. But I always think of governance for transformation as: how do you set governance not as a brake to slow you, but in fact to accelerate you, because you have confidence in how you’re going about it? Joanna Michalska: Absolutely!
I think the mindset shift is very important, because governance, to your point, has always been seen as a compliance-driven thing that we must do because regulators require us to, and we need to demonstrate we have these policies and procedures in place and the right people in the right positions. Now, what the new environment is requiring of us—as executives, even board members—is a different set of responsibilities that really cannot be assumed as pre-existing. In this accelerated environment—let’s call it that, rather than just “AI,” because it’s so overused and can mean so many different things—where the automation rate is fast and overtaking everything, governance needs to change. It can’t be an afterthought or something we designed at one point in the past and now just try to fit into what’s happening. It really needs to become a well-designed, living organism. It needs to organically evolve. It needs to have the right people with the right accountability that is well understood. Accountability that was designed in the past needs to be looked at, discussed, and understood by all executives and across the organization, cross-functionally, to really work. Another important thing is to make sure executives have the right level of ownership and responsibility to ensure the conditions exist to enable that system to work. That’s a very difficult thing to do, because now you’re talking about having designed human oversight that doesn’t just become a “human in the loop,” but the right human in the right loop. By “right,” I mean: does this person, or these people, understand exactly what the output of the automated system is? How has this decision been made? Is there the right level of executive oversight when that decision is already made? How confident are we that we can say, with a level of certainty, “I’m comfortable with this, and this is not going to create negative consequences I’m not willing to accept”? 
That’s not an easy thing to do—to create those conditions of trust and safety. Ross Dawson: Particularly when there are so many decisions and outputs throughout the organization. Let’s go into decision making. I’ve built a little framework around going from humans-only through to AI-only decisions. Hopefully, there are no purely human decisions anymore; at least you can ask an AI, “Am I crazy or not?” even if it’s a human decision. Some decisions are already fully automated, but they still need oversight. You can bring in exceptions, conditional things, humans in the loop for approval, humans in the process, or build an explainability layer. There’s a whole array of different things. For every decision, you need to create the right way to implement it. In an organization with that profusion of different decisions and possible approaches, how can you actually make that happen? Joanna Michalska: Yeah, it’s a great question. Decisions are at the center of everything, and the quality of those decisions—and the whole architecture, how it’s designed for decisions to be made—is really important. It doesn’t stay static; it evolves as the organizational structure evolves. Questions like accountability—what does it look like, and what is the governance around accountability—are critical. Intervention capability is also very important, because with this level of automation, the whole design of how automated decisions are made raises multiple questions. Are these decisions made by old algorithms that are very simple, where the risk is determined by a set of rules? Is there clarity around who actually has the decision intervention rights in the organization, and how does that roll up to an executive layer? Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through. 
The quality of human decision-making, and determining when a human is able to review decisions made by complex systems—whether agentic or whatever structure the organization has—is critical at any level, whether it’s middle management, executive management, or board. There are different layers of how the architecture requires design and measurement.

Escalation pathways are another one. People will not naturally escalate if they fear negative consequences, retaliation, or any type of fear created because there isn’t psychological safety or trust within the organization. Even if there is an escalation protocol in place within the decision architecture, how do we know that people will raise the problem?

Ross Dawson: The accountability. Of course, only humans are accountable. Ultimately, the board and their executives are accountable. But what you’re suggesting, it sounds like, is that for every decision, there is somebody where you can say, “That person is accountable.” Obviously, it cascades up to who they’re reporting to, but there is human accountability for every decision made, even if it’s a thousand decisions where somebody has oversight and responsibility that those are the right decisions. I want to talk about escalation and how that might happen, but perhaps we can ground this with a couple of examples. What are some examples of decisions made in organizations—hopefully well-designed, or perhaps not so well-designed ones that haven’t worked out?

Joanna Michalska: Yes, I have a couple of good examples where an automated system allows review of multiple false positives, where a hum

  8. Cornelia Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone

    12 MAR


“You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.” –Cornelia C. Walther

About Cornelia C. Walther

Cornelia C. Walther is a Senior Fellow at the Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is the author of many books; her latest, Artificial Intelligence for Inspired Action (AI4IA), is due out shortly. She was previously a humanitarian leader, working for over 20 years at the United Nations driving social change globally.

Website: pozebeingchange
LinkedIn Profile: Cornelia C. Walther
University Profile: knowledge.wharton

What you will learn

How the ‘hybrid tipping zone’ between humans and AI shapes society’s future
The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI
The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration
Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence
What defines ‘prosocial AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet
The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’
Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI
Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of prosocial AI initiatives

Episode Resources

Transcript

Ross Dawson: Cornelia, it is fantastic to have you on the show.

Cornelia Walther: Thank you for having me, Ross.

Ross: So your work is wonderfully humans plus AI: looking at humans and humanity and how we can amplify the best as much as possible.
One really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is?

Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening.

At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI. We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves.

At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there is no medium or long-term evidence about how the consequences will play out.

Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry.

And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damage.

Now, you have these four phenomena happening in parallel, simultaneously, and mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now.

You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map?
Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But for us as humans, I would argue, it’s a very dangerous luxury to have.

Ross: I want to dig down quite a lot in there, but first, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI, so humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way. So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome?

Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way it goes. Right now, we consider AI like this big phenomenon that is happening to us. It is not—it is happening with, amongst, and because of us. I think that is the big change that needs to happen in our minds: AI is neutral at the end of the day. It’s a means to an end, not an end in itself.

We have an opportunity to shift from the old saying—which I think still holds true—garbage in, garbage out, towards values in, values out. But for that, we need to start offline and think: what are the values that we stand for? What is the world that we want to live in and leave behind? As you know, I’m a big defender of prosocial AI, which refers to AI systems that are deliberately tailored, trained, tested, and targeted to bring out the best in and for people and planet.

Ross: So again, lots of angles to dig into, but I just want to come back to that agency decay.
I created a framework around the cognitive impact of AI, going from, at the bottom, cognitive corruption and cognitive erosion, through neutral aspects, to the potential for cognitive augmentation. There are some individuals, of course, who are getting their thinking corrupted or eroded, as you’ve suggested; others are using it well and in ways which are potentially enhancing their cognition. So, there is what individuals can do to achieve that. There’s also what institutions, including education and employers, can do to provide the conditions where people are more likely to have a positive impact on cognition. But more broadly, the question is, again, how can we tip that more in the positive direction? Because there is not just the potential but the reality of cognitive erosion—or agency decay, as you describe it, which I think is a great phrase. So are there things we can do to move away from the widespread agency decay which we are in danger of?

Cornelia: Yeah, I think maybe we could marry our two frameworks, because the scale of agency decay that I have developed looks at experience, experimentation, integration, reliance, and addiction. I would say we have now passed the stage of experimentation, and most of us are very deeply into the field of integration. That means we’re just half a step away from reliance, where all of a sudden it becomes nearly unthinkable to write that email yourself, to do that calendar scheduling yourself, or to write that report from scratch. But that means we’re just one step away from full-blown addiction. At least now, we still have the possibility to compare the before and after, which comes back to us as an analog generation. Now is the time to invest in what I would call double literacy—a holistic understanding of our NI, our natural intelligence, but also our AI, our algorithmic intelligence.
That requires a double literacy—not just AI literacy or digital literacy, but an understanding of the complementarity of these two intelligences and their mutual influence, because neither of them happens in a vacuum anymore.

Ross: Absolutely. So what you described—experimentation, integration, reliance, addiction—sounds like a slippery slope. So, what are the things we can do to mitigate or push back against that, to use AI without being over-reliant, and so that experimentation leads to integration in a positive way? What can we do, either as individuals or as employers or institutions, to stop that negative slide and potentially push back to a more positive use and frame?

Cornelia: A very useful tool that I have found resonates with many people is the A frame, which looks at awareness, appreciation, acceptance, and accountability. I have an alliteration affinity, as you can see. The awareness stage looks at the mindset itself and really disciplines us not to slip down that slope, but to be aware of the steps we’re taking. The appreciation is about what makes us, in our own NI, unique, and where, in combination with certain external tools, we can be better. We all have gaps, we all have weaknesses, and that’s what we have to accept. The human being, even though it is now sometimes put in opposition to AI as the better one, is not perfect either. Probably you and most of the listeners have read Thinking, Fast and Slow by Daniel Kahneman and many others—there are libraries about human heuristics, human fallacies, our inability for actual rational thinking. But the fact that you have read a book does not mean that you are immune to that. We need to accept that this is part of our modus operandi, and in the same way as w

