Humans + AI

Ross Dawson

Exploring and unlocking the potential of AI for individuals, organizations, and humanity

  1. Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension

    1 DAY AGO

    Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension

    “Fiction has this unprecedented power in tech spaces. The more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer.” –Nina Begus

    About Nina Begus
    Nina Begus is a researcher at the University of California, Berkeley, leading a research group on artificial humanities, and the founder of InterpretAI. She is author of Artificial Humanities: A Fictional Perspective on Language in AI, which received an Artificiality Institute Award, and First Encounters with AI.
    Website: ninabegus.com
    LinkedIn Profile: Nina Begus
    Book: Artificial Humanities

    What you will learn
    - How ancient myths and archetypes influence our understanding and design of AI
    - Why the humanities—literature, philosophy, and the arts—are crucial for developing more thoughtful and innovative AI systems
    - The dangers of limiting AI concepts to human-centered metaphors and the need for new, more expansive imaginaries
    - How metaphors shape our interactions with AI products and the user experiences companies choose to enable
    - The challenges and possibilities of imagining forms of machine intelligence and language beyond human templates
    - Why collaboration between technical experts and humanists opens new frontiers for creativity and responsible technology
    - What makes writing and artistic creation uniquely human, and how AI amplifies—not replaces—these impulses
    - Practical ways artists, engineers, and thinkers can work together to explore new relationships and futures with AI

    Episode Resources

    Transcript
    Ross Dawson: Nina, it is wonderful to have you on the show.
    Nina Begus: Thank you for having me.
    Ross Dawson: You’ve written this very interesting book, Artificial Humanities, and I think there’s a lot to dig into. But what does that mean? What do you mean by artificial humanities?
    Nina Begus: Well, this was really a new framework that I developed while I was working on the relationship between AI and fiction. I started working on this about 15 years ago, when I realized that fiction has this unprecedented power in tech spaces. So this is how it all started, but then the more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer in this collaborative, generative approach that I’ve developed. I would say that now, as the field stands, it’s really a way to explore and demonstrate how humanities—as broad as science and technology studies, literary studies, film, philosophy, rhetoric, history of technology—how all of these fields can help us address the most pressing issues in AI development and use. And it’s been important to me that this approach uses traditional humanistic methods, theory, conceptual work, history, ethical approaches, but also that it’s collaborative and exploratory and experimental, in the way that you can look back into the past and at the present to make a more informed choice about the future. You can speculate about different possibilities with it.
    Ross Dawson: Well, art is an expression of the human psyche, or even more, it is the fullest expression of humanity, and that’s what art tries to do. Also, I’m a deep believer in archetypes, human archetypes, and things which are intrinsic to who we are, and that’s something which you can only really uncover through the arts. Now we have arguably seen all these archetypes play out in real time, these modern myths being created right now in the stories being told of how AI is being created. So I think it’s extraordinarily relevant to look back at how we have depicted machines through our history and our relationship to them.
    Nina Begus: Yes, this is the reason why I started exploring this topic, actually, because there were so many ancient myths, these archetypal narratives that I’ve seen at the same time, both in technological products that were coming to the market and in the way technologists were thinking about them, and also in fictional products and films and novels, in the way we imagined AI. I framed my book around the Pygmalion myth, but there are many, many other myths—Prometheus, Narcissus, the Big Brother narrative, and so on—that are very much doing work in the AI space. The reason why I chose the Pygmalion myth is because it’s so bizarre in many ways: you have this myth where a man creates an artificial woman, and then in the process of creation, falls in love with her. So there’s the creation of the human-like, and there’s also this relationality with the human-like. You would think this would not be a common myth, but quite the opposite—I found it everywhere I looked. It wasn’t called the Pygmalion myth, but the motif was there. I found it on the Silk Road, in ancient folk tales, in Native American folk tales, in North Africa, and so on. So I think this kind of story is actually telling us a lot about how humans are not rational, how we have some very deeply embedded behaviors in us, and one of them is that we anthropomorphize everything, including machines. So I think this was a really important takeaway that we got already from the early days of AI with the first chatbot, Eliza. We’ve learned that that will be a feature of us relating to machines.
    Ross Dawson: So Joseph Campbell called the hero’s journey the monomyth, as in, there is a single myth. And I guess what you are doing here is—well, if you agree with that, which I’d be interested in—is that there are facets. The classic hero’s journey is quite simple, but there are facets of that monomyth, or something intrinsic to who we are, that is around this creation. And in this case, as you say, this relation we have with what we have created. Would you relate that at all to Joseph Campbell’s work?
    Nina Begus: I haven’t thought about it in this way, because I thought about myth and myths more as a storytelling issue, which here is definitely happening—the hero goes on a task, returns changed, and maybe changes something in the community. The myths that I was looking into and the metaphors that I was exploring, primarily this huge metaphor of AI as a human mind, as an artificial reason—I think it works differently. It’s less of a narrative; it’s more of an imaginary of how or towards what we are building. I think this is a big problem, actually, because the imaginary around AI is very poor. What you get is mostly imagining machine intelligence on human terms, and a lot of people are bothered by that in the AI discourse—right, when you say the machine thinks, or the machine learns, or it has a mind, and some people go as far as to say it has consciousness. I think this kind of debate is actually not that productive. I think it’s more important to see how all these different AI products that we’ve created—and mostly when we talk about AI, people think of language models now—are very much designed as a sort of character, almost as an artificial human that, in literature, authors have been creating for a long time. So I think in that case, we can get back to a hero’s journey. But I think what I was looking at was actually more on the surface level of what kind of shortcuts we are using with these metaphors that we’re employing when building and using AI. I think the book makes a really good case showing that, yes, this is actually a very cultural technology. It’s very much informed by our imaginaries. One surprising part of it was really how hard it was to break out of this human mold. It was pretty much impossible to find examples of machines that are not exclusively human-like. I think Stanislaw Lem is one of the rare writers who can consistently deliver this kind of imaginary. Even looking at more recent works, like popular Hollywood films such as Ex Machina or Her, you can see how the technologists themselves would say, “Oh, we were influenced by this film,” in a way that affirmed their product development trajectory. You can see it now, at this moment, with OpenAI launching companionship. So in many ways, not a lot has changed.
    Ross Dawson: Yeah, there’s a lot to dig into there. I just want to go back—in a sense, Pygmalion is a metaphor, but it’s also a myth. It is a story: he creates a woman, and then falls in love with her, and then whatever happens from there. There is this, something happens, and then something else happens. That’s what a story is. I think that can impact the implicit metaphor, but coming back to the metaphor—George Lakoff wrote the beautiful book Metaphors We Live By. I think the way the brain works is in metaphors and analogies, to a very large degree. Some of those are enabling metaphors, and some of those are not very useful metaphors. I think part of your point is that some of the metaphors that we have for thinking about AI and machines are not useful. There may be, or we could create, some metaphors that are more useful. So, what are some of the most disabling metaphors, and what are some of the ones which could be more constructive?
    Nina Begus: Yes. So I think this main metaphor that I’ve mentioned—of AI as a human mind—is very limiting. I think it really limits the machinic potential to actually do something good with it. The fact that we’re still using the criteria that were made for humans, like different criteria developed on human language—the T

    35 min
  2. Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution

    25 MAR

    Henrik von Scheel on making people smarter, wealthier and healthier, biophysical data, resilient learning, and human evolution

    “The center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment.” –Henrik von Scheel

    About Henrik von Scheel
    Henrik von Scheel is Co-Founder of advisory firm Strategic Intelligence, Chairman of the Climate Asset Trust, Vice Chairman of the Regulatory Intelligence Committee, and Professor of Strategy, Arthur Lok Jack School of Business, among other roles. He is best known as the originator of Industry 4.0, with many awards and extensive global recognition of his work.
    Website: von-scheel.com
    LinkedIn Profile: Henrik von Scheel

    What you will learn
    - Why human-centered AI is crucial for widespread societal prosperity
    - The impact of AI hype cycles, media narratives, and the realities of technology adoption
    - How equitable wealth distribution and capital allocation in AI can shape economic outcomes
    - Risks around data ownership, privacy, and the importance of controlling your own data in the AI era
    - Divergent approaches to AI regulation in the US, EU, and China, and the implications for global AI leadership
    - The importance of trust calibration and intentional human-AI collaboration in practical applications
    - How education and lifelong learning can be reshaped by AI to support individualized growth and mistake-enabled reasoning
    - Opportunities for AI to amplify individual talents, address educational gaps, and enable more specialized and innovative skills

    Episode Resources

    Transcript
    Ross Dawson: Henrik, it is wonderful to have you on the show.
    Henrik von Scheel: Thank you very much for having me, Ross.
    Ross Dawson: So I think we’re pretty aligned in believing that we need to approach AI from a human-centered perspective and how it can bring us prosperity. So I’d just love to start with, how do you think about how we should be thinking about AI?
    Henrik von Scheel: Well, I think, like every technology that comes into play, it brings a lot of changes to us. But I think the center of any change that we’re doing in the fourth industrial revolution is always the human being, because humans have an ability to adopt, adapt to skills, and adjust to an environment. So technology is something that we apply, but it’s the strategy on how we adapt with it that makes a difference. It’s never the technology itself. So I’m excited. It’s one of the most exciting periods for the industry and for us as people.
    Ross Dawson: There’s a phrase which I’ve heard you say more than once, that AI should make us smarter, healthier, and wealthier. So if that’s the case, how do we frame it? How do we start to get on that journey?
    Henrik von Scheel: So I think what people experience today in AI is that they experience a lot of media hype—large language models, ChatGPT, and all of this—and they consume it from the media. So there’s a big hype around it, and I believe that AI is about to crash fundamentally, but crashing in technology is not bad, right? There are a lot of promises and then an inability to deliver, and then it crashes. What you hear in the media today is very much driven by a story of them raising funds because it’s so expensive, and so they are promising the world of everything and nothing, and the reality looks a little bit better. The world that they are presenting is that you will be replaced, and you will be happy, and you’ll be served by everything else. And somehow it will work out. We don’t know how, but it will work out. And that’s not a future that is really a real future. The future must include that everybody gets smarter, wealthier, and healthier. And when I say everybody, I mean not only the guys that have money, that they become more rich, or the middle class. It’s like everybody in society should get smarter from AI. That means part of the things that they need to learn or how human evolution works should be better, and it should make us healthier people and wealthier people. So it should not be that we sacrifice our freedom, our privacy, our environment, or any other things that we put on the table to get convenience back. That exchange we have done a couple of times, and it’s not working really well for humans, and it’s not a good trade for us, right?
    Ross Dawson: Yeah, I love that. And it’s quite simple, you know—you can say it, it’s clear, it sounds good, and it is a really clear direction. But you’re actually pointing in a couple of ways there to capital allocation. So obviously, if you’re looking at the AI economic story, this is around this diversion of capital from other places to AI model development, data centers, deployment, and so on. But also, when you’re saying wealth here, this is around the distribution of wealth—where we’re allocating capital to AI development, but also from the way in which AI is developed, there will be creation of wealth. There is the real potential for productivity improvement. But then it’s about finding, how do we have the mechanisms for allocation of wealth or capital from that which is allocated? Let’s call it equitably.
    Henrik von Scheel: I’m a firm believer that this year, 35 to 45% of the money invested in AI will evaporate. Companies that have invested—they’re the early adopters—they have this format, so they’re rushing to it. From a company perspective, you always adopt the best practices. When it goes beyond the hype, the performance curve and the adoption curve are low. For example, for AI, the simple version is there. You heard Deloitte and McKinsey talk 10 years ago about robotic process automation like it was God’s gift to mankind in AI. Today, you don’t hear them talking about it, because you can download it for free—for HR, for forecasting, planning, budgeting, and so on, you can save 20 or 30%, and as an organization, you can do it yourself. You download two, three models, you test it, and you run it. Good, okay, so that’s when you apply best practices. Then you have industry practices, like AI agents. So when you have AI agents for manufacturing, for industrial sectors, for energy sectors, they are nothing other than workflow optimization. You use robotic process optimization, you do a visualization on it, so it’s far more practical at that level, because you use the data they already have in the organizations, under a simple line on the process flow, on the safety, security—it’s very much down at the level where they can apply it and use it. So this version of large language models, where you have this magic powder you spread over the organization and it’s totally working—it’s not really there. And then there’s the third leg that companies are quite aware of. It’s called Shadow AI, right? Shadow AI is because AI is the biggest infringement on intellectual capital within organizations. The reason why normal people are not allowed to look at pornography at their work is because of cybersecurity. It’s not that your boss doesn’t like you to look at pornography; it’s because of cybersecurity. It’s the same reason with AI—you should not be allowed to use the latest version of Copilot or large language models as a CFO or as a worker, because you’re exporting your own information outside. Copilot takes a screenshot every five seconds for the large language models’ learning. So from a corporate point of view, that’s the first thing—you should actually protect your own data so you can monetize your data in the future. From an economic point of view, if you go two, three steps behind this, you ask, okay, what is it that makes sense in this? There’s something really, really strange in this. Australia was built by building railways—they take 100 years to build, and they also last 100 years. It’s infrastructure that lasts. So there’s a return on investment. You build streets, you build education systems—everything we build as humans, as a society, has a lasting element to it. Now, we build data centers that last three months, or six months, until the chips need to be replaced. So there’s no sense in the fact that we are building data centers around the world where we capture all data. It has a volume of hundreds of trillions of dollars, and we need to exchange them at a rate of between three and six months to maintain the data. And then you say, wow. And you do that via license models of large language models—the data can never, in its entire life cycle, be worth that much. So there’s a very strange element, because most of the entrepreneurs that go to large language models and use their solutions on Gemini and ChatGPT and so on—you say, okay, you are building your solution on large language models, but you don’t own the model. You don’t own the data. You don’t own your own data. So what are you doing?
    Ross Dawson: You have architectural choices, to a point, as to—
    Henrik von Scheel: Those are architectural choices, but you are limiting yourself. The first thing you always say is: if your value is customizing a solution, your value is actually the data. So you must have a way to keep and maintain the data yourself. We can take another call to cover how you apply AI and what the future of AI looks like, because AI today is very much focused on language models, and language models are the most limited version of AI science of all. It has the least data, but i

    47 min
  3. Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation

    18 MAR

    Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation

    “Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through.” –Joanna Michalska

    About Dr Joanna Michalska
    Dr Joanna Michalska is Founder of Ethica Group Ltd., Co-Founder of The Strategic Centre, and an advisor to boards on AI risk, ethics, and governance. She holds a PhD in Strategic Enterprise Risk Management and has twenty years’ experience leading enterprise risk, strategy and transformation at J.P. Morgan and HSBC.
    Website: ethicagroup.ai
    LinkedIn Profile: Dr Joanna Michalska

    What you will learn
    - How boards and executives can rethink governance and accountability in the age of AI
    - The importance of embedding governance into organizational ecosystems for agile, responsible AI adoption
    - How to map and assign human accountability for both automated and hybrid AI-human decisions
    - The decision architecture needed for scalable oversight, intervention, and escalation pathways
    - Practical examples of effective AI oversight in areas like fraud detection and exception handling
    - Steps for complying with new regulations like the EU AI Act, including inventorying AI systems and risk tiering
    - Why human qualities like emotional intelligence, psychological safety, and honest communication are critical in AI-driven organizations
    - How leaders can foster organizational resilience and help teams adapt by building AI literacy, retraining, and supporting personal growth

    Episode Resources

    Transcript
    Ross Dawson: Joanna, it’s a delight to have you on the show.
    Joanna Michalska: Well, thank you for having me, Ross.
    Ross Dawson: So, AI is wonderful, but it also brings us into a whole lot of new territory where we have to be careful in various ways. I’d love to just hear, first of all, the big framing around how boards and executive teams need to be thinking about governance and accountability as AI is incorporated more and more into work and organizations.
    Joanna Michalska: I think we’re all very excited about the capability that exists today to help us enhance our performance and the way we think about strategic execution for our organizations. It has multidimensional consequences for how we adapt to it. What’s very important right now is, as executives and boards think about accelerating their ambitions and growth plans, there needs to be awareness of two components. First, how do we as leaders, as humans, need to adapt to that new environment? There are new conditions, or perhaps existing conditions, that really need to be enhanced. These need to exist in order for us to be able to adapt and to scale. Second, do we actually have the right systems in place to enable that scale? I think it’s important to recognize that, yes, governance has always existed, but the way it existed was more as external supporting scaffolding, rather than being built into an organizational ecosystem. We also need to have the right leadership in place to ensure that decisions are made in the right way and the organization is designed in a much more robust, agile way. These two conditions are critical for not only increasing adoption, but also doing so in a safe and responsible way, especially as we expand our ambitions for the future. It’s exciting, but there’s also a lot of caution and a lot of questions being asked by executives at this time.
    Ross Dawson: Yes, and I guess the more we can address those concerns upfront, the more it enables us to do. I have this idea of minimum viable governance—at least having some governance in place so we don’t go too badly astray. But I always think of governance for transformation as: how do you set governance not as a brake to slow you, but in fact to accelerate you, because you have confidence in how you’re going about it?
    Joanna Michalska: Absolutely! I think the mindset shift is very important, because governance, to your point, has always been seen as a compliance-driven thing that we must do because regulators require us to, and we need to demonstrate we have these policies and procedures in place and the right people in the right positions. Now, what the new environment is requiring of us—as executives, even board members—is a different set of responsibilities that really cannot be assumed as pre-existing. In this accelerated environment—let’s call it that, rather than just “AI,” because it’s so overused and can mean so many different things—where the automation rate is fast and overtaking everything, governance needs to change. It can’t be an afterthought or something we designed at one point in the past and now just try to fit into what’s happening. It really needs to become a well-designed, living organism. It needs to organically evolve. It needs to have the right people with the right accountability that is well understood. Accountability that was designed in the past needs to be looked at, discussed, and understood by all executives and across the organization, cross-functionally, to really work. Another important thing is to make sure executives have the right level of ownership and responsibility to ensure the conditions exist to enable that system to work. That’s a very difficult thing to do, because now you’re talking about having designed human oversight that doesn’t just become a “human in the loop,” but the right human in the right loop. By “right,” I mean: does this person, or these people, understand exactly what the output of the automated system is? How has this decision been made? Is there the right level of executive oversight when that decision is already made? How confident are we that we can say, with a level of certainty, “I’m comfortable with this, and this is not going to create negative consequences I’m not willing to accept”? That’s not an easy thing to do—to create those conditions of trust and safety.
    Ross Dawson: Particularly when there are so many decisions and outputs throughout the organization. Let’s go into decision making. I’ve built a little framework around going from humans-only through to AI-only decisions. Hopefully, there are no purely human decisions anymore; at least you can ask an AI, “Am I crazy or not?” even if it’s a human decision. Some decisions are already fully automated, but they still need oversight. You can bring in exceptions, conditional things, humans in the loop for approval, humans in the process, or build an explainability layer. There’s a whole array of different things. For every decision, you need to create the right way to implement it. In an organization with that profusion of different decisions and possible approaches, how can you actually make that happen?
    Joanna Michalska: Yeah, it’s a great question. Decisions are at the center of everything, and the quality of those decisions—and the whole architecture, how it’s designed for decisions to be made—is really important. It doesn’t stay static; it evolves as the organizational structure evolves. Questions like accountability—what does it look like, and what is the governance around accountability—are critical. Intervention capability is also very important, because with this level of automation, the whole design of how automated decisions are made raises multiple questions. Are these decisions made by old algorithms that are very simple, where the risk is determined by a set of rules? Is there clarity around who actually has the decision intervention rights in the organization, and how does that roll up to an executive layer? Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through. The quality of human decision-making, and determining when a human is able to review decisions made by complex systems—whether agentic or whatever structure the organization has—is critical at any level, whether it’s middle management, executive management, or board. There are different layers of how the architecture requires design and measurement. Escalation pathways are another one. People will not naturally escalate if they fear negative consequences, retaliation, or any type of fear created because there isn’t psychological safety or trust within the organization. Even if there is an escalation protocol in place within the decision architecture, how do we know that people will raise the problem?
    Ross Dawson: The accountability. Of course, only humans are accountable. Ultimately, the board and their executives are accountable. But what you’re suggesting, it sounds like, is that for every decision, there is somebody where you can say, “That person is accountable.” Obviously, it cascades up to who they’re reporting to, but there is human accountability for every decision made, even if it’s a thousand decisions where somebody has oversight and responsibility that those are the right decisions. I want to talk about escalation and how that might happen, but perhaps we can ground this with a couple of examples. What are some examples of decisions made in organizations—hopefully well-designed, or perhaps not so well-designed and haven’t worked out?
    Joanna Michalska: Yes, I have a couple of good examples where an automated system allows review of multiple false positives, where a hum

    34 min
  4. Cornelia Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone

    12 MAR

    Cornelia Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone

    “You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.” –Cornelia C. Walther

    About Cornelia C. Walther
    Cornelia C. Walther is Senior Fellow at the Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is author of many books, with her latest book, Artificial Intelligence for Inspired Action (AI4IA), due out shortly. She was previously a humanitarian leader, working for over 20 years at the United Nations driving social change globally.
    Website: pozebeingchange
    LinkedIn Profile: Cornelia C. Walther
    University Profile: knowledge.wharton

    What you will learn
    - How the ‘hybrid tipping zone’ between humans and AI shapes society’s future
    - The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI
    - The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration
    - Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence
    - What defines ‘prosocial AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet
    - The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’
    - Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI
    - Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of prosocial AI initiatives

    Episode Resources

    Transcript
    Ross Dawson: Cornelia, it is fantastic to have you on the show.
    Cornelia Walther: Thank you for having me, Ross.
    Ross: So your work is very wonderfully humans plus AI, in being able to look at humans and humanity and how we can amplify the best as far as possible. One really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is?
    Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening. At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI. We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves. At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there is no medium- or long-term evidence about how the consequences will play out. Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry. And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damage. Now, you have these four phenomena happening in parallel, simultaneously, and mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now. You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map? Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But for us as humans, I would argue, it’s a very dangerous luxury to have.
    Ross: I just want to dig down quite a lot in there, but I want to come back to this. So, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI, so humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way. So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome?
    Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way it goes. Right now, we consider AI like this big phenomenon that is happening to us. It is not—it is happening with, amongst, and because of us. I think that is the big change that needs to happen in our minds, which is that AI is neutral at the end of the day. It’s a means to an end, not an end in itself. We have an opportunity to shift from the old saying—which I think still holds true—garbage in, garbage out, towards values in, values out. But for that, we need to start offline and think: what are the values that we stand for? What is the world that we want to live in and leave behind? As you know, I’m a big defender of prosocial AI, which refers to AI systems that are deliberately tailored, trained, tested, and targeted to bring out the best in and for people and planet.
    Ross: So again, lots of angles to dig into, but I just want to come back to that agency decay. I created a framework around the cognitive impact of AI, going from, at the bottom, cognitive corruption and cognitive erosion, through to neutral aspects, to the potential for cognitive augmentation. There are some individuals, of course, who are getting their thinking corrupted or eroded, as you’ve suggested; others are using it well and in ways which are potentially enhancing their cognition. So, there is what individuals can do to be able to do that. There’s also what institutions, including education and employers, can do to provide the conditions where people are more likely to have a positive impact on cognition. But more broadly, the question is, again, how can we tip that more in the positive direction? Because, absolutely, there is not just the potential but the reality of cognitive erosion—or agency decay, as you describe it, which I think is a great phrase. So are there things we can do to move away from the widespread agency decay which we are in danger of?
    Cornelia: Yeah, I think maybe we could marry our two frameworks, because the scale of agency decay that I have developed looks at experience, experimentation, integration, reliance, and addiction. I would say we have now passed the stage of experimentation, and most of us are very deeply into the field of integration. That means we’re just half a step away from reliance, where all of a sudden it becomes nearly unthinkable to write that email yourself, to do that calendar scheduling yourself, or to write that report from scratch. But that means we’re just one step away from full-blown addiction. At least now, we still have the possibility to compare the before and after, which comes back to us as an analog generation. Now is the time to invest in what I would call double literacy—a holistic understanding of our NI, our natural intelligence, but also our algorithmic intelligence, our AI. That requires a double literacy—not just AI literacy or digital literacy, but the complementarity of these two intelligences and their mutual influence, because neither of them happens in a vacuum anymore.
    Ross: Absolutely. So what you described—experiment, integration, reliance, addiction—sounds like a slippery slope. So, what are the things we can do to mitigate or push back against that, to use AI without being over-reliant, and where that experiment leads to integration in a positive way? What can we do, either as individuals or as employers or institutions, to stop that negative slide and potentially push back to a more positive use and frame?
    Cornelia: A very useful tool that I have found resonates with many people is the A frame, which looks at awareness, appreciation, acceptance, and accountability. I have an alliteration affinity, as you can see. The awareness stage looks at the mindset itself and really disciplines us not to slip down that slope, but to be aware of the steps we’re taking. The appreciation is about what makes us, in our own NI, unique, and the appreciation of where, in combination with certain external tools, it can be better. We all have gaps, we all have weaknesses, and that’s what we have to accept. The human being, even though it is now sometimes put in opposition to AI as the better one, is not perfect either. Probably you and most of the listeners have read Thinking, Fast and Slow by Daniel Kahneman, and many others—there are libraries about human heuristics, human fallacies, our inability for actual rational thinking. But the fact that you have read a book does not mean that you are immune to that. We need to accept that this is part of our modus operandi, and in the same way as w

    36 min
  5. Ross Dawson on Humans + AI agentic systems

    4 MAR

    Ross Dawson on Humans + AI agentic systems

    “Transparency has to be built into the structure so that you know where the decision is made, what authorizations are given, and have an audit trail visible so you can always see what is going on.” –Ross Dawson

    About Ross Dawson
    Ross Dawson is a futurist, keynote speaker, strategy advisor, author, and host of the Amplifying Cognition podcast. He is Chairman of the Advanced Human Technologies group of companies and Founder of Humans + AI startup Informivity. He has delivered keynote speeches and strategy workshops in 33 countries and is the bestselling author of 5 books, most recently Thriving on Overload.
    Website: Collaborating with AI Agents; Intelligent AI Delegation; Agentic Interactions
    LinkedIn Profile: Ross Dawson

    What you will learn
    - How human-AI teams outperform human-only teams in productivity and efficiency
    - The crucial role of understanding AI strengths and limitations when designing collaborative workflows
    - Ways AI collaboration can lead to output homogenization and strategies to preserve human creativity
    - Key principles of intelligent delegation within multi-agent AI systems, including dynamic assessment and trust
    - Understanding accountability, transparency, and auditability in decision-making with autonomous AI agents
    - How user intent and ‘machine fluency’ impact the effectiveness of AI agents in economic and organizational contexts
    - The emergence of an ‘agentic economy’ and its implications for fairness, capability gaps, and representation
    - Counterintuitive findings on AI-mediated negotiation, particularly advantages for women, and what it reveals about AI-human interaction

    Episode Resources

    Transcript
    Ross Dawson: This episode is a little bit different. Instead of doing an interview with somebody remarkable, as usual, today I’m going to just share a bit of an update and then share insights from three recent research papers that dig into something which I think is exceptionally important: how humans work with AI agentic systems. And we’ll look at a few different layers of that, from how small humans plus agent teams work, through to how we can delegate decisions to AI, through to some of the broader implications. But first, a bit of an update.
    2026 seems to be moving exceptionally fast. It’s a very interesting time to be alive, and I think it’s even pretty hard to see what the end of this year is going to look like. For me, I am doing my client work as usual. I’ve got keynotes around the world, usually on various things related to AI, the future of AI, humans plus AI, and so on, with a few industry-specific ones in financial services and so on. I’m also doing some work as an advisor on AI transformation programs, helping organizations and their leaders to frame the pathways, drawing on my AI roadmap framework to look at the phases, mapping those out, working out the issues, and being able to guide and coach the leaders to do that effectively.
    But the rest of my time is focused on three ventures, and I’ll share some more about these later on. These are fairly evidently tied to my core interests. Fractious is our AI for strategy app. This was really building a way in which we can capture the detailed nuance of the strategic thinking of the leaders of the organization, to disambiguate it, to clarify it, and enable that to then be built into strategic options, strategic hypotheses, and to be able to evolve effectively. That’ll be in beta soon. Please reach out if you’re interested in being part of the beta program, and then it’ll go to market. So I’m deeply involved in that. We also have our Thought Weaver software, rebuilding previous software we had already built around AI-augmented thinking workflows. That’s more of an individual tool, and it will be going into beta in the next weeks. So again, go to Thought Weaver. Actually, don’t—the website isn’t updated yet—but I’ll let you know when it’s out, or stay posted for updates on that. And I’m also building an enterprise course on humans plus AI teaming. It’s my fundamental belief that we’ve kind of been through the phase of augmentation of individuals, and we still need to work hard at doing that better, but the next phase for organizations is to focus on teams. How do you work with teams where we have both human members and AI agentic members? It creates a whole different series of dynamics and new skills and capabilities. It really calls for learning how to participate in humans plus AI teams and how to lead humans plus AI teams. That is again going into the first few test organizations in the next month or so. So again, just let me know.
    So today what we’re going to look at is this theme: teams of humans working with AI agents. Not individual AI as in chat, but where we have a lot of agents with various degrees of autonomy, and also agentic systems where these agents are interacting with each other as well as with humans. There are three papers which I want to talk about; I’ll just give you a quick overview, and please go and check out the papers in more detail if you’re interested. There’ll be links in the show notes.
    First is Collaborating with AI Agents: A Field Experiment on Teamwork, Productivity and Performance, by Harang Ju at Johns Hopkins and Sinan Aral at MIT. This was an experiment with over 2,300 participants who were working on creating advertisements. They had a whole array of teams—human-human teams and human-AI teams, quite small, often just duos, and so on—working on creating those ads, which were then assessed in terms of quality and how they worked. A few particularly interesting findings from that. Just having a human-AI team significantly enhanced performance compared to human-only teams: they were able to move faster and to complete more of their tasks, and the quality was strong. But there’s a phrase which is commonly used around the jagged frontier of AI capability, and it was quite clear that there were some domains where AI does very well and others where it didn’t. So this comes to the point that the design of the tasks, the design of the human-AI systems, and also the understanding by the human users of what AI is good at or not, are fundamental. In some cases, when AI was used in some domains, such as image quality, it actually decreased quality. So we need to understand where and how to apply AI along this jagged frontier and design the systems around that. This changes the role of the humans, of course. Humans then tend to delegate more. One of the things they tested for is how you behave differently if you know your teammate is an AI, as opposed to not knowing whether it is a human or an AI. And it changes: people become more task-oriented, they use social cues less in interacting, and they essentially become more efficient. But some of these social cues which are valuable in human-human collaboration started to disappear. And this automation process meant that there was not, in the end, as much creative diversity. Now, I’ve often pointed to the role of AI in creativity tasks. It depends fundamentally on the architecture—where the AI sits in the process, for example whether initial ideas are generated and then sorted and filtered by humans. But in this particular structure, they found that humans plus AI teams started to create more and more similar outputs. This homogenization of outputs in human-AI teams was very notable and significant. So this again creates a design factor for how we build human-AI systems that do not lead to homogeneous output, making sure that human diversity is maintained. Often that can be done by having human outputs first, without AI then blunting or narrowing the breadth of the creative outputs of humans.
    The second paper I’d like to point to is called Intelligent AI Delegation, from a team at Google DeepMind. This addresses the point that we now have not just single AI agents to delegate decisions or problems to, but systems of AI agents. And this creates a different challenge. The key point, as I see it, is that when you are delegating tasks, it’s more than just saying, okay, which agent gets the task. You have to understand responsibility: where does accountability reside? Who is responsible for that? You need clarity around the roles of the agents, the boundaries of what they can and cannot do, clarity of intent and how that is communicated and cascaded through the agents, and the critical role of trust and appropriate degrees of trust in the systems. So this means that we have to define the different characteristics of the task, and the paper goes through quite a few different characteristics. A few of the critical ones were the degree of uncertainty around the task—obviously, if it is very clear, it can be appropriately delegated, but many tasks and problems are uncertain, and so this creates a different dynamic—and whether it is verifiable, as in you know you have high-quality information, or whether that’s the degre

    19 min
  6. Davide Dell’Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics

    25 FEB

    Davide Dell’Anna on hybrid intelligence, guidelines for human-AI teams, calibrating trust, and team ethics

    “In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation.” –Davide Dell’Anna

    About Davide Dell’Anna
    Davide Dell’Anna is Assistant Professor of Responsible AI at Utrecht University, and a member of the Hybrid Intelligence Centre. His research focuses on how AI can cooperate synergistically and proactively with humans. Davide has published a wide range of leading research in the space.
    Website: davidedellanna.com
    LinkedIn Profile: Davide Dell’Anna
    University Profile: Davide Dell’Anna

    What you will learn
    - The core concept of hybrid intelligence as collaborative human-AI teaming, not replacement
    - Why effective hybrid teams require acknowledging and leveraging both human and AI strengths and weaknesses
    - How lessons from human-human and human-animal teams inform better design of human-AI collaboration
    - Key differences between humans and AI in teams, such as accountability, replaceability, and identity
    - The importance of process-oriented evaluation, including satisfaction, trust, and adaptability, for measuring hybrid team effectiveness
    - Why appropriately calibrated trust and shared ethics are central to performance and cohesion in hybrid teams
    - The shift from explainability to justifiability in AI, emphasizing actions aligned with shared team norms and values
    - New organizational roles and skills—like team facilitation and dynamic team design—needed to support successful human-AI collaboration

    Episode Resources

    Transcript
    Ross Dawson: Hi Davide. It’s wonderful to have you on the show.
    Davide Dell’Anna: Hi Ross, nice to meet you. Thank you so much for having me.
    Ross: So you do a lot of work around what you call hybrid intelligence, and I think that’s pretty well aligned with a lot of the topics we have on the podcast. But I’d love to hear your definition and framing—what is hybrid intelligence?
    Davide: Well, thank you so much for the question. Hybrid intelligence is a new paradigm, or a paradigm that tries to move the public narrative away from the common focus on replacement—AI or robots taking over our jobs. While that’s an understandable fear, scientifically and societally I think it’s more interesting and relevant to think of humans and AI as collaborators. In this sense, human and AI means a synergy where teams of humans and AI together lead to superior outcomes than either the human or the AI operating in isolation. In a human-AI team, members can compensate for each other’s weaknesses and amplify each other’s strengths. The goal is not to substitute human capabilities, but to augment them. This immediately moves the discussion from “what can the AI do to replace me?” to “how can we design the best possible team to work together?” I think that’s the foundation of the concept of hybrid intelligence. So hybrid intelligence, per se, is the ultimate goal. We aim at designing or engineering these human-AI teams so that we can effectively and responsibly collaborate together to achieve this superior type of intelligence, which we then call hybrid intelligence.
    Ross: That’s fantastic. It’s extremely aligned with the humans plus AI thesis. That’s very similar to what I might have said myself, not using the term hybrid intelligence, but humans plus AI to say the same thing. We want to dive into the humans-AI teaming specifically in a moment. But in some of your writing, you’ve commented that, while others are thinking about augmentation in various ways, these approaches are not necessarily as holistic as they could be. So what do you think is missing in some of the other ways people are approaching AI as a tool of augmentation?
    Davide: Yeah, so I think when you look at the literature—as a computer scientist myself, I notice how easily I fall into the trap of only discussing AI capabilities. When I talk about AI or even human-AI teams, I end up talking about how I can build the AI to do this, or how I can improve the process in this way. Most of the literature does that as well. There’s a technology-centric perspective to the discussion of even human-AI teams. We try to understand what we can build from the AI point of view to improve a team. But if you think of human-AI teams in this way, you realize that this significantly limits our vocabulary and our ability to look at the team from a broader, system-level perspective, where each member—including and especially human team members—is treated individually, and their skills and identity are considered and leveraged. So, if you look at the literature, you often end up talking about how to add one feature to the AI or how to extend its feature set in other ways. But what people often miss is looking at the weaknesses and strengths of the different individuals, so that we can engineer for their compensation and amplification. Machines and people are fundamentally different: humans are good at some things, AI is good at others, and we shouldn’t try to negate or hide or be ashamed of the things we’re worse at than AI, and vice versa. Instead, we should leverage those differences. For instance, just as an example, consider memory and context awareness. At the moment, at least, AI is much more powerful in having access to memory and retrieving it in a matter of seconds—AI can access basically the whole internet. But often, when you talk nowadays with these language model agents, they are completely decontextualized. They talk in the same way to millions across the world and often have very little clue about who the specific person is in front of them, what that person’s specific situation is—maybe they’re in an airport with noise, or just one minute from giving a lecture and in a rush. The types of things you might say also change based on the specific situation. While this is a limitation of AI, we shouldn’t forget that there is the human there. The human has that contextual knowledge. The human brings that crucial context. Sometimes we tend to say, “Okay, but then we can build an AI that can understand the context around it,” but we already have the human for that.
    Ross: Yes, yes—that’s what I call the framing. Framing should come from the human, because that’s what we understand—including the ethical and other human aspects of the context, as well as that broader frame. It’s interesting because, in talking about hybrid intelligence, I think many who come to augmentation or hybrid intelligence think of it on an individual basis: how can an individual be augmented by AI, or, for example, in playing various games or simulations, humans plus AI teaming together, collaborating. But a team means you have multiple humans and quite probably multiple AI agents. So, in your research, what have you observed if you’re comparing a human-only team and a team which has both human and AI participants? What are some of the things that are the same, and what are some of the things that are different?
    Davide: Yes, this is a very interesting question. We’ve recently done work in collaboration with a number of researchers from the Hybrid Intelligence Centre, which I am part of. If you’re not familiar with it, the Hybrid Intelligence Centre is a collaboration that involves practically all the Dutch universities focused on hybrid intelligence, and it’s a long project—lasting around 10 years. One of the works we’ve done recently is to try to study to what extent established properties of effective human teams could be used to characterize human-AI teams. We looked at instruments that people use in practice to characterize human teams. One of them is called the Team Diagnostic Survey, which is an instrument people use to diagnose the strengths and weaknesses of human teams. It includes a number of dimensions that are generally considered important for effective human teams. These include aspects like members demonstrating their commitment to the team by putting in extra time and effort to help it succeed, the presence of coaches available in the team to help the team improve over time, and things related to the satisfaction of the members with the team, with the relationships with other members, and with the work they’re doing. What we did was to study the extent to which we could use these dimensions to characterize human-AI teams. We looked at different types of configurations of teams—some had one AI agent and one human, others had multiple agents and multiple humans, for example in a warehouse context where you have multiple robots helping out in the warehouse that have to cooperate and collaborate with multiple humans. We tried to understand whether the properties of—by the way, we also looked at an interesting case, which is human-animal teams, which is another example that’s interesting in the context of hybrid intelligence. You very often see in human-animal interaction basically two species—two alien species—interacting and collaborating with each other. They often manage to collaborate pretty effectively, and there is an awareness of what both the humans and the animals are doing that is fascinating, at least for me. So, we tried to analyze whether properties of human teams could be understood when looking at human-AI teams or hybrid teams, and to what extent. One o

    36 min
  7. Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy

    18 FEB

    Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy

“You can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with.” – Felipe Csaszar About Felipe Csaszar Felipe Csaszar is the Alexander M. Nick Professor and chair of the Strategy Area at the University of Michigan’s Ross School of Business. He has published and held senior editorial roles in top academic journals including Strategy Science, Management Science, and Organization Science, and is co-editor of the upcoming Handbook of AI and Strategy. Website: papers.ssrn.com LinkedIn Profile: Felipe Csaszar University Profile: Felipe Csaszar What you will learn How AI transforms the three core cognitive operations in strategic decision making: search, representation, and aggregation. The powerful ways large language models (LLMs) can enhance and speed up strategic search beyond human capabilities. The concept and importance of different types of representations—internal, external, and distributed—in strategy formulation. How AI assists in both visualizing strategists’ mental models and expanding the complexity of strategic frameworks. Experimental findings showing AI’s ability to generate and evaluate business strategies, often matching or outperforming humans. Emerging best practices and challenges in human-AI collaboration for more effective strategy processes. The anticipated growth in framework complexity as AI removes traditional human memory constraints in strategic planning. Why explainability and prediction quality in AI-driven strategy will become central, shaping the future of strategic foresight and decision-making. Episode Resources Transcript Ross Dawson: Felipe, it’s a delight to have you on the show. Felipe Csaszar: Oh, the pleasure is mine, Ross. Thank you very much for inviting me. Ross Dawson: So many, many interesting things for us to dive into. But one of the themes that you’ve been doing a lot of research and work on recently is the role of AI in strategic decision making. Of course, humans have traditionally been the ones responsible for strategy, and presumably will continue to be for some time. However, AI can play a role. Perhaps set the scene a little bit first in how you see this evolving. Felipe Csaszar: Yeah, yeah. So, as you say, strategic decision making has so far always been a human task. People have been in charge of picking the strategy of a firm, of a startup, of anything, and AI opens the possibility that humans could now be helped by AI, and maybe at some point AI will be designing the strategies of companies. One way of thinking about why this may be the case is to think about the cognitive operations involved in strategic decision making. Before AI, that was my research—how people came up with strategies. There are three main cognitive operations. One is to search: you try different things, you try different ideas, until you find one which is good enough—that is searching. Another is representing: you think about the world from a given perspective, and from that perspective there’s a clear solution, at least for you. That’s another way of coming up with strategies. And then another one is aggregating: you have different opinions from different people, and you have to combine them. This can be done in different ways—a typical one is to use majority rule, or sometimes a unanimity rule. In reality, the way in which you combine ideas is much more complicated than that: you take parts of ideas, you pick and choose, and you combine something.
So there are these three operations: search, representation, and aggregation. And it turns out that AI can change each one of them. Let’s go one by one. So, search: the current LLMs know much more about any domain than most people. There’s no one who has read as much as an LLM, they are quite fast, and you can have multiple LLMs doing things at the same time. So LLMs can search faster than humans, and farther away, because you can only search among things you are familiar with, while an LLM is familiar with many, many things that we are not—a big effect on search. Then, representation: a typical pre-AI example of the value of representations is the story of Merrill Lynch. The big idea behind Merrill Lynch was to ask how good a bank would look if it were run like a supermarket. That’s a shift in representations. You know what a bank looks like, but now you’re thinking of the bank from the perspective of a supermarket, and that leads to a number of changes in how you organize the bank. That was the big idea of Charles Merrill, and the rest is history. That’s very difficult for a human—to change representations. People don’t like changing; it’s very difficult for them, while for an AI it’s automatic, it’s free. You change the prompt, and immediately the problem is looked at from a different representation. And then the last one is aggregating. You can aggregate with AI virtual personas. For example, you can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with. And now you can aggregate those. Those are just examples, because there are different ways of changing search, representation, and aggregation, but it’s very clear that AI, at least the current version of AI, has the potential to change these three cognitive operations of strategy. Ross Dawson: That’s fantastic. It’s a novel framing—search, representation, aggregation. There are many ways of framing strategy and the strategy process, and that one is, I think, quite distinctive and very insightful, because it goes to the cognitive aspect of strategy. There’s a lot to dig into there, but I’d like to start with representation. I think of it as mental models—you can have implicit and explicit mental models, and also individual and collective mental models, which goes to the aggregation piece. But when you talk about representation—I mean, you mentioned a metaphor there, which, of course, is a form of representing a strategic space. There are, of course, the classic two-by-twos. There are also the mental models classically used in investment strategy. So what are the ways in which we can think about representation from a human cognitive perspective, before we look at how AI can complement it? Felipe Csaszar: I think it’s important to distinguish—again, there are three different types of representations. There are internal representations: how people think in their minds about a given problem. That is usually learned through experience, by doing things many times, by working at a given company—you start looking at the world from a given perspective. Part of your internal representations you can also learn at school, like the typical frameworks. Then there are external representations—things that are outside our minds that help us make decisions.
In strategy, essentially everything that we teach consists of external representations. The most famous is Porter’s Five Forces, a way of thinking about what affects the attractiveness of an industry in terms of five forces. It is useful to have this as an external representation; it has many benefits, because you can write it down, you can externalize it, and once it’s outside of your mind, you free up space in your mind to think about other things, to consider other dimensions apart from those five. External representations expand the working memory you have available to think about strategy. Visuals in general are typical external representations in strategy. They also play a very important role because strategy usually involves multiple people, so you want everybody to be on the same page, and a great way of doing that is by having a visual so that we all see the same thing. So we have internal representations—what’s in your mind; external representations—what you can draw, essentially. And then there are distributed representations, where multiple people—and now, with AI, artifacts and software—share the whole representation among them, so each holds part of it. Then you need to aggregate those partial representations; some of them can be internal, some of them external, but they are aggregated in a given way. So representations are really core to strategic decision making: all strategic decisions come from a given set of representations. Ross Dawson: Yeah, that’s fantastic. So looking at—again, so much to dive into—but thinking about the visual representations, this is a core interest of mine. Can you talk a little bit about how AI can assist? There’s an iterative process. Of course, visualization can be quite simple—a simple framework—or visuals can provide metaphors. There are wonderful strategy roadmaps which are laid out visually, and so on. So what are the ways in which you see AI being able to assist in that, both in the two-way process of the human being able to make their mental model explicit in a visualization, and the visualization being able to inform the internal representation of the strategist? Are there any particular ways yo
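As an illustration of how Felipe’s three operations might map onto LLM workflows: search becomes parallel generation, a representation shift becomes a prompt swap, and aggregation becomes a virtual board of personas combining the outputs. The Python sketch below is a hedged illustration under those assumptions; the call_llm stub, framings, and personas are invented for the example, not any system Felipe describes.

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any LLM API call; replace with a real client."""
        return f"[response to: {prompt[:50]}...]"

    # Representation: the same problem re-asked under different framings; for an
    # LLM, a representation shift is just a prompt change.
    FRAMINGS = ["a bank run like a supermarket", "a bank run like a software platform"]

    # Aggregation: a virtual board of directors with different expertises.
    BOARD = ["a CFO persona", "a CMO persona", "a customer-advocate persona"]

    def search(problem: str, n: int = 3) -> list:
        # Search: generate several candidate strategies (could run in parallel).
        return [call_llm(f"Propose strategy #{i + 1} for: {problem}") for i in range(n)]

    def represent(problem: str) -> list:
        return [call_llm(f"Viewing this as {f}, propose a strategy: {problem}")
                for f in FRAMINGS]

    def aggregate(options: list) -> str:
        # Each persona reviews all options, then the reviews are merged into
        # one recommendation.
        reviews = [call_llm(f"As {p}, rank these options: {options}") for p in BOARD]
        return call_llm(f"Combine these reviews into one recommendation: {reviews}")

    problem = "grow retail deposits next year"
    print(aggregate(search(problem) + represent(problem)))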

    38 min
  8. Lavinia Iosub on AI in leadership, People & AI Resources (PAIR), AI upskilling, and developing remote skills

    11 FEB

    Lavinia Iosub on AI in leadership, People & AI Resources (PAIR), AI upskilling, and developing remote skills

“In this next era, the key to leadership will be blending systems thinking and AI automation—at least being aware of what you can do with it—with empathy, discernment, connection, and clarity.” – Lavinia Iosub About Lavinia Iosub Lavinia Iosub is the Founder of Livit Hub Bali, which has been named one of Asia’s Best Workplaces, and Remote Skills Academy, which has enabled 40,000+ youths globally to develop digital and remote work skills. She has been named a Top 50 Remote Innovator and a Top Voice in Asia Pacific on the future of work, and her work has been featured in the Washington Post, CNET, and other major media. Website: lavinia-iosub.com liv.it LinkedIn Profile: Lavinia Iosub X Profile: Lavinia Iosub What you will learn How AI can augment leadership decision-making by enhancing cognitive processes rather than replacing human judgment Strategies for integrating AI into teams, focusing on volunteer-driven adoption and fostering AI fluency without forcing uptake The importance of continuous experimentation and knowledge sharing with AI tools for organizational growth and team building Why successful leadership in the AI era requires blending systems thinking, empathy, and a focus on human-AI collaboration How organizational value is shifting from knowledge accumulation toward skills like curiosity, adaptability, and discernment The concept of “people and AI resources” (PAIR), emphasizing the quality of partnership between humans and AI for organizational effectiveness Critical skills for future workers in an AI-driven world, such as AI orchestration, emotional clarity, and the ability to direct AI outputs with taste and judgment Practical lessons from the Remote Skills Academy in democratizing access to digital and AI skills for a diverse range of job seekers and business owners Episode Resources Transcript Ross Dawson: Lavinia, it is awesome to have you on the show. Lavinia Iosub: Thank you so much for having me, Ross. Ross Dawson: Well, we’ve been planning it for a long time. We’ve had lots of conversations about interesting stuff. So let’s do something to share with the world. Lavinia Iosub: Let’s do it. Ross Dawson: So you run a very interesting organization, and you are a leader who is bringing AI into your work and that of your team, and more generally, providing AI skills to many people. I just want to start from that point—your role as a leader of a diverse, interesting organization or set of organizations. What do you see as the role of AI in helping you be an effective leader? Lavinia Iosub: Great question. I think the two of us initially met through the AI in Strategic Decision Making course, right? So I would say that’s actually probably one of the top uses for me, or one of the areas where I have found it very useful. The most important thing here is not to start with the mindset that AI will make any worthy decisions for you, but that it will augment your cognition and your decision making when you feed it the right context, the right master prompts, the right information about your business, your values, what you’re trying to achieve, how you normally make decisions, and so on. Then you work with it, have a conversation with it, and even build an advisory board of different kinds of AI personas that may disagree or have slightly different views. So it enhances your thinking, rather than serving you decisions on a plate that you don’t know where they came from or what they’re based on.
That’s one of the things that’s been really interesting for me to explore. If we zoom out a little, I think a lot of people think of AI as a way of doing the things they don’t want to do. I think of AI as a way to do more of the things I’ve always wanted to do: delegate some of the menial, drudgery work that no human should be doing in the year of our Lord 2025 anymore, and do more of the creative, strategic projects and activities that many of us in what we call knowledge work—which, to me, is no longer a good term for 2025, but let’s use it for now—have always wanted to do, whether as an entrepreneur, a leader, a creative person, or, for lack of a better word, a knowledge worker. Ross Dawson: Lots to dig into there. One of the things is, of course, that as a leader you have decisions to make, and you have input from AI, but you also have input from your team, from people, potentially customers or stakeholders. For your leadership team, how do you bring AI into the thinking or decision making in a way that is useful, and what has that journey been like of introducing these approaches, where there are different responses from some of your team? Lavinia Iosub: We were, I’d say, fairly early AI adopters, and I have an approach where I really want to double down on working more with AI and giving more AI learning opportunities to those people who are interested, rather than forcing it on people who may not be. There are pros and cons to that approach—it can create inequality and so on—but I’m much more about giving willing people more opportunity, more chances, and more learning, rather than evangelizing AI. People need to decide their own stance towards AI and then engage with that and go after opportunities. As a team, as a company, we were early AI adopters, and as a leadership team, quite a few quarters ago, we actually went through the Anthropic AI Fluency course together and then produced practical projects that we shared with each other. We got certificates, which was the least important part, but we shared learnings, and it sparked a lot of interesting conversations and different uses for AI. Now, you also probably know that we’ve been running an AI L&D challenge for two years now, where, as a team, we explore AI tools and share mini demos with each other. For example: “I’d heard a lot about this tool, I tried it out, here’s what it looks like, here’s a screen share, and my verdict is I’m going to use this”—or maybe another person in the team finds it more useful. We found those exchanges to be really great for sparking ideas, not only about AI but about our work in general. Because in the end, AI is a tool—it’s not the end purpose of anything. It’s a tool to do better work, more exciting work, to double down on our human leverage, and so on. We’re now running this challenge for the second year straight, and we’ve actually allowed externals to join in. It’s really interesting because it adds to the community spirit, seeing people from other areas of business and with different jobs, and seeing what they do with it. I think, and you may agree, Ross, that people say we’re in an AI bubble, but we’re still very much in an LLM bubble—when people say AI, 90% of them actually mean LLMs and ChatGPT. So it’s interesting to see what others do. With the challenge, we’ve said that every week you have to try different tools.
You can’t just say, “Here’s the prompt I’m doing this week on ChatGPT.” No, it has to be different tools that do different things. It can be dabbling in agents, automating something, or using some other AI tool that helps with your tasks. It can’t just be showing us your ChatGPT conversations or how it drafts your emails—we want to take it a step further. It’s really helped us reflect on our own thinking and workflows and share with each other. It’s almost been like team building as well. For example, I was exploring a tool for optimizing—basically GEO, switching from SEO to GEO (generative engine optimization), seeing which prompts your company comes up in, and so on. It was pure curiosity, and now I’m having a whole conversation with our marketing manager about it that I probably wouldn’t have had if we weren’t doing the challenge. Again, I describe myself as AI fluent but very much people-centered. To me, the goal is never AI fluency or AI use in itself. The goal is: how do we work better with each other as humans, and do more of the work that excites us and provides value to our stakeholders? All these different things definitely help with that. Ross Dawson: Yeah, well, it obviously goes completely to the humans plus AI thesis. I think with the nature of leadership, there are some aspects that don’t change, like integrity, presence, being able to share a vision, and so on. But do you think there are any aspects of what it takes to be an effective leader today that change, evolve, or highlight different facets of leadership as we enter this new age? Lavinia Iosub: I would say so. If we think of the different eras of leadership and what it took to be effective—well, I don’t want to go into the whole leader versus manager debate—but when you look at the leaders who were succeeding in the 50s, there was a command and control model, certain ways of doing things, and it was largely male, especially in corporate leadership. That went through some transformations over the last few decades, and I think what’s happening right now with AI will trigger, or perhaps accelerate, another transformation. In this next era, the key to leadership will be blending systems thinking and AI automation—at least being aware of what you can do with it—with empathy, discernment, connection, and clarity. Sorry, just needed a sip of water. Secondly, for a very long time, when we
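Lavinia’s “master prompt” approach, in which standing business context, values, and decision style are supplied to the model before any question is asked, can be sketched in a few lines of Python. All details below (the call_llm stub and the context fields) are hypothetical illustrations of the idea, not her actual setup.

    # Persistent context prepended to every query, so the model augments
    # thinking instead of serving unsourced verdicts. All details here are
    # hypothetical illustrations.
    MASTER_PROMPT = (
        "Business: a distributed team hub and skills academy.\n"
        "Values: people-centered, experimentation, knowledge sharing.\n"
        "Decision style: surface trade-offs and dissenting views; never give "
        "a verdict without stating the assumptions it rests on.\n"
    )

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any LLM API call; replace with a real client."""
        return f"[response to: {prompt[:50]}...]"

    def advise(question: str) -> str:
        # Every decision question carries the standing context with it.
        return call_llm(MASTER_PROMPT + "Question: " + question)

    print(advise("Should the AI L&D challenge be open to external participants?"))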

    38 min
