
Anthea Roberts on dragonfly thinking, integrating multiple perspectives, human-AI metacognition, and cognitive renaissance (AC Ep73)
“Not everyone can see with dragonfly eyes, but can we create tools that help enable people to see with dragonfly eyes?”
– Anthea Roberts
About Anthea Roberts
Anthea Roberts is Professor at the School of Regulation and Global Governance at the Australian National University (ANU) and a Visiting Professor at Harvard Law School. She is also the Founder, Director and CEO of Dragonfly Thinking. Her latest book, Six Faces of Globalization, was selected as one of the Best Books of 2021 by The Financial Times and Fortune Magazine. She has won numerous prestigious awards and has been named “The World’s Leading International Law Scholar” by the League of Scholars.
Website:
Dragonfly Thinking
Anthea Roberts
LinkedIn Profile:
Anthea Roberts
University Profile:
Anthea Roberts
What you will learn
- Exploring the concept of dragonfly thinking
- Creating tools to see complex problems through many lenses
- Shifting roles from generator to director and editor with AI
- Understanding metacognition in human-AI collaboration
- Addressing cultural biases in large language models
- Applying structured analytic techniques to real-world decisions
- Navigating the cognitive industrial revolution with AI
Episode Resources
People
- Sam Bide
- Philip Tetlock
- Harrison Chase
Companies/Organizations
- Dragonfly Thinking
- Australian National University
Books
- Is International Law International? by Anthea Roberts
- Six Faces of Globalization by Anthea Roberts
Technical Terms
- Structured analytic techniques
- Risk, reward, and resilience framework
- Large language models (LLMs)
- Agentic workflows
- Cognitive architecture
- Metacognition
- Reinforcement learning
- Superforecasting
- Wisdom of the silicon crowd
Transcript
Ross Dawson: Anthea, it is a delight to have you on the show.
Anthea Roberts: Thank you very much for having me.
Ross: So you have a very interesting company called Dragonfly Thinking, and I’d like to delve into that and dive deep. But first of all, I’d like to hear the backstory of how you came to see the idea and create the company.
Anthea: Well, it’s probably an unusual route to creating a startup. I came to this with no technology background, and two years ago, if you had told me I would start a tech startup, I would never have thought it likely, and no one around me would have, either.
My other hat that I wear when I’m not doing the company is as a professor of global governance at the Australian National University and a repeat visiting professor at Harvard. I’ve traditionally worked on international law, global governance, and, more recently, economics, security, and pushback against globalization.
I moved into a very interdisciplinary role, where I ended up doing a lot of work with different policymakers. Part of what I realized I was doing as I moved around these fields was creating something that the intelligence agencies call structured analytic techniques—techniques for understanding complex, ambiguous, evolving situations.
For instance, in my last book, I used one technique to understand the pushback against economic globalization through six narratives—looking at a complex problem from multiple sides. Another was a risk, reward, and resilience framework to integrate perspectives and make decisions. All of this, though, I had done completely analog.
Then the large language models came out. I was working with Sam Bide, a younger colleague who was more technically competent than I was. One day, he decided to teach one of my frameworks to ChatGPT. On a Saturday morning, he excitedly sent me a message saying, “That framework is really transferable!”
I replied, “I made it to be really transferable.”
He said, “No, no, it’s really transferable.”
We started going back and forth on this. At the time, Sam was moving into policy, and he created a persona called “Robo Anthea.” He and other policymakers would ask Robo Anthea questions. It had my published academic scholarship, but also my unpublished work.
At a very early stage, I had this confronting experience of having a digital twin. Some people asked, “Weren’t you horrified or worried about copyright infringement?” But I didn’t have that reaction. I thought it was amazingly interesting.
What could happen if you took structured techniques and worked with this extraordinary form of cognition? It allowed us to apply these techniques to areas I knew nothing about. It also let me hand this skill off to other people.
I leaned into it completely—on one condition: we changed the name from Robo Anthea to Dragonfly Thinking. It was both less creepy for me and a better metaphor. This way of seeing complex problems from many different sides is a dragonfly’s ability.
I think I’m a dragonfly, but I believe there are many dragonflies out there. I wanted to create a platform for this kind of thinking—where dragonflies could “swarm” around and develop ideas together.
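(A minimal sketch of what “teaching a framework to ChatGPT” can look like in practice: a structured analytic framework encoded as a system prompt for an LLM, here via the OpenAI Python SDK. The framework wording, model name, and example question are illustrative assumptions, not Dragonfly Thinking’s actual implementation.)

```python
# Hypothetical sketch: a structured analytic framework encoded as an LLM persona.
# Framework wording, model choice, and prompts are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FRAMEWORK_PROMPT = """You are an analyst who applies a risk, reward, and resilience framework.
For any problem the user raises:
1. List the main drivers of risk, reward, and resilience.
2. Describe how those drivers interact with one another.
3. Offer an integrated judgement, flagging the key uncertainties."""

def analyse(question: str) -> str:
    """Run a single question through the framework-as-persona."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do here
        messages=[
            {"role": "system", "content": FRAMEWORK_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(analyse("Should a mid-sized country diversify its semiconductor supply chains?"))
```

Because the framework lives in the system prompt rather than in the user’s head, it can be handed to anyone and applied to domains the framework’s author knows nothing about, which is the transferability being described above.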
Ross: Just explain the dragonfly concept.
Anthea: We took the concept from some work done by Philip Tetlock. When the CIA wanted to determine who was best at understanding complex problems, they found that traditional experts performed poorly.
These experts tended to have one lens of analysis, which they overemphasized. This caused them to overlook some things and get blindsided by others.
In contrast, Tetlock found a group of individuals who were much better forecasters. They were incredibly diverse and 70% better than traditional experts—35% better than the CIA itself, even without access to classified material.
The one thing they had in common was that they saw the world through dragonfly eyes. Dragonfly eyes have thousands of lenses instead of one, allowing them to create an almost 360-degree view of reality. This predictive ability makes dragonflies some of the best predators in the world.
These qualities—seeing through multiple lenses, integrating perspectives, and stress-testing—are exactly what we need for complex problems.
- We need to see problems from many lenses: different perspectives, disciplines, and cognitive approaches.
- We must integrate this into a cohesive understanding to make decisions.
- We need to stress-test it by thinking about complex systems, dynamics, and future scenarios, so we can act with foresight despite uncertainty.
The AI part of this is critical because not everyone can see with dragonfly eyes. The question becomes: can we create tools to enable people to do so?
Ross: There are so many things I’d like to dive into, but just to get the big picture: this is obviously human-AI collaboration. These are complex problems where humans have the fullest context and decision-making ability, complemented by AI.
What does that interface look like? How do humans develop the skills to use AI effectively?
Anthea: I think this is one of the most interesting and evolving questions. In the kind of complex cognition we deal with, we aim to co-create with the LLMs as partners.
What I’ve noticed is that you shift roles. Instead of being the primary generator, you become the director or manager, deciding how you want the LLM to operate. You also take on a role as an editor or co-editor, moving back and forth.
This means humans stay in the loop but in a different way.
Another important aspect is recognizing where humans and AI excel. Not everyone is good at identifying when they’re better at a task versus when the AI is.
For instance, AI can hold a level of cognitive complexity that humans often cannot. In our risk, reward, and resilience framework, humans may overfocus on risk or reward. Some can hold the drivers of risk, reward, and resilience but can’t manage the interconnections.
AI can offload some of this cognitive load.
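(As a rough illustration of the bookkeeping being offloaded here: the sketch below, with invented driver names, holds risk, reward, and resilience drivers and the interconnections between them as a small graph, the kind of web humans tend to drop when they over-focus on one lens. It is one possible representation assumed for illustration, not the company’s architecture.)

```python
# Illustrative only: one way a tool might hold the drivers of risk, reward,
# and resilience -- and the interconnections between them -- so a human
# decision-maker does not have to keep the whole web in their head.
from dataclasses import dataclass, field

@dataclass
class Driver:
    name: str
    category: str                                          # "risk", "reward", or "resilience"
    influences: list[str] = field(default_factory=list)    # names of drivers it affects

def cross_category_links(drivers: dict[str, Driver]) -> list[tuple[str, str]]:
    """Return influence links that cross categories -- the interconnections
    people most often miss when they over-focus on a single lens."""
    links = []
    for d in drivers.values():
        for target in d.influences:
            if target in drivers and drivers[target].category != d.category:
                links.append((d.name, target))
    return links

# Invented example drivers for a supply-chain decision.
drivers = {
    "single-supplier exposure": Driver("single-supplier exposure", "risk",
                                       influences=["buffer inventory"]),
    "cost savings": Driver("cost savings", "reward",
                           influences=["single-supplier exposure"]),
    "buffer inventory": Driver("buffer inventory", "resilience",
                               influences=["cost savings"]),
}

print(cross_category_links(drivers))
# prints the three cross-category links, e.g. ('cost savings', 'single-supplier exposure')
```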
Information
- Frequency: Updated weekly
- Published: 11 December 2024 at 15:53 UTC
- Length: 34 min
- Rating: Clean