RemAIning Human

Cecilia Callas

A podcast dedicated to examining how humans might live in harmony with artificial intelligence. Learn how we can build AI responsibly and ethically to safeguard future generations. Remain calm, remain human.

Episodes

  1. What happens when AI takes the wheel?

    05/26/2025


    What happens to our ability to react when we know AI might step in to save us?

    In this episode of RemAIning Human, Stanford researcher Patrick Bissett delves into his new research demonstrating how AI-assisted driving impacts our “response inhibition” — our ability to stop, slow down, or otherwise react to an external stimulus while driving. Patrick's research reveals that when operating AI-assisted vehicles, we become significantly slower to respond — even when we're fully attentive and engaged.

    These results challenge the belief that any and all AI assistance is purely helpful, and call into question whether we should automatically enable AI-powered features wherever possible without first assessing their impacts on human cognition.

    Patrick’s research is compelling not just for its implications for how we think about AI-assisted driving, but for what it means for the other domains AI is rapidly infiltrating — particularly AI’s impacts on human agency and critical thinking. We're entering what Patrick calls a "partially automated" world in which neither humans nor AI hold complete responsibility. This transition period poses very real challenges that full automation might eventually resolve, but which we must navigate carefully in the meantime.

    In this conversation, Patrick and I explore:

    👉 The hidden cognitive costs of partial automation — and why knowing AI might help actually impairs our own response capabilities.

    👉 Why the transition period demands our attention, and how the current phase of AI development poses unique challenges.

    👉 The broader implications for human agency, from critical thinking to navigation skills — examining what capabilities we're losing as we increasingly rely on AI assistance.

    👉 The creativity advantage, and why human creativity and scientific inquiry remain our competitive edge.

    👉 The urgent need for unbiased research, and why academic institutions studying AI's impact on human cognition face unprecedented threats.

    Patrick’s research comes at an essential moment: the federal government is trying to prevent any state-led AI legislation for the next decade, just when we — the people affected by AI tools — desperately need a deeper understanding of how the technology will affect us. In addition, academic institutions conducting this essential research (like Stanford) are facing significant funding threats, potentially undermining our ability to understand and navigate these transitions safely.

    As we move forward, Patrick's research reminds us that preserving human agency requires intentional choices about when to engage AI assistance and when to maintain our own cognitive capabilities. The safety and wellbeing of everyone alive today depends on getting this transition right.

    Have a question for me or Patrick? Let us know in the comments, or email Patrick directly.

    Important links:
    Preprint of the study discussed
    Patrick's website

    39 min
  2. Women more at risk from AI job losses

    12/02/2024


    By 2027, AI could displace around 69 million jobs globally — and it’s likely that women will be disproportionately impacted. This is because women over-index in high-risk industries (think retail, healthcare, and clerical roles), and because approximately 70% of clerical and administrative jobs globally are held by women.

    Hannah Maude is sounding the alarm — and offering a lifeline.

    In this episode of RemAIning Human, I sit down with Hannah Maude — founder of Fire Up Skills, which empowers women to move from AI-impacted to AI-empowered — to discuss everything from her work with Fire Up Skills to her own career pivot to the realities of pending technological disruption. Hannah shares her framework for navigating an AI-driven career pivot, plus advice on how to get started — even when grappling with massive change feels overwhelming.

    Regardless of your gender, this conversation may be helpful for any professional feeling uncertain about technological disruption and career changes. I find a certain comfort in knowing we are all feeling the discomfort of this uncertainty.

    Listen in to learn:

    👉 How to map your existing professional skills to AI roles, with specific examples of transferable competencies across marketing, sales, HR, and governance.

    👉 The PLAN B career strategy: Hannah’s step-by-step framework for identifying your unique strengths, leveraging them in emerging tech fields, and building confidence to make a bold career transition.

    👉 The importance of developing your AI values, so you can proactively and intentionally design your use of AI to align with what you value.

    👉 Networking strategies, including advice for building a supportive network by attending tech events, asking meaningful questions, and connecting with mentors in AI-adjacent fields.

    More than anything, we stress the importance of supporting each other — and of holding AI companies to account to create AI that is representative of both male and female perspectives. Because only when a diversity of voices is elevated will we build technology that represents all humans.

    26 min
  3. Will humans flourish in the AI age?

    11/25/2024


    What does it mean to truly flourish? One definition holds that to flourish is to experience “a period of thriving” — which fits the future many AI company leaders promote as not only possible with AI but ever-so-desirable: a blissed-out utopia in which the automation of work and an army of AI assistants have created a landscape where humans can sit back, write poetry, and ponder our existence.

    But would this AI future actually serve us? When we truly break it down, how will AI enable our thriving — and what risks does it pose to our collective flourishing?

    These are the exact questions that Tamara Lechner — today’s guest — explores daily through her work as Chair of the AI for Human Flourishing think tank within The Human Flourishing Program at Harvard, which strives to quantify and measure human flourishing through five central domains: happiness and life satisfaction; physical and mental health; meaning and purpose; character and virtue; and close social relationships.

    The Human Flourishing framework’s science-based methodology is helpful in considering how we want to coexist with this new technology. As AI continues to infiltrate our ways of working, relating to one another, and parenting, its applications will almost certainly impact all five of the above domains — in different capacities and timeframes and, crucially, in different ways for different humans.

    Because as you’ll hear in this conversation, flourishing means different things to different humans. Only when we understand what’s uniquely important to us can we identify the ways in which AI can actually help us flourish — and where we should maintain firm boundaries in our use of the technology. Just because the technology exists doesn’t mean it’s healthy or beneficial to cement it into every task — especially the tasks from which we derive joy and meaning.

    In this conversation, Tamara and I dive into her work as AI Chair for the Human Flourishing Program as we discuss how to use AI to truly help you flourish — a process that begins with deep self-knowledge, which then guides our decisions about what to keep for ourselves versus what to delegate to AI tools. Only then can we retain our agency, remain empowered, and remain human.

    This episode also explores:

    👉 Why the human flourishing framework is integral to thinking about AI, as we consider how to reach our full potential as individuals and as a species.

    👉 Why we must shift from being passive "users" of AI to active "consumers" to take control of how we integrate AI into our lives.

    👉 Why AI should enhance, not replace, human relationships: it's best used as a tool to help us communicate better with each other, not as a substitute for human connection.

    👉 How the risk of AI systems profiting from human attention and intimacy is growing, making it crucial to establish boundaries and maintain our agency.

    37 min
  4. Your AI model choice is your voice: a Stanford researcher shares why conscious LLM choices matter

    11/13/2024


    As the AI landscape has evolved over the past few years, so have our choices. We can now choose from a variety of models for a variety of tasks. And yet, despite the multitude of models at our disposal, it’s easy to reach for whichever tool is already open in the browser without considering our objective — or whether that model best meets it.

    That’s where new research from Stanford researcher Vasyl Rakivenko comes in. Vasyl’s research uses a three-step process to guide AI users in choosing the model that best fits their objectives — helping us make more informed choices that lead to more intentionality in our use of AI. You can view a snapshot of this research here.

    Because not every little task should be plopped into ChatGPT. There are times when an open-source model (such as Llama-3) might be the better option. There are times when a less complex task can easily be handled by a smaller, more lightweight model — consuming less energy in the process. And there are times when it makes more financial sense to use one model over another — when a free model gets you the exact same result as a paid one, for instance.

    In this episode, Vasyl breaks down the key differences between competing LLMs, including their varying strengths, environmental impacts, costs, and ethical considerations. In doing so, he highlights how our individual choices lead to very real collective outcomes — and, ultimately, influence the development of more responsible AI technology.

    Listen in to learn:

    👉 Why your choice of which LLM to use (and when) matters. Different LLMs have varying strengths, environmental impacts, and safety considerations — all factors worth weighing before you type in your prompt and hit ‘enter.’

    👉 Why bigger AI models aren't always better — and how smaller models can often handle simple tasks while using less energy and computing power.

    👉 How current AI bias issues, if not addressed now, will likely carry forward into future AI agents and applications.

    👉 How you can wield your power as a consumer of AI products for good. The models used most will influence which AI companies succeed — and shape the industry's future. It’s up to us to be aware of the differences between LLMs, and to make conscious product choices that align with our values.

    Because as consumers of AI products, our choices make a meaningful difference. The products we choose to use are more likely to be successful, and the companies that produce them are more likely to be funded and to succeed. In this way, your choice is your voice. And as the civil society holding tech companies to account, we need to ensure our individual voices are heard.

    50 min
  5. What a Trump win means for AI policy and regulation

    11/08/2024

    A second Trump term will have significant and far-reaching consequences for everything from female reproductive rights to foreign policy to immigration to inflation. And, of course, for the tech industry — at the exact moment when AI is poised to fundamentally change our ways of working and being.

    Trump's administration is expected to decrease regulation, including repealing Biden's AI Executive Order. In addition, Elon Musk’s role in the 2024 US election and tech leaders’ support for a Trump presidency have signaled that we can expect a new kind of relationship between tech and government over the next four years.

    In this podcast, I lay out the current state of AI regulation in the US, the directions we can expect it to shift, and how the relationship between tech and government has evolved in recent years to contribute to a dangerous concentration of power.

    🎙️ Listen in to learn:

    👉 The current state of AI policy in the United States, including a breakdown of the Biden-Harris Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

    👉 The directional shift we can expect in US AI regulation during a Trump presidency, based on past remarks, campaign promises, and growing close-knit affiliations with tech leaders.

    👉 A discussion of the growing relationship between government and tech, including how that relationship has evolved over the past four years and why it’s essential that we, as civil society, are aware of the dangers and complications of Trump-tech alliances.

    31 min
  6. AI Ethics, explained

    10/17/2024

    Knowing your unique values is essential for leading a fulfilling life. Our internal values can be way-finders — sign-posts that help us build healthy relationships, undertake projects that align with our purpose, and be the change we want to witness in the world.

    But knowing our values is becoming increasingly important in relation to another aspect of our shared human experience: the building and use of powerful technologies such as artificial intelligence.

    In this episode of the RemAIning Human podcast, I’m joined by ethics consultant Jordan Loewen-Colón, a professor of AI Ethics and Policy at Queen’s University Smith School of Business who spends his time guiding individuals and tech companies toward using and building AI that is rooted in ethical frameworks and representative of a diversity of value sets.

    What I love about Jordan’s work, and about this conversation, is that we focus on understanding which questions to ask — not pretending that we know the answers. So much of grappling with AI ethics involves understanding what questions should be asked, and how to answer them by first knowing our values as individuals, and as a society.

    🎙️ Tune into the conversation to:

    👉 Understand the basics of “AI ethics” — including why understanding one’s values is the foundation for understanding what is or isn’t ethical in the development of AI.

    👉 Hear examples of the ethical questions currently facing an AI-powered society, including: Should we replace human workers with AI to increase profit? What are the consequences of removing humans from the screening and hiring process? What are the trade-offs between corporations’ sustainability efforts and the water and energy consumption necessary to power AI systems?

    👉 Understand how to define and articulate your values so that you use AI in accordance with them — rather than living by your employer’s values, or falling in line with whatever an AI company values.

    👉 Consider how your values might shape whom you hold accountable for ensuring AI is built, deployed, and used ethically.

    And so much more!

    In this discussion, Jordan and I speak about a few resources that can help you define your values and bring an ethical mindset to your work, including: The Core Values Finder, a quiz and framework that helps you find and quantify your personal values; and The Values Canvas, a practical tool to help you develop responsible AI strategies and document existing ethics efforts on your team or in your organization.

    This first discussion on AI ethics is just the tip of the iceberg — there are so many additional questions to be asked, and pulled apart, as we grapple with AI’s potential to transform our society and ask ourselves in which ways we want to be changed.

    How do you go about defining and refining your own values? How do your values shape your relationship to technology and your use of AI?

    1h 4m
  7. Using AI to live in your purpose

    09/26/2024

    I’ve got a treat for you today — a (spontaneously recorded) podcast conversation with renowned creative technologist, futurist, author, AI Masterclass instructor, and my dear friend Don Allen Stevenson III.

    Why “spontaneous”? Because Don and I didn’t know we were going to record a podcast when we met in a downtown Mountain View cafe, just a few miles from where we went to high school together in the Bay Area. We were catching up on a buzzy Monday evening when Don (being Don) began unpacking his bag to reveal an entire suite of gadgets — including a mini podcast studio — contained within the flaps of his small satchel. 🤯

    Our conversation had been moving into such expansive spaces that we couldn’t not clip Don’s nifty mics to our collars, roll up our shirt sleeves, and dig into areas we’re both so passionate about: AI’s potential to expand creativity, the guardrails needed to ensure we don’t become overly reliant on AI, how AI can help us live more fully in our unique purposes, and so much more.

    🎙️ Listen in to hear more about:

    👉 Using the IKIGAI philosophy to get clear on which tasks you should do without AI to stay true to your life’s purpose — while being intentional about which tasks you SHOULD outsource to AI, based on what does not bring you fulfillment.

    👉 The importance of envisioning a positive human-AI future using Don’s technique of "possibility perception goggles.”

    👉 Collaborating with AI as a "creative partner” and tool, shifting the focus toward your beautiful, innate, and irreplaceable creativity.

    👉 Innovative applications of AI pioneered by Don, including AI-assisted book writing, unbiased analysis of the presidential debate, and Don’s “customized AI critic” — a custom AI model Don trained to act as a personalized critic and help him build better content.

    👉 The necessity of guardrails such as “critical thinking mode” to ensure we preserve human agency, remain autonomous, and don’t become overly reliant on AI.

    I hope you enjoy listening to this convo as much as Don and I enjoyed recording it!

    Don Allen Stevenson is a futurist, educator, and author of the newly released Make a Seat. Don recently taught a Masterclass alongside Ethan Mollick called Achieve More with GenAI, which is available now. You can find him on Instagram at @DonAlleniii, where he recently launched Meta AI’s new avatars alongside Mark Zuckerberg at Meta Connect 2024.

    1h 16m

Ratings & Reviews

5 out of 5 (4 Ratings)
