Robots for the Rest of Us Podcast

David Berreby

All about the robots and AI that are appearing in our day-to-day lives, doing things that people used to do. robots4therestofus.substack.com

  1. What Does a Humanoid Robot Do? I Asked a Humanoid Robot

    05/22/2025

    What Does a Humanoid Robot Do? I Asked a Humanoid Robot

    I am at ICRA – the IEEE International Conference on Robotics and Automation, aka 7,000 roboticists from all over the world converging to discuss every robot-related topic on Earth. More posts are coming, but for now, day-of, here is a quick podcast interview (the first with video) with, and about, one of the robots I saw in action today – Ameca, by Engineered Arts. After many an online video, this was my first time interacting with an Ameca robot in the (gray rubber) flesh. It was a different experience than watching a video, in much the same way an in-person conversation is different from watching a video of other people talking. I don’t mean to suggest that the robot is the equivalent of a person or spurs the same thoughts and emotions. But it spurs some emotions and reactions, outside of conscious control, either innate or conditioned (which is not a subject for this post), that usually come up in a human conversation. Yet I walked away cautiously optimistic about my ability to cope with people-like objects, on screens and in 3D life, as they proliferate. There is little danger of anyone confusing an Ameca with a real person – not just because of its inhuman skin and eye color, but because (in this incarnation) it did not act like a person, with its eerily calm voice, unnaturally even intonation, and somewhat pat responses. (Others’ experience will vary, by design: Ameca has a number of available personas, and Engineered Arts sees it as a platform for developers to put their own AI into. I interacted today with one persona, at the Engineered Arts booth in the exhibitors’ hall.) Still, the clarity of the boundary between human and robot (which Ameca brings up) gives me hope that people will not be easily fooled or manipulated into treating robots as people. In fact, I find myself wondering if human-ish robots may actually be safer on that score than AIs, simply because AIs that “sound human” come to us over the same media as real people do.
A text from a sophisticated AI doesn’t look or feel different from a text from me (though I am still better at the content part of the message). But a robot body can’t offer that sense of “this is the same as with a person.” The illusion – that machine and human are the same type of being – is hard to maintain. So my guess is that conventions will soon develop for dealing with robots that act human. These conventions will be based on human-to-human norms, but they won’t be the same. Maybe, as some have suggested, those norms will include built-in distancing gestures by the robot, like Bertolt Brecht’s “distancing effect” in the theater, so the audience isn’t lulled into complacency and daydreams. For example, at Kodaiji temple in Kyoto, the Mindar “talking statue” robot was deliberately designed to show off a mechanical body under its human-like face. It’s a reminder that the robot is not human. Ameca looks rather similar to Mindar, actually. Engineered Arts plans to give Ameca functional arms and hands (good enough to pick up a chess piece) and, later, legs it can walk on. But the company has built in other distancing effects – including that non-human-looking skin. I think people will continue to feel sure that robots are not people – that the machines are, instead, representations of people. Like puppets, video game characters and characters in novels. (This is an idea that a number of thinkers have arrived at lately, from different starting points and in different disciplines. I’ll have more to say about this soon.) So go my day-of thoughts about the experience of Ameca. As with most encounters with a complex robot, it was much less cartoonish, and more complex, than I had imagined. See what you think. Get full access to Robots for the Rest of Us at robots4therestofus.substack.com/subscribe

    5 min
  2. The Work AI Can't Help With

    03/31/2025

    The Work AI Can't Help With

    Why hasn’t workplace artificial intelligence made life easier for everyone? Some surveys see a vast majority of employees saying AI adds to their workload, for a variety of reasons (having to check what the AI does, having bosses who expect increased output). And the extra stress of AI adoption is often worse for women, says leadership consultant Julie Donley. Why? Because AI is often oriented toward things men tend to focus on (like maximizing efficiency at all costs) and not the jobs that fall much more on women (for example, the emotional labor of office politics, and the running of both work and home lives). Because women are rightfully more worried about being judged by office culture — for using AI (“did I cheat?”) but also for not using it (“do I look like I’m not keeping up?”). Because the sped-up expectations that AI can create put even more pressure on women at work, even as they still do more than their share at home. I spoke with Donley about these and other ways AI adoption can make for more burnout and exhaustion among women workers. But this isn’t a gloomy conversation. She believes AI can be made to work for us all, if it’s adopted in a way that fosters human flourishing. We talked about her new book, Leading at the Speed of People, in which she discusses how leaders can shape AI to make work more humane, not less. Give us a listen! As always, please note the transcript is AI-generated and could contain mistakes.

    55 min
  3. How to Manage AI Therapists, Robot Friends, AI Ghosts

    02/16/2025

    How to Manage AI Therapists, Robot Friends, AI Ghosts

    Most people agree there’s something wrong with an AI pretending to be a human being. And especially wrong about an AI telling you it’s a human therapist. In fact, a California legislator introduced a bill the other day to ban AIs from doing that. It stemmed in part from an incident where a mental health platform had users thinking they were getting counsel from humans when in fact the messages were written by GPT-3. Here’s the thing, though: The patsies (I mean, users) actually rated the responses quite highly – until they learned they were machine-made. This “empathy paradox,” as Anat Perry calls it, revealed an aspect of life where human beings don’t just want a product (like good advice); they also want human effort. Perry, a neuroscientist who is professor of psychology at the Hebrew University in Jerusalem, has a lot of insight into why this is so, and what it could mean for AI as a tool in therapy. We talked about that in this podcast. It’s a rich topic, because many instances of machine pretense aren’t as clear-cut as the fake-therapy incident. Fact is, any sophisticated AI or robot gives people double vision: You know it’s just a machine, but other parts of your mind feel that it has emotions and thoughts. So how do you manage that experience? How much can you control it, and how much help do you need from society, in the form of new laws and norms? Now that we have AI therapist apps, AI romances, AI friends and even AI ghosts, people should be talking about these matters. So check out our conversation! Or have a look at the AI-generated transcript. Bonus: Some relevant links to things that came up:
“Considering AI-driven therapy: When does human empathy matter?” – a recent paper by Perry and her colleagues.
“Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives” – a thoughtful paper about the artificially resurrected.
“Eternal You” – a great new documentary about ordinary people reckoning with AI recreations of deceased loved ones.
“An overview of the application of artificial intelligence in psychotherapy: A systematic review” (links to a pdf).

    52 min
  4. Before AI Permeates Our Lives, We Need a New Deal for Privacy

    01/14/2025

    Before AI Permeates Our Lives, We Need a New Deal for Privacy

    Data scientist Wes Chaar was at home a few years ago, working with the tools of his trade, when he had an “aha!” moment. He realized the information he was looking at would let him predict not only what people would buy and how they’d vote, but also how they’d feel about their choices. Those people might not have shared those feelings with anyone, but Chaar could foresee they’d be there. How are people supposed to defend their privacy in the face of that power? All those user agreements we click without reading, and those assurances that our data is “anonymized” before it’s crunched, are no protection at all. It’s a more urgent and pervasive problem now, as AI and robots take on more tasks in people’s lives (whether they want them or not). Knowing that Google or Amazon can figure out so much about you from your searches on their sites, imagine how much more they’ll know as they’re connected to AIs that write your emails, give you vacation ideas and track your sleep and mood; or to robots that clean the house and entertain the kids. Chaar believes we’re at a crucial juncture. Now, before AI and robots are woven into everything, is our collective chance to reshape the terms of the bargain with all those data-hungry services that companies are so eager for us to use. In his new book, Data Independence, Chaar proposes a new deal for information: A system for keeping control of our information even as we share it, so that our AI and robot assistants can do their work. We talked about this proposal, as well as the overarching issues — What is privacy? How can we get around a paralyzed political system to enact changes? What’s the strategy for getting around the opposition of companies that like today’s wide-open data-plucking regime? — and a lot of other vital topics. Give a listen! As always, the transcript is AI-generated and may contain some errors.

    51 min
  5. Sometimes You Just Need a Robot to Scratch That Spot You Can't Reach

    11/13/2024

    Sometimes You Just Need a Robot to Scratch That Spot You Can't Reach

    We’re an aging society, with a shortage of young-ish people to do the work, paid and unpaid, of helping others get through life. (Immigration crackdowns will make this shortage more acute.) Inevitably, robots are coming to help us in the home. Maybe they’ll be 5-foot, 8-inch tall imitation people, as Elon Musk predicts. But arms, legs and torsos are expensive to build, complex to operate and kind of frightening/creepy. So in the years just ahead home robots will likely be simpler devices. They’ll be designed to do useful jobs in their own robot way, not imitate a person doing the job in a human way. But how do you make a robot that’s actually helpful, that people will want to use, that doesn’t cost a fortune? What’s the right balance between versatility and simplicity? Who gets to decide what “helpful” means? How do engineers – always eager to do cool never-before-seen things – find common ground with us civilians, who want something that will work without confusing us, frustrating us or scaring us? Charlie Kemp has been thinking about those questions for a long time. Formerly a professor at Georgia Tech, he’s now CTO at Hello Robot, which makes Stretch, a 50-pound mobile manipulator that has been used for many more purposes than its creators imagined when they started. These include farming tomatoes, running physical therapy exercises and finding missing stuff. The background visual for this episode is another use: It’s a photo of Henry Evans, who is quadriplegic, using a Stretch to play with his granddaughter. Users have also taught engineers that they sometimes want a robot to do much simpler things than the robot makers thought of. For example, being able to scratch an itch via robot can give a paralyzed person more autonomy, and make for one less request for help to a human caregiver. We talked about this and other lessons from real-world uses – and about the near and farther future of assistive robots. Give a listen! 
As always, the transcript here is AI-generated and may contain errors.

    59 min
  6. Making the Robots People Actually Need

    10/17/2024

    Making the Robots People Actually Need

    Robot-makers often test the limits of the possible and make ingenious new technology. Can we make this? Can we make it do that twice as fast? Robot users, on the other hand, have different concerns, like “is it simple enough for my kid to understand?” or “if it gets knocked over, will it break?” or “how do I tell it not to come so close?” or “what happens if we move the table to the other side of the room?” The difference in those worldviews has sometimes led to robots that looked great in the lab but that, in the outside world, make more work for humans rather than less, or that do tasks people don’t need done. Or instances of what I call MORD — the moment of robotic disappointment, when a hyped-up user finds a robot to be much less than she expected. As robots appear more often in the lives of ordinary people, it becomes ever more important to connect the worlds of makers and users. That was one of the main subjects of my conversation with Odest Chadwicke Jenkins of the University of Michigan. He wants to make robots that people can understand and really want to use — robots that can clean up a messy room; robots you can show how to vacuum; robots that are designed to do what elderly people want, not what engineers imagine elderly people want. That’s an engineering challenge, but also a psychological and cultural one. Jenkins and I talked about the reasons humanoid robots are having a moment, why he expects their growth to continue, and how engineers need to listen to non-roboticists. We also talked about his own experiences with MORD. Give us a listen! Caveat: As always, the transcript is AI-generated, and thus contains some errors.

    1h 1m
  7. How 'Acting Human' Is Like Human Acting

    09/28/2024

    How 'Acting Human' Is Like Human Acting

    People often see double when they see a robot. They know it’s just a collection of metal and plastic parts. And they think it might be sad after working so hard all day. Or mad. Or glad. Now that AI can talk, sing, laugh and cry, people are having the same kind of double sight when they talk to their virtual assistants and virtual friends. How can people perceive two beings in one body? One source of insight is to look at other times in life when people “know” two things at once about the same entity: When we’re watching a movie, TV show, or stage play. When you watch Beetlejuice, you know you’re also watching Michael Keaton, who plays him. Where’s the boundary between actor and role? Is it the same for everyone in the audience? How much control do we have over that perception? These kinds of questions apply to robots and AI more and more, as machines become better at playing the part of people – or at least, of characters based on people. The answers are already influencing how designers think about future devices, and how people experience “intelligent” machines. Which is why I was eager to speak with Christopher Grobe, a Professor of English at Johns Hopkins University, who has studied many different kinds of performance – including the performance of machines that act like people. (One of his current projects is a look at the way 20th century theories of acting intertwined with theories of computers and robots “playing” human beings.) In this podcast, we discussed those connections, as well as ways performance theory can inform robot design. (For example, we considered that maybe it doesn’t matter if a robot can feel emotions – if it can “play” those emotions enough to move you, its “audience.”) It was fascinating to see a perspective on AI and robotics that sees these technologies as new forms of an age-old practice: Making one thing act like another. 
Robots are a new technology that draws on the same human desires and skills that made puppets, toys, stuffies, and theater. Caveat: As always, the transcript is AI-generated, and contains a few errors.

    54 min
  8. When You Can't Talk to Me, Talk to "Me"

    09/08/2024

    When You Can't Talk to Me, Talk to "Me"

    Soon, AI companies expect to be selling “agents” — AIs that reserve tickets, cancel subscriptions and take other actions for their users. The companies are also talking up the power of generative AI to hold a natural conversation in real time. And, of course, AIs are also getting better at creating videos of real people. Combine all three, and you get the new realm of the “digital human avatar” — an AI creation that can represent you, speak for you, take action on your behalf. It looks and sounds and acts as close to you as an AI can make it. While a lot of robot and AI visions remain science fiction for now, the digital avatar is already a real product, with a multibillion-dollar global market that is expected to grow a lot in coming years. It’s a wild vision: Imagine your avatar looking for a dental appointment, talking to the dentist’s avatar about whether Tuesday has an opening — while you and the dentist do (one hopes) more fulfilling things. Or think about a version of you that can talk in a meeting (same obsession with the deadlines, same way of nodding your head after you’ve finished talking, but real-you is elsewhere). Maybe you could have a version of you that you bounce ideas off — taking “talking to yourself” into a new dimension of meta. It’s not hard to imagine things going wrong with this scenario, but like it or hate it, these products are already here, and rapidly evolving. So I was happy to have a chance to talk with Hassaan Raza, co-founder and CEO of Tavus, a startup that has been making digital twins — of real people and of fictional characters — since 2020. We talked about the steps digital twins must take to win users’ confidence, the challenges of making a twin seem real, and whether (and why) he expects avatars eventually to leave the screen and become robots moving in the real world. 
And of course we also looked at what could go wrong — deception, non-consensual recreations of people, privacy violations, the limits of AI’s language abilities, and other issues. Caveat: As always, remember that the transcript is AI-generated and might contain errors.

    50 min

Ratings & Reviews

5 out of 5 (2 Ratings)
