1h 40 min

Ayanna Howard: Human-Robot Interaction and Ethics of Safety-Critical Systems | Lex Fridman Podcast

    • Technology

Ayanna Howard is a roboticist, professor at Georgia Tech, and director of the Human-Automation Systems Lab, with research interests in human-robot interaction, assistive robots in the home, therapy gaming apps, and remote robotic exploration of extreme environments.



This conversation is part of the Artificial Intelligence podcast. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.



This episode is presented by Cash App. Download it (App Store, Google Play) and use code "LexPodcast".



Here's the outline of the episode. On some podcast players, you should be able to click a timestamp to jump to that point in the episode.



00:00 - Introduction

02:09 - Favorite robot

05:05 - Autonomous vehicles

08:43 - Tesla Autopilot

20:03 - Ethical responsibility of safety-critical algorithms

28:11 - Bias in robotics

38:20 - AI in politics and law

40:35 - Solutions to bias in algorithms

47:44 - HAL 9000

49:57 - Memories from working at NASA

51:53 - SpotMini and Bionic Woman

54:27 - Future of robots in space

57:11 - Human-robot interaction

1:02:38 - Trust

1:09:26 - AI in education

1:15:06 - Andrew Yang, automation, and job loss

1:17:17 - Love, AI, and the movie Her

1:25:01 - Why do so many robotics companies fail?

1:32:22 - Fear of robots

1:34:17 - Existential threats of AI

1:35:57 - Matrix

1:37:37 - Hang out for a day with a robot

