2 Episodes

From educational institutions to healthcare professionals, from employers to governing bodies, artificial intelligence technologies and algorithms are increasingly used to assess and make decisions about various aspects of our lives. But are these systems truly impartial and just in their judgments when they read humans and their behaviour? Our answer is that they are not. Despite their purported aim of enhancing objectivity and efficiency, these technologies paradoxically harbor systemic biases and inaccuracies, particularly in the realm of human profiling. The Human Error Project has investigated how journalists, civil society organizations, and tech entrepreneurs in Europe make sense of AI errors and how they negotiate and coexist with the human rights implications of AI. With the aim of fostering debate between academia and the public, the “Machines That Fail Us” podcast series hosts the voices of some of the most engaged individuals in the fight for a better future with artificial intelligence.

“Machines That Fail Us” is made possible by a grant from the Swiss National Science Foundation (SNSF) under its “Agora” scheme. The podcast is produced by The Human Error Project Team. Dr. Philip Di Salvo, the main host, is a researcher and lecturer at HSG’s Institute for Media and Communications Management.

https://mcm.unisg.ch/
https://www.unisg.ch/

Machines That Fail Us – University of St. Gallen, Philip Di Salvo

    • Education

    Machines That Fail Us #2: Following the AI beat – algorithms making the news

    What’s the role of journalism in making sense of AI and its errors? With Melissa Heikkilä, senior reporter at the MIT Technology Review. Host: Dr. Philip Di Salvo.

    • 26 min.
    Machines That Fail Us #1: Making sense of the human error of AI

    What errors can artificial intelligence systems make, and what is their impact on humans? The Human Error Project team discusses the results of their research into AI errors and algorithmic profiling.

    • 39 min.
