32 min

Why Responsible AI is Needed in Explainable AI Systems with Christoph Lütge of TUM HumAIn Podcast - Artificial Intelligence, Data Science, Developer Tools, and Technical Education

    • Technology

Bias in AI is a growing concern as algorithms produce unfair outcomes in areas such as hiring, loan applications, and autonomous vehicles. The public increasingly expects AI to be accountable and is calling for standards and governance systems to restore balance.

The black-box nature of many AI systems exposes a core flaw: their decisions cannot be scrutinized. People want accountable technology, and when AI operates as a black box, responsibility means controlling how algorithms work to achieve better outcomes.

AI can also cause harm by making opaque decisions that negatively affect people's lives, which is why responsible AI systems are needed. By integrating explainable AI into their models, businesses can make more accurate decisions, identify patterns, and optimize operations.

Listen in as I discuss why Responsible AI is needed in Explainable AI Systems.

In this episode: Prof. Christoph Lütge, Director of the TUM Institute for Ethics and AI (Germany)

This episode is brought to you by For the People. You can grab your copy of For the People on Amazon today, or visit SIMONCHADWICK.US to learn more about Simon.

Learn more about your ad-choices at www.humainpodcast.com/advertise

You can support the HumAIn podcast and receive subscriber-only content at http://humainpodcast.com/newsletter

Advertising Inquiries: https://redcircle.com/brands

Privacy & Opt-Out: https://redcircle.com/privacy

