Babbage: The science that built the AI revolution—part four

What made AI models generative? In 2022, it seemed as though the much-anticipated AI revolution had finally arrived. Large language models swept the globe, and deepfakes were becoming ever more pervasive. Underneath it all were old algorithms that had been taught some new tricks. Suddenly, artificial intelligence seemed to have the skill of creativity. Generative AI had arrived and promised to transform…everything.

This is the final episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that took the very first, clunky artificial neurons and turned them into the astonishingly powerful large language models that power apps such as ChatGPT?

Host: Alok Jha, The Economist’s science and technology editor. Contributors: Lindsay Bartholomew of the MIT Museum; Yoshua Bengio of the University of Montréal; Fei-Fei Li of Stanford University; Robert Ajemian and Greta Tuckute of MIT; Kyle Mahowald of the University of Texas at Austin; Daniel Glaser of London’s Institute of Philosophy; Abby Bertics, The Economist’s science correspondent.

On Thursday April 4th, we’re hosting a live event where we’ll answer as many of your questions on AI as possible, following this Babbage series. If you’re a subscriber, you can submit your question and find out more at economist.com/aievent.

Listen to what matters most, from global politics and business to science and technology—subscribe to Economist Podcasts+

For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account.


