Babbage: The science that built the AI revolution—part two



How do machines learn? Learning is fundamental to artificial intelligence. It’s how computers can recognise speech or identify objects in images. But how can networks of artificial neurons be deployed to find patterns in data, and what is the mathematics that makes it all possible?

This is the second episode in a four-part series on the evolution of modern generative AI. What were the scientific and technological developments that took the very first, clunky artificial neurons and ended up with the astonishingly powerful large language models that power apps such as ChatGPT?

Host: Alok Jha, The Economist’s science and technology editor. Contributors: Pulkit Agrawal and Gabe Margolis of MIT; Daniel Glaser, a neuroscientist at London’s Institute of Philosophy; Melanie Mitchell of the Santa Fe Institute; Anil Ananthaswamy, author of “Why Machines Learn”.

On Thursday April 4th, we’re hosting a live event where we’ll answer as many of your questions on AI as possible, following this Babbage series. If you’re a subscriber, you can submit your question and find out more at economist.com/aievent.

Get a world of insights for 50% off—subscribe to Economist Podcasts+

If you’re already a subscriber to The Economist, you’ll have full access to all our shows as part of your subscription. For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account.

