Babbage: Is GPT-4 the dawn of true artificial intelligence?


OpenAI's ChatGPT, an advanced chatbot, has taken the world by storm, amassing over 100 million monthly active users and exhibiting unprecedented capabilities. From crafting essays and fiction to designing websites and writing code, you’d be forgiven for thinking there’s little it can’t do. 
Now it’s had an upgrade. GPT-4 has even more impressive abilities: it can take photos as an input and deliver smoother, more natural writing to the user. But it also hallucinates, throws up false answers, and remains unable to reference any world events that happened after September 2021.
Seeking to get under the hood of the large language model behind GPT-4, host Alok Jha speaks with Maria Liakata, a professor in natural language processing at Queen Mary University of London. We put the technology through its paces with The Economist’s tech guru Ludwig Siegele, and even run it through something like a Turing test to give an idea of whether it could pass for human-level intelligence. 
Artificial general intelligence is the ultimate goal of AI research, so how significant will GPT-4 and similar technologies be in the grand scheme of machine intelligence? Not very, suggests Gary Marcus, an expert in both AI and human intelligence, though they will affect all of our lives, in good ways and bad. 
For full access to The Economist’s print, digital and audio editions, subscribe at economist.com/podcastoffer and sign up for our weekly science newsletter at economist.com/simplyscience.

Hosted on Acast. See acast.com/privacy for more information.

