Gary Marcus: a sceptical take on AI in 2025

Economist Podcasts
From the release of AI agents to claims that artificial general intelligence has (finally!) been achieved, 2025 will probably be another blockbuster year for AI. That sense of continuous progress is not shared by everyone, however. Generative AI, based on large language models (LLMs), struggles with reasoning, reliability and truthfulness. While progress has been made in those domains, sceptics argue that the limitations of LLMs will fundamentally restrict the future of AI.

In this episode, Alok Jha, The Economist’s science and technology editor, interviews Gary Marcus, one of modern AI’s most energetic critics. They discuss what to expect in 2025 and why Gary is pushing for researchers to work on a much wider range of scientific ideas (in other words, beyond deep learning) to enable AI to reach its full potential. Gary Marcus is a professor emeritus in cognitive science at New York University and the author of “Taming Silicon Valley”, a book advocating for a more responsible approach to the development of AI.

For more on this topic, check out our series on the science that built the AI revolution, as well as our episodes on AGI. Transcripts of our podcasts are available via economist.com/podcasts.

Listen to what matters most, from global politics and business to science and technology—subscribe to Economist Podcasts+. For more information about how to access Economist Podcasts+, please visit our FAQs page or watch our video explaining how to link your account.
