
#68 DR. WALID SABA 2.0 - Natural Language Understanding [UNPLUGGED] - Machine Learning Street Talk (MLST)


Patreon: https://www.patreon.com/mlst

Discord: https://discord.gg/HNnAwSduud

YT version: https://youtu.be/pMtk-iUaEuQ



Dr. Walid Saba is an old-school polymath. He has a background in cognitive psychology, linguistics, philosophy, computer science and logic, and he is now a Senior Scientist at Sorcero.



Walid is perhaps the most outspoken critic of BERTology, that is, of trying to solve natural language understanding by applying ever-larger statistical language models. Walid thinks this approach is doomed to fail because it is analogous to memorising infinity with a large hashtable, and he finds the various appeals to infinity made by some deep learning researchers risible.
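
A minimal sketch of the hashtable analogy (our own illustration, not code from the episode): a model that merely memorises question-answer pairs is perfect on inputs it has already seen and silent on the infinitely many it has not, whereas a small symbolic rule covers them all.

# Toy illustration (hypothetical; not from the episode): memorising vs. generalising.
def rule_based(a: int, b: int) -> bool:
    # A tiny symbolic rule covers every pair of integers, seen or unseen.
    return a > b

# A "memorising" model: a hashtable of already-seen question -> answer pairs.
seen = {(a, b): a > b for a in range(100) for b in range(100)}

def memorised(a: int, b: int):
    # Pure lookup; it has nothing to say about unseen inputs.
    return seen.get((a, b), "unknown")

print(rule_based(3, 2), memorised(3, 2))            # True True    (pair was memorised)
print(rule_based(12345, 67), memorised(12345, 67))  # True unknown (the rule generalises, the table does not)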



[00:00:00] MLST Housekeeping

[00:08:03] Dr. Walid Saba Intro

[00:11:56] AI Cannot Ignore Symbolic Logic, and Here’s Why

[00:23:39] Main show - Proposition: Statistical learning doesn't work

[01:04:44] Discovering a sorting algorithm bottom-up is hard

[01:17:36] The axioms of nature (universal cognitive templates)

[01:31:06] MLPs are locality sensitive hashing tables



References:

The Missing Text Phenomenon, Again: the case of Compound Nominals

https://ontologik.medium.com/the-missing-text-phenomenon-again-the-case-of-compound-nominals-abb6ece3e205



A Spline Theory of Deep Networks

https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf



The Defeat of the Winograd Schema Challenge

https://arxiv.org/pdf/2201.02387.pdf



Impact of Pretraining Term Frequencies on Few-Shot Reasoning

https://twitter.com/yasaman_razeghi/status/1495112604854882304?s=21

https://arxiv.org/abs/2202.07206



AI Cannot Ignore Symbolic Logic, and Here’s Why

https://medium.com/ontologik/ai-cannot-ignore-symbolic-logic-and-heres-why-1f896713525b



Learnability can be undecidable

http://gtts.ehu.es/German/Docencia/1819/AC/extras/s42256-018-0002-3.pdf



Scaling Language Models: Methods, Analysis & Insights from Training Gopher

https://arxiv.org/pdf/2112.11446.pdf



DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning

https://arxiv.org/abs/2006.08381



On the Measure of Intelligence [Chollet]

https://arxiv.org/abs/1911.01547



A Formal Theory of Commonsense Psychology: How People Think People Think

https://www.amazon.co.uk/Formal-Theory-Commonsense-Psychology-People/dp/1107151007



Continuum hypothesis

https://en.wikipedia.org/wiki/Continuum_hypothesis



Gödel numbering + incompleteness theorems

https://en.wikipedia.org/wiki/G%C3%B6del_numbering

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems



Concepts: Where Cognitive Science Went Wrong [Jerry A. Fodor]

https://oxford.universitypressscholarship.com/view/10.1093/0198236360.001.0001/acprof-9780198236368
