14 episodes

About AI progress. You can watch the video recordings and check out the transcripts at theinsideview.ai

The Inside View Michaël Trazzi

    • Technology


    Connor Leahy–EleutherAI, Conjecture


    Connor was the first guest of this podcast. In that first episode, we talked a lot about EleutherAI, a grassroots collective of researchers he co-founded, which open-sourced GPT-3-sized models such as GPT-NeoX and GPT-J. Since then, Connor has co-founded Conjecture, a company aiming to make AGI safe through scalable AI Alignment research.

    One of Conjecture's goals is to reach a fundamental understanding of the internal mechanisms of current deep learning models using interpretability techniques. In this episode, we go through the famous AI Alignment compass memes, discuss Connor's inside views about AI progress, how he approaches AGI forecasting, his takes on Eliezer Yudkowsky's secret strategy, common misconceptions about EleutherAI, and why you should consider working for his new company, Conjecture.

    youtube: https://youtu.be/Oz4G9zrlAGs

    transcript: https://theinsideview.ai/connor2

    twitter: https://twitter.com/MichaelTrazzi


    (00:00) Highlights

    (01:08) AGI Meme Review 

    (13:36) Current AI Progress

    (25:43) Defining AGI

    (34:36) AGI Timelines

    (55:34) Death with Dignity

    (01:23:00) EleutherAI

    (01:46:09) Conjecture

    (02:43:58) Twitter Q&A

    • 2 hr 57 min
    Raphaël Millière Contra Scaling Maximalism


    Raphaël Millière is a Presidential Scholar in Society and Neuroscience at Columbia University. He completed a PhD in philosophy at Oxford, is interested in the philosophy of mind, cognitive science, and artificial intelligence, and has recently been discussing the current progress in AI at length in popular Twitter threads on GPT-3, DALL-E 2, and a thesis he calls "scaling maximalism". Raphaël is also co-organizing a workshop with Gary Marcus about compositionality in AI at the end of the month.

    Transcript: https://theinsideview.ai/raphael

    Video: https://youtu.be/2EHWzK10kvw

    Host: https://twitter.com/MichaelTrazzi

    Raphaël: https://twitter.com/raphaelmilliere

    Workshop: https://compositionalintelligence.github.io 


    (00:36) definitions of artificial general intelligence

    (7:25) behavioral correlates of intelligence, the Chinese room

    (19:11) natural language understanding, the octopus test, linguistics, semantics

    (33:05) generating philosophy with GPT-3, college essay grades, b******t

    (42:45) Stochastic Chameleon, out of distribution generalization

    (51:19) three levels of generalization, the Wozniak test

    (59:38) AI progress spectrum, scaling maximalism

    (01:15:06) Bitter Lesson

    (01:23:08) what would convince him that scale is all we need

    (01:27:04) unsupervised learning, lifelong learning

    (01:35:33) goalpost moving

    (01:43:30) what researchers "should" be doing, nuclear risk, climate change

    (01:57:24) compositionality, structured representations

    (02:05:57) conceptual blending, complex syntactic structure, variable binding

    (02:11:51) Raphaël's experience with DALL-E

    (02:19:02) the future of image generation

    • 2 hr 27 min
    Blake Richards–AGI Does Not Exist


    Blake Richards is an Assistant Professor at the Montreal Neurological Institute and the School of Computer Science at McGill University, and a Core Faculty Member at Mila. He thinks that AGI is not a coherent concept, which is why he ended up on a recent AGI political compass meme. When people asked on Twitter who the edgiest person at Mila was, his name actually got more likes than Ethan's, so hopefully this podcast will help re-establish the truth.

    Transcript: https://theinsideview.ai/blake

    Video: https://youtu.be/kWsHS7tXjSU


    (00:00) Highlights

    (01:03) AGI good / AGI not now compass

    (02:25) AGI is not a coherent concept

    (05:30) you cannot build truly general AI

    (14:30) no "intelligence" threshold for AI

    (25:24) benchmarking intelligence

    (28:34) recursive self-improvement

    (34:47) scale is something you need

    (37:20) the bitter lesson is only half-true

    (41:32) human-like sensors for general agents

    (44:06) the credit assignment problem

    (49:50) testing for backpropagation in the brain

    (54:42) burstprop (bursts of action potentials), reward prediction errors

    (01:01:35) long-term credit-assignment in reinforcement learning

    (01:10:48) what would change his mind on scaling and existential risk

    • 1 hr 15 min
    Ethan Caballero–Scale is All You Need


    Ethan is known on Twitter as the edgiest person at Mila. We discuss all the gossip around scaling large language models in what will later be known as the Edward Snowden moment of Deep Learning. In his free time, Ethan is a Master's student at Mila in Montreal, and has published papers on out-of-distribution generalization and robustness generalization, accepted as oral and spotlight presentations at ICML and NeurIPS. Ethan has recently been thinking about scaling laws, both as an organizer and speaker for the 1st Neural Scaling Laws Workshop.

    Transcript: https://theinsideview.github.io/ethan

    Youtube: https://youtu.be/UPlv-lFWITI

    Michaël: https://twitter.com/MichaelTrazzi

    Ethan: https://twitter.com/ethancaballero


    (00:00) highlights

    (00:50) who is Ethan, scaling laws T-shirts

    (02:30) scaling, upstream, downstream, alignment and AGI

    (05:58) AI timelines, AlphaCode, Math scaling, PaLM

    (07:56) Chinchilla scaling laws

    (11:22) limits of scaling, Copilot, generative coding, code data

    (15:50) YouTube scaling laws, contrastive-type thing

    (20:55) AGI race, funding, supercomputers

    (24:00) Scaling at Google

    (25:10) gossip, private research, GPT-4

    (27:40) why Ethan did not update on PaLM, hardware bottleneck

    (29:56) the fastest path, the best funding model for supercomputers

    (31:14) EA, OpenAI, Anthropic, publishing research, GPT-4

    (33:45) a zillion language model startups from ex-Googlers

    (38:07) Ethan's journey in scaling, early days

    (40:08) making progress on an academic budget, scaling laws research

    (41:22) all alignment is inverse scaling problems

    (45:16) predicting scaling laws, useful AI alignment research

    (47:16) nitpicks about Ajeya Cotra's report, compute trends

    (50:45) optimism, conclusion on alignment

    • 51 min
    10. Peter Wildeford on Forecasting


    Peter is the co-CEO of Rethink Priorities, a fast-growing non-profit doing research on how to improve the long-term future. In his free time, Peter makes money in prediction markets and is quickly becoming one of the top forecasters on Metaculus. We talk about the probability of London getting nuked, Rethink Priorities, and why EA should fund projects that scale.

    Check out the video and transcript here: https://theinsideview.github.io/peter

    • 51 min
    9. Emil Wallner on Building a €25000 Machine Learning Rig


    Emil is a resident at the Google Arts & Culture Lab, where he explores the intersection between art and machine learning. He recently built his own machine learning server, or rig, which cost him €25,000.

    Emil's Story: https://www.emilwallner.com/p/ml-rig

    Youtube: https://youtu.be/njbPpxhE6W0

    00:00 Intro

    00:23 Building your own rig

    06:11 The Nvidia GPU order hack

    15:51 Inside Emil's rig

    21:31 Motherboard

    23:55 Cooling and datacenters

    29:36 Deep Learning lessons from owning your hardware

    36:20 Shared resources vs. personal GPUs

    39:12 RAM, chassis and airflow

    42:42 AMD, Apple, ARM and Nvidia

    51:15 TensorFlow, TPUs, cloud mindset, EleutherAI

    • 56 min
