Generally Intelligent

Kanjun Qiu
Generally Intelligent

Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.

  1. SEP 18

    Episode 37: Rylan Schaeffer, Stanford: On investigating emergent abilities and challenging dominant research ideas

Rylan Schaeffer is a PhD student at Stanford studying the engineering, science, and mathematics of intelligence. He authored the paper “Are Emergent Abilities of Large Language Models a Mirage?”, as well as other interesting refutations in the field that we’ll talk about today. He previously interned at Meta on the Llama team, and at Google DeepMind. Generally Intelligent is a podcast by Imbue where we interview researchers about their behind-the-scenes ideas, opinions, and intuitions that are hard to share in papers and talks. About Imbue Imbue is an independent research company developing AI agents that mirror the fundamentals of human-like intelligence and that can learn to safely solve problems in the real world. We started Imbue because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one. We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research. Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research. Website: https://imbue.com LinkedIn: https://www.linkedin.com/company/imbue-ai/ Twitter/X: @imbue_ai

    1h 3m
  2. JUL 11

    Episode 36: Ari Morcos, DatologyAI: On leveraging data to democratize model training

Ari Morcos is the CEO of DatologyAI, which makes training deep learning models more performant and efficient by intervening on training data. He was at FAIR and DeepMind before that, where he worked on a variety of topics, including how training data leads to useful representations, the lottery ticket hypothesis, and self-supervised learning. His work has been honored with Outstanding Paper awards at both NeurIPS and ICLR. Generally Intelligent is a podcast by Imbue where we interview researchers about their behind-the-scenes ideas, opinions, and intuitions that are hard to share in papers and talks. About Imbue Imbue is an independent research company developing AI agents that mirror the fundamentals of human-like intelligence and that can learn to safely solve problems in the real world. We started Imbue because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one. We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research. Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research. Website: https://imbue.com/ LinkedIn: https://www.linkedin.com/company/imbue-ai/ Twitter: @imbue_ai

    1h 34m
  3. MAY 9

    Episode 35: Percy Liang, Stanford: On the paradigm shift and societal effects of foundation models

Percy Liang is an associate professor of computer science and statistics at Stanford. These days, he’s interested in understanding how foundation models work, how to make them more efficient, modular, and robust, and how they shift the way people interact with AI—although he has been working on language models since long before foundation models appeared. Percy is also a big proponent of reproducible research, and toward that end he’s shipped most of his recent papers as executable papers using the CodaLab Worksheets platform his lab developed, and published a wide variety of benchmarks. Generally Intelligent is a podcast by Imbue where we interview researchers about their behind-the-scenes ideas, opinions, and intuitions that are hard to share in papers and talks. About Imbue Imbue is an independent research company developing AI agents that mirror the fundamentals of human-like intelligence and that can learn to safely solve problems in the real world. We started Imbue because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one. We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research. Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research. Website: https://imbue.com/ LinkedIn: https://www.linkedin.com/company/imbue-ai/ Twitter: @imbue_ai

    1h 2m
  4. MAR 12

    Episode 34: Seth Lazar, Australian National University: On legitimate power, moral nuance, and the political philosophy of AI

Seth Lazar is a professor of philosophy at the Australian National University, where he leads the Machine Intelligence and Normative Theory (MINT) Lab. His unique perspective bridges moral and political philosophy with AI, introducing much-needed rigor to the question of what will make for a good and just AI future. Generally Intelligent is a podcast by Imbue where we interview researchers about their behind-the-scenes ideas, opinions, and intuitions that are hard to share in papers and talks. About Imbue Imbue is an independent research company developing AI agents that mirror the fundamentals of human-like intelligence and that can learn to safely solve problems in the real world. We started Imbue because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one. We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research. Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research. Website: https://imbue.com/ LinkedIn: https://www.linkedin.com/company/imbue-ai/ Twitter: @imbue_ai

    1h 56m
  5. 06/22/2023

    Episode 32: Jamie Simon, UC Berkeley: On theoretical principles for how neural networks learn and generalize

Jamie Simon is a fourth-year PhD student at UC Berkeley advised by Mike DeWeese, and also a Research Fellow with us at Generally Intelligent. He uses tools from theoretical physics to build fundamental understanding of deep neural networks so they can be designed from first principles. In this episode, we discuss reverse engineering kernels, the conservation of learnability during training, infinite-width neural networks, and much more. About Generally Intelligent We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one. We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research. Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research. Learn more about us Website: https://generallyintelligent.com/ LinkedIn: linkedin.com/company/generallyintelligent/ Twitter: @genintelligent

    1h 2m
  6. 03/29/2023

Episode 31: Bill Thompson, UC Berkeley: On how cultural evolution shapes knowledge acquisition

    Bill Thompson is a cognitive scientist and an assistant professor at UC Berkeley. He runs an experimental cognition laboratory where he and his students conduct research on human language and cognition using large-scale behavioral experiments, computational modeling, and machine learning. In this episode, we explore the impact of cultural evolution on human knowledge acquisition, how pure biological evolution can lead to slow adaptation and overfitting, and much more. About Generally Intelligent  We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one.   We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research.   Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research.   Learn more about us Website: https://generallyintelligent.com/ LinkedIn: linkedin.com/company/generallyintelligent/  Twitter: @genintelligent

    1h 15m
  7. 03/23/2023

    Episode 30: Ben Eysenbach, CMU, on designing simpler and more principled RL algorithms

Ben Eysenbach is a PhD student at CMU and a student researcher at Google Brain. He is co-advised by Sergey Levine and Ruslan Salakhutdinov, and his research focuses on developing RL algorithms that achieve state-of-the-art performance while being simpler, more scalable, and more robust. Recent problems he’s tackled include long-horizon reasoning, exploration, and representation learning. In this episode, we discuss designing simpler and more principled RL algorithms, and much more. About Generally Intelligent We started Generally Intelligent because we believe that software with human-level intelligence will have a transformative impact on the world. We’re dedicated to ensuring that that impact is a positive one. We have enough funding to freely pursue our research goals over the next decade, and our backers include Y Combinator, researchers from OpenAI, Astera Institute, and a number of private individuals who care about effective altruism and scientific research. Our research is focused on agents for digital environments (ex: browser, desktop, documents), using RL, large language models, and self supervised learning. We’re excited about opportunities to use simulated data, network architecture search, and good theoretical understanding of deep learning to make progress on these problems. We take a focused, engineering-driven approach to research. Learn more about us Website: https://generallyintelligent.com/ LinkedIn: linkedin.com/company/generallyintelligent/ Twitter: @genintelligent

    1h 46m

Ratings & Reviews

4.8
out of 5
16 Ratings

About

Technical discussions with deep learning researchers who study how to build intelligence. Made for researchers, by researchers.
