130 episodes

Note: The TDS podcast's current run has ended.

Researchers and business leaders at the forefront of the field unpack the most pressing questions around data science and AI.

Towards Data Science The TDS team

    • Technology
    • 5.0 • 4 Ratings

    130. Edouard Harris - New Research: Advanced AI may tend to seek power *by default*

    Progress in AI has been accelerating dramatically in recent years, and even months. It seems like every other day, there’s a new, previously-believed-to-be-impossible feat of AI that’s achieved by a world-leading lab. And increasingly, these breakthroughs have been driven by the same, simple idea: AI scaling.

    For those who haven’t been following the AI scaling saga, scaling means training AI systems with larger models, using increasingly absurd quantities of data and processing power. So far, empirical studies by the world’s top AI labs suggest that scaling is an open-ended process that can lead to ever more capable and intelligent systems, with no clear limit.

    And that’s led many people to speculate that scaling might usher in a new era of broadly human-level or even superhuman AI — the holy grail AI researchers have been after for decades.

    And while that might sound cool, an AI that can solve general reasoning problems as well as or better than a human might actually be an intrinsically dangerous thing to build.

    At least, that’s the conclusion that many AI safety researchers have come to following the publication of a new line of research that explores how modern AI systems tend to solve problems, and whether we should expect more advanced versions of them to perform dangerous behaviours like seeking power.

    This line of research in AI safety is called “power-seeking”, and although it’s currently not well understood outside the frontier of AI safety and AI alignment research, it’s starting to draw a lot of attention. The first major theoretical study of power-seeking was led by Alex Turner, who’s appeared on the podcast before, and was published at NeurIPS, the world’s top AI conference.

    And today, we’ll be hearing from Edouard Harris, an AI alignment researcher and one of my co-founders at the AI safety company Gladstone AI. Ed’s just completed a significant piece of AI safety research that extends Alex Turner’s original power-seeking work, and that shows what seems to be the first experimental evidence suggesting that we should expect highly advanced AI systems to seek power by default.

    What does power-seeking really mean, though? And what does all this imply for the safety of future, general-purpose reasoning systems? That’s what this episode is all about.

    ***

    Intro music:

    - Artist: Ron Gelinas

    - Track Title: Daybreak Chill Blend (original mix)

    - Link to Track: https://youtu.be/d8Y2sKIgFWc

    *** 

    Chapters:

    - 0:00 Intro

    - 4:00 Alex Turner's research

    - 7:45 What technology wants

    - 11:30 Universal goals

    - 17:30 Connecting observations

    - 24:00 Micro power seeking behaviour

    - 28:15 Ed's research

    - 38:00 The human as the environment

    - 42:30 What leads to power seeking

    - 48:00 Competition as a default outcome

    - 52:45 General concern

    - 57:30 Wrap-up

    • 58 min
    129. Amber Teng - Building apps with a new generation of language models

    It’s no secret that a new generation of powerful and highly scaled language models is taking the world by storm. Companies like OpenAI, AI21Labs, and Cohere have built models so versatile that they’re powering hundreds of new applications, and unlocking entire new markets for AI-generated text.

    In light of that, I thought it would be worth exploring the applied side of language modelling — to dive deep into one specific language model-powered tool, to understand what it means to build apps on top of scaled AI systems. How easily can these models be used in the wild? What bottlenecks and challenges do people run into when they try to build apps powered by large language models? That’s what I wanted to find out.

    My guest today is Amber Teng, and she’s a data scientist who recently published a blog post that got quite a bit of attention, about a resume cover letter generator that she created using GPT-3, OpenAI’s powerful and now-famous language model. I thought her project would make for a great episode, because it exposes so many of the challenges and opportunities that come with the new era of powerful language models that we’ve just entered.

    So today we’ll be exploring exactly that: looking at the applied side of language modelling and prompt engineering, understanding how large language models have made new apps not only possible but also much easier to build, and the likely future of AI-powered products.
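    To make the prompt engineering discussed in this episode concrete, here is a minimal, purely illustrative sketch of how structured user inputs might be assembled into a single text prompt for a model like GPT-3. This is not Amber’s actual code; the template wording, field names, and helper function are all hypothetical.

    ```python
    # Hypothetical sketch of prompt assembly for a cover letter generator.
    # The template and helper below are illustrative, not the actual app's code.

    def build_cover_letter_prompt(name, role, company, skills):
        """Assemble one text prompt from structured user inputs.

        In a real app, the resulting string would be sent to a large
        language model's completion endpoint.
        """
        skills_text = ", ".join(skills)
        return (
            f"Write a professional cover letter for {name}, who is applying "
            f"for the {role} role at {company}. Emphasize these skills: "
            f"{skills_text}. Keep the tone confident but not boastful."
        )

    prompt = build_cover_letter_prompt(
        "Jane Doe", "Data Scientist", "Acme Corp", ["Python", "SQL", "NLP"]
    )
    print(prompt)
    ```

    Much of the work in apps like this goes into iterating on the template wording itself, since small changes to the prompt can noticeably change the model’s output.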

    ***

    Intro music:

    - Artist: Ron Gelinas

    - Track Title: Daybreak Chill Blend (original mix)

    - Link to Track: https://youtu.be/d8Y2sKIgFWc

    ***

    Chapters:

    - 0:00 Intro

    - 2:30 Amber’s background

    - 5:30 Using GPT-3

    - 14:45 Building prompts up

    - 18:15 Prompting best practices

    - 21:45 GPT-3 mistakes

    - 25:30 Context windows

    - 30:00 End-to-end time

    - 34:45 The cost of one cover letter

    - 37:00 The analytics

    - 41:45 Dynamics around company-building

    - 46:00 Commoditization of language modelling

    - 51:00 Wrap-up

    • 51 min
    128. David Hirko - AI observability and data as a cybersecurity weakness

    Imagine you’re a big hedge fund, and you want to go out and buy yourself some data. Data is really valuable for you — it’s literally going to shape your investment decisions and determine your outcomes.

    But the moment you receive your data, a cold chill runs down your spine: how do you know your data supplier gave you the data they said they would? From your perspective, you’re staring down 100,000 rows in a spreadsheet, with no way to tell whether half of them were made up — or maybe even more.

    This might seem like an obvious problem in hindsight, but it’s one most of us haven’t even thought of. We tend to assume that data is data, and that 100,000 rows in a spreadsheet is 100,000 legitimate samples.

    The challenge of making sure you’re dealing with high-quality data, or at least that you have the data you think you do, is called data observability, and it’s surprisingly difficult to solve at scale. In fact, there are now entire companies that specialize in exactly that — one of which is Zectonal, whose co-founder Dave Hirko will be joining us for today’s episode of the podcast.

    Dave has spent his career understanding how to evaluate and monitor data at massive scale. He did that first at AWS in the early days of cloud computing, and now through Zectonal, where he’s working on strategies that allow companies to detect issues with their data — whether they’re caused by intentional data poisoning, or unintentional data quality problems. Dave joined me to talk about data observability, data as a new vector for cyberattacks, and the future of enterprise data management on this episode of the TDS podcast.

    ***
    Intro music:

    - Artist: Ron Gelinas

    - Track Title: Daybreak Chill Blend (original mix)

    - Link to Track: https://youtu.be/d8Y2sKIgFWc

    ***
    Chapters:

    - 0:00 Intro

    - 3:00 What is data observability?

    - 10:45 “Funny business” with data providers

    - 12:50 Data supply chains

    - 16:50 Various cybersecurity implications

    - 20:30 Deep data inspection

    - 27:20 Observed direction of change

    - 34:00 Steps the average person can take

    - 41:15 Challenges with GDPR transitions

    - 48:45 Wrap-up

    • 49 min
    127. Matthew Stewart - The emerging world of ML sensors

    Today, we live in the era of AI scaling. It seems like everywhere you look, people are pushing to make large language models larger or more multi-modal, leveraging ungodly amounts of processing power to do it.

    But although that’s one of the defining trends of the modern AI era, it’s not the only one. At the far opposite extreme from the world of hyperscale transformers and giant dense nets is the fast-evolving world of TinyML, where the goal is to pack AI systems onto small edge devices.

    My guest today is Matthew Stewart, a deep learning and TinyML researcher at Harvard University, where he collaborates with the world’s leading IoT and TinyML experts on projects aimed at getting small devices to do big things with AI. Recently, along with his colleagues, Matt co-authored a paper that introduced a new way of thinking about sensing.

    The idea is to tightly integrate machine learning and sensing on one device. For example, today we might have a sensor like a camera embedded on an edge device, and that camera would have to send data about all the pixels in its field of view back to a central server that might take that data and use it to perform a task like facial recognition. But that’s not great because it involves sending potentially sensitive data — in this case, images of people’s faces — from an edge device to a server, introducing security risks.

    So instead, what if the camera’s output was processed on the edge device itself, so that all that had to be sent to the server was much less sensitive information, like whether or not a given face was detected? These systems — where edge devices harness onboard AI, and share only processed outputs with the rest of the world — are what Matt and his colleagues call ML sensors.
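    The contrast described above can be sketched in a few lines. This is a toy illustration of the ML-sensor idea, not code from Matt’s paper: the detector function is a stand-in for a real onboard model, and the point is simply that only the processed result, never the raw pixels, leaves the device.

    ```python
    # Toy illustration of an ML sensor: raw sensor data stays on the device,
    # and only a coarse, low-sensitivity result is shared externally.

    def on_device_face_detector(pixels):
        """Stand-in for an onboard ML model; returns True if a 'face' is present.

        Here we fake the model: a face is 'detected' if any pixel value
        exceeds a brightness threshold.
        """
        return any(p > 200 for p in pixels)

    def ml_sensor_output(pixels):
        """What the ML sensor shares with the outside world: not the raw
        pixels, just the processed, privacy-preserving result."""
        return {"face_detected": on_device_face_detector(pixels)}

    # The raw frame (potentially sensitive) never leaves the device;
    # only the boolean result does.
    frame = [12, 45, 230, 8]
    print(ml_sensor_output(frame))  # {'face_detected': True}
    ```

    A real ML sensor would run a trained model on embedded hardware, but the interface idea is the same: sensing and inference are fused on one device, and the output is a minimal, task-specific signal.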

    ML sensors really do seem like they’ll be part of the future, and they introduce a host of challenging ethical, privacy, and operational questions that I discussed with Matt on this episode of the TDS podcast.

    *** 

    Intro music:

    - Artist: Ron Gelinas

    - Track Title: Daybreak Chill Blend (original mix)

    - Link to Track: https://youtu.be/d8Y2sKIgFWc

    ***

    Chapters:
    - 3:20 Special challenges with TinyML

    - 9:00 Most challenging aspects of Matt’s work

    - 12:30 ML sensors

    - 21:30 Customizing the technology

    - 24:45 Data sheets and ML sensors

    - 31:30 Customers with their own custom software

    - 36:00 Access to the algorithm

    - 40:30 Wrap-up

    • 41 min
    126. JR King - Does the brain run on deep learning?

    Deep learning models — transformers in particular — are defining the cutting edge of AI today. They’re based on an architecture called an artificial neural network, as you probably already know if you’re a regular Towards Data Science reader. And if you are, then you might also already know that as their name suggests, artificial neural networks were inspired by the structure and function of biological neural networks, like those that handle information processing in our brains.

    So it’s a natural question to ask: how far does that analogy go? Today, deep neural networks can master an increasingly wide range of skills that were historically unique to humans — skills like creating images, using language, planning, playing video games, and so on. Could that mean that these systems are processing information like the human brain, too?

    To explore that question, we’ll be talking to JR King, a CNRS researcher at the Ecole Normale Supérieure, affiliated with Meta AI, where he leads the Brain & AI group. There, he works on identifying the computational basis of human intelligence, with a focus on language. JR is a remarkably insightful thinker, who’s spent a lot of time studying biological intelligence, where it comes from, and how it maps onto artificial intelligence. And he joined me to explore the fascinating intersection of biological and artificial information processing on this episode of the TDS podcast.

    ***

    Intro music:

    - Artist: Ron Gelinas

    - Track Title: Daybreak Chill Blend (original mix)

    - Link to Track: https://youtu.be/d8Y2sKIgFWc 

    ***

    Chapters:

    - 2:30 What is JR’s day-to-day?

    - 5:00 AI and neuroscience

    - 12:15 Quality of signals within the research

    - 21:30 Universality of structures

    - 28:45 What makes up a brain?

    - 37:00 Scaling AI systems

    - 43:30 Growth of the human brain

    - 48:45 Observing certain overlaps

    - 55:30 Wrap-up

    • 55 min
    125. Ryan Fedasiuk - Can the U.S. and China collaborate on AI safety?

    It’s no secret that the US and China are geopolitical rivals. And it’s also no secret that that rivalry extends into AI — an area both countries consider to be strategically critical.

    But in a context where potentially transformative AI capabilities are being unlocked every few weeks, many of which lend themselves to military applications with hugely destabilizing potential, you might hope that the US and China would have robust agreements in place to deal with things like runaway conflict escalation triggered by an AI-powered weapon that misfires. Even at the height of the Cold War, the US and Russia had robust lines of communication to de-escalate potential nuclear conflicts, so surely the US and China have something at least as good in place now… right?

    Well, they don’t, and to understand the reason why — and what we should do about it — I’ll be speaking to Ryan Fedasiuk, a Research Analyst at Georgetown University’s Center for Security and Emerging Technology and an Adjunct Fellow at the Center for a New American Security. Ryan recently wrote a fascinating article for Foreign Policy magazine, where he outlines the challenges and importance of US-China collaboration on AI safety. He joined me to talk about the U.S. and China’s shared interest in building safe AI, how each side views the other, and what realistic China AI policy looks like on this episode of the TDS podcast.

    • 48 min

Customer Reviews

5.0 out of 5
4 Ratings
