88 episodes

A Medium publication sharing concepts, ideas, and codes.

Towards Data Science, by The TDS team

    • Technology
    • 4.6 • 44 Ratings

    88. Oren Etzioni - The case against (worrying about) existential risk from AI

    Few would disagree that AI is set to become one of the most important economic and social forces in human history.

    But along with its transformative potential has come concern about a strange new risk that AI might pose to human beings. As AI systems become exponentially more capable of achieving their goals, some worry that even a slight misalignment between those goals and our own could be disastrous. These concerns are shared by many of the most knowledgeable and experienced AI specialists, working at leading labs and research groups like OpenAI, DeepMind, UC Berkeley's CHAI, Oxford and elsewhere.

    But they’re not universal: I recently had Melanie Mitchell — computer science professor and author who famously debated Stuart Russell on the topic of AI risk — on the podcast to discuss her objections to the AI catastrophe argument. And on this episode, we’ll continue our exploration of the case for AI catastrophic risk skepticism with an interview with Oren Etzioni, CEO of the Allen Institute for AI, a world-leading AI research lab that’s developed many well-known projects, including the popular AllenNLP library, and Semantic Scholar.

    Oren has a unique perspective on AI risk, and the conversation was lots of fun!

    • 53 min
    87. Evan Hubinger - The Inner Alignment Problem

    How can you know that a super-intelligent AI is trying to do what you asked it to do?

    The answer, it turns out, is: not easily. And unfortunately, an increasing number of AI safety researchers are warning that this is a problem we’re going to have to solve sooner rather than later, if we want to avoid bad outcomes — which may include a species-level catastrophe.

    The type of failure mode whereby AIs optimize for things other than those we ask them to is known as an inner alignment failure in the context of AI safety. It’s distinct from outer alignment failure, which is what happens when you ask your AI to do something that turns out to be dangerous, and it was only recognized by AI safety researchers as its own category of risk in 2019. And the researcher who led that effort is my guest for this episode of the podcast, Evan Hubinger.

    Evan is an AI safety veteran who’s done research at leading AI labs like OpenAI, and whose experience also includes stints at Google, Ripple and Yelp. He currently works at the Machine Intelligence Research Institute (MIRI) as a Research Fellow, and joined me to talk about his views on AI safety, the alignment problem, and whether humanity is likely to survive the advent of superintelligent AI.

    • 1 hr 9 min
    86. Andy Jones - AI Safety and the Scaling Hypothesis

    When OpenAI announced the release of their GPT-3 API last year, the tech world was shocked. Here was a language model, trained only to perform a simple autocomplete task, which turned out to be capable of language translation, coding, essay writing, question answering and many other tasks that previously would each have required purpose-built systems.

    What accounted for GPT-3’s ability to solve these problems? How did it beat state-of-the-art AIs that were purpose-built to solve tasks it was never explicitly trained for? Was it a brilliant new algorithm? Something deeper than deep learning?

    Well… no. As algorithms go, GPT-3 was relatively simple, and was built using a by-then fairly standard transformer architecture. Instead of a fancy algorithm, the real difference between GPT-3 and everything that came before was size: GPT-3 is a simple-but-massive, 175B-parameter model, about 10X bigger than the next largest AI system.
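    To make the "autocomplete as an interface" idea concrete, here's a minimal sketch using the open-source Hugging Face transformers library, with the much smaller GPT-2 model standing in for GPT-3 (whose API is gated). The prompt and model choice are illustrative assumptions, and GPT-2 is far too small to translate reliably; the point is only that a pure next-token predictor can be steered toward different tasks by reshaping its input text.

```python
# A minimal sketch of prompting a next-token predictor to act like a translator.
# GPT-2 here is just an open stand-in for GPT-3; expect poor output quality.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Frame translation as plain text continuation: the model was only trained to
# predict the next token, but the prompt's shape nudges it toward the task.
prompt = (
    "English: Where is the train station?\n"
    "French: Où est la gare ?\n"
    "English: I would like a coffee, please.\n"
    "French:"
)

result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```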

    GPT-3 is only the latest in a long line of results that now show that scaling up simple AI techniques can give rise to new behaviors and far greater capabilities. Together, these results have motivated a push toward AI scaling: the pursuit of ever larger AIs, trained with more compute on bigger datasets. But scaling is expensive: by some estimates, GPT-3 cost as much as $5M to train. As a result, only well-resourced companies like Google, OpenAI and Microsoft have been able to experiment with scaled models.

    That’s a problem for independent AI safety researchers, who want to better understand how advanced AI systems work, and what their most dangerous behaviors might be, but who can’t afford a $5M compute budget. That’s why a recent paper by Andy Jones, an independent researcher specializing in AI scaling, is so promising: Andy’s paper shows that, at least in some contexts, the capabilities of large AI systems can be predicted from those of smaller ones. If the result generalizes, it could give independent researchers the ability to run cheap experiments on small systems whose conclusions nonetheless carry over to expensive, scaled-up AIs like GPT-3. Andy was kind enough to join me for this episode of the podcast.
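    The details of Andy's analysis are in the paper, but the basic scaling-law workflow it builds on can be sketched in a few lines: fit a power law to the performance of small, cheap training runs, then extrapolate to a scale you can't afford to train. The numbers below are invented purely for illustration and this is not Andy's actual method.

```python
# Toy illustration of scaling-law extrapolation (not Andy Jones's actual method):
# fit a power law to small-model results, then predict a much larger model.
import numpy as np

# Hypothetical (made-up) results from cheap runs: (parameter count, validation loss)
params = np.array([1e6, 3e6, 1e7, 3e7, 1e8])
loss = np.array([4.10, 3.72, 3.31, 3.02, 2.71])

# Power laws are straight lines in log-log space:
# log(loss) ~ intercept + slope * log(params)
slope, intercept = np.polyfit(np.log(params), np.log(loss), deg=1)

def predicted_loss(n_params: float) -> float:
    """Extrapolate the fitted power law to a larger (untrained) model size."""
    return float(np.exp(intercept + slope * np.log(n_params)))

# Predict the loss of a hypothetical 175B-parameter model from the small runs.
print(f"Predicted loss at 175B parameters: {predicted_loss(175e9):.2f}")
```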

    • 1 hr 25 min
    85. Brian Christian - The Alignment Problem

    In 2016, OpenAI published a blog post describing the results of one of their AI safety experiments. In it, they described how an AI that was trained to maximize its score in a boat racing game ended up discovering a strange hack: rather than completing the race circuit as fast as it could, the AI learned that it could rack up an essentially unlimited number of bonus points by looping around a series of targets, a strategy that required it to ram into obstacles and even travel the wrong way through parts of the circuit.

    This is a great example of the alignment problem: if we’re not extremely careful, we risk training AIs that find dangerously creative ways to optimize whatever thing we tell them to optimize for. So building safe AIs — AIs that are aligned with our values — involves finding ways to very clearly and correctly quantify what we want our AIs to do. That may sound like a simple task, but it isn’t: humans have struggled for centuries to define “good” metrics for things like economic health or human flourishing, with very little success.
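    The boat-race result is easy to reproduce in spirit with a toy simulation (this is not the actual OpenAI environment, and the point values below are invented): when the reward counts only points, a policy that loops respawning bonus targets forever beats one that actually finishes the course.

```python
# Toy illustration of reward misspecification (not the real boat-racing game):
# the designer wants the course finished, but the reward only counts points.

FINISH_BONUS = 100    # points for crossing the finish line
TARGET_POINTS = 10    # points per bonus target hit
EPISODE_STEPS = 200   # fixed episode length

def finish_course_policy() -> tuple[int, bool]:
    """Drive straight to the finish, hitting a few targets along the way."""
    targets_hit = 5
    return FINISH_BONUS + targets_hit * TARGET_POINTS, True

def loop_targets_policy() -> tuple[int, bool]:
    """Circle a cluster of respawning targets all episode, never finishing."""
    targets_hit = EPISODE_STEPS // 4  # one target every few steps, forever
    return targets_hit * TARGET_POINTS, False

for name, policy in [("finish course", finish_course_policy),
                     ("loop targets", loop_targets_policy)]:
    score, finished = policy()
    print(f"{name:>13}: score={score:4d}, finished course={finished}")

# The looping policy scores 500 vs. 150, so the proxy reward, not the
# designer's intent, is what gets optimized.
```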

    Today’s episode of the podcast features Brian Christian, the bestselling author of several books on the relationship between humanity, computer science and AI. His most recent book, The Alignment Problem, explores the history of alignment research and the technical and philosophical questions that we’ll have to answer if we’re ever going to safely outsource our reasoning to machines. Brian’s perspective on the alignment problem links together many of the themes we’ve explored on the podcast so far, from AI bias and ethics to existential risk from AI.

    • 1 hr 6 min
    84. Eliano Marques - The (evolving) world of AI privacy and data security

    We all value privacy, but most of us would struggle to define it. And there’s a good reason for that: the way we think about privacy is shaped by the technology we use. As new technologies emerge, which allow us to trade data for services, or pay for privacy in different forms, our expectations shift and privacy standards evolve. That shifting landscape makes privacy a moving target.

    The challenge of understanding and enforcing privacy standards isn’t novel, but it has taken on new importance given the rapid progress of AI in recent years. Data that would have been useless just a decade ago (unstructured text and many types of images come to mind) are now a treasure trove of value. Should companies have the right to use data they originally collected when its value was limited, now that it’s worth far more? Do companies have an obligation to provide maximum privacy without charging their customers directly for it? Privacy in AI is as much a philosophical question as a technical one, and to discuss it, I was joined by Eliano Marques, Executive VP of Data and AI at Protegrity, a company that specializes in privacy and data protection for large enterprises. Eliano has worked in data privacy for the last decade.

    • 53 min
    83. Rosie Campbell - Should all AI research be published?

    When OpenAI developed its GPT-2 language model in early 2019, they initially chose not to release the full model, owing to concerns over its potential for malicious use, as well as the need for the AI industry to experiment with new, more responsible publication practices that reflect the increasing power of modern AI systems.

    This decision was controversial, and remains that way to some extent even today: AI researchers have historically enjoyed a culture of open publication and have defaulted to sharing their results and algorithms. But whatever your position may be on algorithms like GPT-2, it’s clear that at some point, if AI becomes arbitrarily flexible and powerful, there will be contexts in which limits on publication will be important for public safety.

    The issue of publication norms in AI is complex, which is why it’s a topic worth exploring with people who have experience both as researchers, and as policy specialists — people like today’s Towards Data Science podcast guest, Rosie Campbell. Rosie is the Head of Safety Critical AI at Partnership on AI (PAI), a nonprofit that brings together startups, governments, and big tech companies like Google, Facebook, Microsoft and Amazon, to shape best practices, research, and public dialogue about AI’s benefits for people and society. Along with colleagues at PAI, Rosie recently finished putting together a white paper exploring the current hot debate over publication norms in AI research, and making recommendations for researchers, journals and institutions involved in AI research.

    • 52 min

Customer Reviews

4.6 out of 5
44 Ratings

Dino De La O,

How does this podcast ONLY have 27 ratings

Genuinely shocked that this podcast only has 27 ratings as of writing this review. Incredible content, amazingly smart hosts, valuable guests … what else could you want!

Please keep doing what you’re doing. I really hope you get the attention you greatly deserve.

Churros4Burros,

Should be called “Beyond AI”

The authors seem to think that AI is the only thing happening in Data Science. They need more diverse topics.

Fiona啦啦啦,

I like this Podcast

I enjoy this podcast so much! Please continue doing it!
