10 Episodes

Knowledge Distillation is the podcast that brings together a mixture of experts from across the Artificial Intelligence community. We talk to the world’s leading researchers about their experiences developing cutting-edge models, as well as the technologists taking AI tools out of the lab and turning them into commercial products and services. Knowledge Distillation also takes a critical look at the impact of artificial intelligence on society, opting for expert analysis instead of hysterical headlines. We are committed to featuring at least 50% female voices on the podcast, elevating the many brilliant women working in AI.
Host Helen Byrne is a VP at the British AI compute systems maker Graphcore, where she leads the Solution Architects team, helping innovators build their AI solutions using Graphcore’s technology.
Helen previously led AI Field Engineering and worked in AI Research, tackling problems in distributed machine learning.
Before landing in Artificial Intelligence, Helen worked in FinTech and as a secondary school teacher. Her background is in mathematics, and she has an MSc in Artificial Intelligence.
Knowledge Distillation is produced by Iain Mackenzie. 

Knowledge Distillation with Helen Byrne

    • News


    Neuroscience and AI with Basis co-founder Emily Mackevicius

    Emily Mackevicius is a co-founder and director of Basis, a nonprofit applied research organization focused on understanding and building intelligence while advancing society’s ability to solve intractable problems.
    Emily is a member of the Simons Society of Fellows, and a postdoc in the Aronov lab and the Center for Theoretical Neuroscience at Columbia’s Zuckerman Institute.
    Her research uncovers how complex cognitive behaviors are generated by networks of neurons through local interactions and learning mechanisms.

    Links to work mentioned in this episode: 
    Basis, the research institute co-founded by Emily: basis.ai
    Emily's work with Fang et al. relating brain computations to AI/ML algorithms: https://elifesciences.org/articles/80680
    Basis blog post about this work (Fang et al.): https://www.basis.ai/blog/sr-fang2023/
    Stachenfeld et al. paper: https://www.nature.com/articles/nn.4650
    Emily's work with Michale Fee relating Reinforcement Learning algorithms to brain areas that birds use when they learn to sing: https://www.sciencedirect.com/science/article/abs/pii/S0959438817302349
    Emily's work with Aronov lab colleagues on how the hippocampus forms one-shot/episodic memory 'barcodes' in food-caching birds: https://www.cell.com/cell/fulltext/S0092-8674(24)00235-6
    NPR story about this work: https://www.npr.org/2024/04/05/1198909635/chickadee-bird-brain-memory-brain-pattern-food
    GitHub collab-creatures repo for the Basis collaborative intelligent systems project: https://github.com/BasisResearch/collab-creatures
    Basis's core open-source code repository for causal reasoning, ChiRho: https://basisresearch.github.io/chirho/getting_started.html
    Basis's city policy dashboard, polis: http://polis.basis.ai/

    • 35 min
    Stable Diffusion 3 with Stability AI's Kate Hodesdon

    Stability AI’s Stable Diffusion model is one of the best known and most widely used text-to-image systems.
    The decision to open-source both the model weights and code has ensured its mass adoption, with the company claiming more than 330 million downloads.
    Details of the latest version, Stable Diffusion 3, were revealed in a paper published by the company in March 2024.
    In this episode, Stability AI’s Kate Hodesdon joins Helen to discuss some of SD3’s new features, including improved capabilities for generating text within images and overall image quality.
    Kate also talks about developments to the underlying model structure of Stable Diffusion, as well as the challenges associated with creating models that deliver more efficient inference.
    The Stable Diffusion 3 paper can be found here: https://arxiv.org/pdf/2403.03206.pdf

    • 32 min
    Inside OpenAI's trust and safety operation - with Rosie Campbell

    No organisation in the AI world is under more intense scrutiny than OpenAI. The maker of DALL-E, GPT-4, ChatGPT and Sora is constantly pushing the boundaries of artificial intelligence and has supercharged the general public's enthusiasm for AI technologies.
    With that elevated position come questions about how OpenAI can ensure its models are not used for malign purposes.
    In this interview we talk to Rosie Campbell from OpenAI’s policy research team about the many processes and safeguards in place to prevent abuse. Rosie also talks about the forward-looking work of the policy research team, anticipating longer-term risks that might emerge with more advanced AI systems.
    Helen and Rosie discuss the challenges associated with agentic systems (AI that can interface with the wider world via APIs and other technologies), red-teaming new models, and whether advanced AIs should have ‘rights’ in the same way that humans or animals do.

    You can read the paper referenced in this episode ‘Practices for Governing Agentic AI Systems’ co-written by Rosie and her colleagues: https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf

    Watch the video of the interview here: https://www.youtube.com/watch?v=81LNrlEqgcM 

    • 45 min
    Deepfakes deep dive with Nina Schick

    Nina Schick is a leading commentator on Artificial Intelligence and its impact on business, geopolitics and humanity. 
    Her book ‘Deepfakes and the Infocalypse’ charts the early use of generative AI to create deepfake pornography and the technology’s subsequent use as a tool of political manipulation.
    With over two decades of geopolitical experience, Nina has long been focused on macro-trends for society. She has advised global leaders, including Joe Biden, the President of the United States, and Anders Fogh Rasmussen, the former Secretary General of NATO. 
    She has also worked with some of the world’s premier companies and organisations, including Microsoft, Adobe, DARPA, and the UN.   
    A familiar face at technology conferences such as CES, TEDx, CogX and WebSummit, Nina is also a regular contributor to discussions about AI on the BBC, CNN, Sky News, Bloomberg and more. 
    In her conversation with Helen, Nina outlines the continuing risks posed by deepfake technologies and the technological counter-measures that can be used to safeguard against them. 
    You can watch the video of her interview on YouTube: https://youtu.be/f4zTbGWYan8

    • 38 min
    Papers of the Month with Charlie Blake, Research Engineer at Graphcore

    Charlie Blake from Graphcore’s research team discusses their AI Papers of the Month for January 2024. 

    Graphcore's research team has been collating and sharing an internal review of the most consequential AI papers every month for a number of years.

    Now, for the first time, the research team is making this valuable resource public, to help the wider AI community keep up to date with the most exciting breakthroughs.

    Papers of the Month for January 2024 (with some work from December 2023) includes: 

    Bad Students Make Great Teachers: Active Learning Accelerates Large-Scale Visual Understanding 
    https://arxiv.org/abs/2312.05328
    Authors: Talfan Evans, Shreya Pathak, Hamza Merzic, et al. (Google DeepMind, UCL) 

    Beyond Chinchilla-Optimal: Accounting for Inference in Language Model Scaling Laws
    https://arxiv.org/abs/2401.00448
    Authors: Nikhil Sardana and Jonathan Frankle (MosaicML) 

    Analyzing and Improving the Training Dynamics of Diffusion Models
    https://arxiv.org/abs/2312.02696
    Authors: Tero Karras et al. (Nvidia, Aalto University) 

    Solving olympiad geometry without human demonstrations
    https://www.nature.com/articles/s41586-023-06747-5
    Authors: Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He and Thang Luong (Google DeepMind, New York University) 

    To read about January’s Papers of the Month, visit the Graphcore blog.
    https://www.graphcore.ai/posts/great-teachers-and-beyond-chinchilla-papers-of-the-month-jan-2024

    • 43 min
    The rise of synthetic data with Florian Hönicke from Jina AI

    Data is the fuel powering the AI revolution, but what do we do when there is simply not enough data to satisfy the insatiable appetite of model training?

    In this episode, Florian Hönicke, Principal AI Engineer at Jina AI, discusses the use of LLMs to generate synthetic data to help solve the data bottleneck. He also addresses the potential risks associated with an over-reliance on synthetic data. 
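    To make the technique concrete, here is a minimal sketch of LLM-driven synthetic data generation: an LLM is asked to invent a search query for each document, yielding (query, document) pairs that could train a retrieval or embedding model. The model name, prompt and pairing scheme are illustrative assumptions, not a description of Jina AI's actual pipeline.

    # Minimal sketch of LLM-driven synthetic data generation (Python).
    # Model name, prompt and schema are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def synthetic_query_for(document: str) -> str:
        """Ask an LLM to invent a search query that the document answers."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice of generator model
            messages=[
                {"role": "system",
                 "content": "Write one short search query answered by the text."},
                {"role": "user", "content": document},
            ],
        )
        return response.choices[0].message.content.strip()

    # Each (query, document) pair becomes a positive training example
    # for a retrieval or embedding model.
    documents = ["Graphcore builds AI compute systems in Bristol, UK."]
    pairs = [(synthetic_query_for(doc), doc) for doc in documents]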

    German startup Jina AI is one of the many exciting companies coming out of Europe, supporting the development and commercialisation of generative AI. 

    The team at Jina AI gained widespread attention in late 2023 for the release of the first open-source text embedding model with an 8192-token context length. Jina-embeddings-v2 achieves state-of-the-art performance on a range of embedding tasks and matches the performance of OpenAI's proprietary ada-002 model.
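    For readers who want to try the model, here is a minimal usage sketch. It assumes the open weights are published on Hugging Face under the identifier jinaai/jina-embeddings-v2-base-en and that the sentence-transformers library is installed; check the model card for authoritative instructions.

    # Minimal sketch: embedding two sentences with jina-embeddings-v2 and
    # comparing them by cosine similarity. The model identifier is an
    # assumption; see the Hugging Face model card for canonical usage.
    from numpy import dot
    from numpy.linalg import norm
    from sentence_transformers import SentenceTransformer

    # trust_remote_code is needed if the model ships custom pooling code
    model = SentenceTransformer(
        "jinaai/jina-embeddings-v2-base-en", trust_remote_code=True
    )

    sentences = ["How is the weather today?", "What is the current weather like?"]
    a, b = model.encode(sentences)  # one embedding vector per sentence

    # Close paraphrases should score near 1.0
    print(dot(a, b) / (norm(a) * norm(b)))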

    Watch the video of our interview: https://youtu.be/AP80hZajk5w

    • 40 min
