1 hr 5 min

AI Safety with Francesco Mosconi - Searching for the Question Live #95
Searching For The Question with David Orban

    • Technology

In this live conversation we look into the complexities, challenges, and cutting-edge practices shaping the future of AI. What does it take to align LLMs with ethical standards and user safety? How is the balance between accuracy, safety, and efficiency achieved in the development of the latest models? What contradictions do designers face when training and aligning LLMs, and how can they navigate them?
We also discuss the concept of "nerfing" in LLMs: why is it necessary, and what implications does it have for model performance and safety? How can sophisticated users adjust LLMs to navigate controversial topics more effectively? What is the role of a Red Team in ensuring LLMs are robust, ethical, and aligned with societal values? How can LLM safety be increased to prevent misuse and ensure compliance with ethical guidelines?
Discover strategies for balancing customization with generalization, and learn about the trade-offs between innovation and reliability.
We cover designing, training, and evaluating LLMs, with a focus on ensuring these powerful tools contribute positively to society.
Francesco Mosconi is an AI expert with deep experience in data science. He is the author of the book Zero to Deep Learning.
He is currently part of the AI safety team at Anthropic, and previously he served as head of analytics at You.com.
Francesco also invests in AI companies as a General Partner of Pioneer Fund.
You can find him on:
http://www.mosconi.me
http://x.com/framosconis
