In this episode of Scott and Mark Learn To, Scott Hanselman and Mark Russinovich dive into the chaotic world of large language models, hallucinations, and grounded AI. Through hilarious personal stories, they explore the difference between jailbreaks, induced hallucinations, and factual grounding in AI systems. With live prompts and screen shares, they test the limits of AI's reasoning and reflect on the evolving challenges of trust, creativity, and accuracy in today's tools.
Takeaways:
- AI is getting better, but we still need to be careful and double-check our work
- AI sometimes gives wrong answers confidently
- Jailbreaks break the rules on purpose, while hallucinations are just AI making stuff up
Who are they?
View Scott Hanselman on LinkedIn
View Mark Russinovich on LinkedIn
Watch Scott and Mark Learn on YouTube
Listen to other episodes at scottandmarklearn.to
Discover and follow other Microsoft podcasts at microsoft.com/podcasts
Hosted on Acast. See acast.com/privacy for more information.
Information
- Show
- Frequency: Every two weeks
- Published: May 28, 2025, 16:15 UTC
- Length: 25 minutes
- Season: 1
- Episode: 17
- Rating: Suitable for children