Creating tested, reliable AI applications

Practical AI: Machine Learning, Data Science, LLM

It can be frustrating to get an AI application working amazingly well 80% of the time and failing miserably the other 20%. How can you close the gap and create something that you can rely on? Chris and Daniel talk through this process, behavior testing, and the flow from prototype to production in this episode. They also talk a bit about the apparent slowdown in the release of frontier models.
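As a rough sketch of what the behavior testing discussed here can look like in practice (the `answer_question` function, the `my_app` module, and the specific checks are hypothetical, not from the episode), a pytest-style test might assert properties of an LLM-backed function's output rather than exact strings:

```python
# Hypothetical behavior tests for an LLM-backed app (illustrative sketch only).
# `my_app.answer_question` is an assumed wrapper around your model or RAG pipeline.
import pytest

from my_app import answer_question  # hypothetical module, not a real package


@pytest.mark.parametrize("question, must_contain", [
    ("What year did Apollo 11 land on the Moon?", "1969"),
    ("What is the capital of France?", "Paris"),
])
def test_answers_contain_key_facts(question, must_contain):
    # Assert on a property of the output (a key fact appears),
    # not on an exact string, so wording can vary between model runs.
    assert must_contain in answer_question(question)


def test_declines_out_of_scope_questions():
    # Negative behavior: the app should decline rather than invent an answer
    # for questions it has no data for.
    answer = answer_question("What is my current bank account balance?").lower()
    assert any(phrase in answer for phrase in ("can't", "cannot", "don't have access"))
```

Running checks like these against a fixed suite of prompts before each release is one way to turn an "80% of the time" prototype into something you can rely on.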

Join the discussion

Changelog++ members save 10 minutes on this episode because they made the ads disappear. Join today!

Sponsors:

  • Fly.io – The home of Changelog.com — Deploy your apps close to your users — global Anycast load-balancing, zero-configuration private networking, hardware isolation, and instant WireGuard VPN connections. Push-button deployments that scale to thousands of instances. Check out the speedrun to get started in minutes.
  • Timescale – Purpose-built performance for AI. Build RAG, search, and AI agents on the cloud with PostgreSQL and purpose-built extensions for AI: pgvector, pgvectorscale, and pgai.
  • Eight Sleep – Up to $600 off Pod 4 Ultra. Go to eightsleep.com/changelog and use the code CHANGELOG. You can try it for free for 30 days - but we’re confident you will not want to return it (we love ours). Once you experience AI-optimized sleep, you’ll wonder how you ever slept without it. Currently shipping to: United States, Canada, United Kingdom, Europe, and Australia.

Featuring:

  • Chris Benson – Twitter, GitHub, LinkedIn, Website
  • Daniel Whitenack – Twitter, GitHub, Website

Show Notes:

  • MLOps Community “Agents in Production” event

Something missing or broken? PRs welcome!
