AI is lying to you, and here's why. Retrieval-Augmented Generation (RAG) was supposed to fix AI hallucinations, but it keeps falling short. In this episode, we break down the limitations of naïve RAG, the rise of dense retrieval, and how newer approaches like Agentic RAG, REPLUG, and RAG-Fusion are revolutionizing AI search accuracy.
🔍 Key Insights:
- Why naïve RAG fails and leads to poor retrieval
- How Contriever and dense retrieval improve accuracy
- How REPLUG adapts retrieval for black-box LLMs
- Why RAG-Fusion is a game-changer for AI search (see the sketch after this list)
- The future of AI retrieval beyond vector databases
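For the curious, here is a minimal sketch of the reciprocal rank fusion (RRF) step at the core of RAG-Fusion: an LLM rewrites the user query into several variants, each variant is retrieved separately, and the resulting ranked lists are merged by RRF. This is an illustration under common assumptions (the k=60 constant from the standard RRF formula, toy document ids and function names), not the paper's own code.

```python
# Minimal sketch of the Reciprocal Rank Fusion (RRF) step used by RAG-Fusion.
# Assumption: each inner list in `rankings` is the ranked result of one
# LLM-generated query variant; retrieval itself is out of scope here.
from collections import defaultdict


def reciprocal_rank_fusion(rankings, k=60):
    """Merge several ranked lists of document ids into one fused ranking.

    A document's fused score is the sum of 1 / (k + rank) over every list
    it appears in; k=60 is the constant from the standard RRF formula.
    """
    scores = defaultdict(float)
    for ranked_docs in rankings:
        for rank, doc_id in enumerate(ranked_docs, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Toy example: three query variants, each with its own ranked retrieval result.
rankings = [
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_b", "doc_c", "doc_a"],
]
print(reciprocal_rank_fusion(rankings))  # doc_b wins: it ranks high in every list
```

The intuition: a document that sits near the top of several variant rankings beats one that tops only a single list, which is why fusing multiple query rewrites is more robust than betting everything on one phrasing.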
If you’ve ever wondered why LLMs still struggle with real knowledge retrieval, this is the episode you need!
🎧 Listen now and stay ahead in AI!
References:
[2005.11401] Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
[2112.09118] Unsupervised Dense Information Retrieval with Contrastive Learning
[2301.12652] REPLUG: Retrieval-Augmented Black-Box Language Models
[2402.03367] RAG-Fusion: a New Take on Retrieval-Augmented Generation
[2312.10997] Retrieval-Augmented Generation for Large Language Models: A Survey
Information
- Show
- Frequency: Updated weekly
- Published: March 19, 2025 at 2:30 AM UTC
- Length: 23 min
- Season: 2
- Episode: 67
- Rating: All ages
