This is the audio breakdown of our highly requested GPU benchmark: Ollama versus Llama.cpp for local Large Language Model (LLM) inference on the AMD Instinct MI60. We test a single, massive 70-billion-parameter model to measure which framework delivers the highest tokens per second (t/s). The results are not what you'd expect! If you're running local AI or evaluating the AMD ROCm stack for performance, you need to hear this speed comparison. We discuss the granular control you get with Llama.cpp versus the ease of use of Ollama, and show exactly how to compile for peak efficiency (a sample build recipe is sketched below).
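For reference, a minimal build sketch, assuming a working ROCm install and the MI60's gfx906 architecture; the flag names follow llama.cpp's CMake-based ROCm build documentation at the time of writing and may change between releases, and the model path in the last step is a hypothetical placeholder:

  git clone https://github.com/ggerganov/llama.cpp
  cd llama.cpp
  # Configure a Release build with the HIP/ROCm backend for the MI60 (gfx906)
  cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx906 -DCMAKE_BUILD_TYPE=Release
  cmake --build build --config Release -j
  # Measure tokens/second with the bundled benchmark tool (replace with your own GGUF file)
  ./build/bin/llama-bench -m ./models/llama-70b-q4.gguf

Pinning AMDGPU_TARGETS to your card's architecture keeps compile times down and ensures the kernels are built for the GPU you actually benchmark.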
For the full video and visual benchmark tables: https://youtube.com/live/CRqHIVR6PDk
Related article: https://www.ojambo.com/web-ui-for-ai-deepseek-r1-32b-model
#AI #LLM #AMD #Ollama #LlamaCPP #ROCm #GPU #Benchmarking #LocalAI #TechPodcast
Information
- Published: November 1, 2025, 2:54 AM [UTC]
- Length: 1 hour 25 minutes
- Age rating: Clean
