This research paper introduces DeepSeek-R1, a large language model enhanced for reasoning capabilities using reinforcement learning (RL). Two versions are presented: DeepSeek-R1-Zero, trained purely via RL without supervised fine-tuning, and DeepSeek-R1, which incorporates additional multi-stage training and cold-start data for improved readability and performance. DeepSeek-R1 achieves results comparable to OpenAI's o1-1217 on various reasoning benchmarks. The study also explores distilling DeepSeek-R1's reasoning capabilities into smaller, more efficient models, achieving state-of-the-art results. Finally, the paper discusses unsuccessful attempts using process reward models and Monte Carlo Tree Search, providing valuable insights for future research.
https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf
Information
- Show
- Frequency: Daily
- Published: 09:44 UTC, January 26, 2025
- Duration: 19 minutes
- Rating: Clean