This research paper introduces DeepSeek-R1, a large language model whose reasoning capabilities are enhanced through reinforcement learning (RL). Two versions are presented: DeepSeek-R1-Zero, trained purely via RL without supervised fine-tuning, and DeepSeek-R1, which adds multi-stage training and cold-start data for improved readability and performance. DeepSeek-R1 achieves results comparable to OpenAI's o1-1217 on a range of reasoning benchmarks. The study also explores distilling DeepSeek-R1's reasoning capabilities into smaller, more efficient models, achieving state-of-the-art results among models of comparable size. Finally, the paper discusses unsuccessful attempts with process reward models (PRM) and Monte Carlo Tree Search (MCTS), providing valuable insights for future research.
https://github.com/deepseek-ai/DeepSeek-R1/blob/main/DeepSeek_R1.pdf
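The pure-RL training mentioned above relies on simple rule-based rewards rather than a learned reward model: an accuracy reward that checks the final answer, and a format reward that checks the response structure. Below is a minimal sketch of that idea; the `<think>` tag convention, exact-match answer checking, and the specific scores are assumptions for illustration, not the paper's exact implementation.

```python
import re

def format_reward(completion: str) -> float:
    """Reward 1.0 if reasoning is wrapped in <think>...</think> and a
    final answer follows the closing tag, else 0.0 (assumed convention)."""
    return 1.0 if re.search(r"<think>.+?</think>\s*\S", completion, re.DOTALL) else 0.0

def accuracy_reward(completion: str, reference: str) -> float:
    """Extract the text after </think> as the final answer and compare it
    to the reference via normalized exact match (assumed checker)."""
    match = re.search(r"</think>\s*(.+)\s*$", completion, re.DOTALL)
    if not match:
        return 0.0
    return 1.0 if match.group(1).strip() == reference.strip() else 0.0

def total_reward(completion: str, reference: str) -> float:
    # A rollout that is both well-formatted and correct scores highest.
    return format_reward(completion) + accuracy_reward(completion, reference)

sample = "<think>2 + 2 is 4.</think> 4"
print(total_reward(sample, "4"))  # -> 2.0
```

Because both checks are deterministic rules, this kind of reward is cheap to compute at scale and cannot be "gamed" the way a learned reward model can, which is one motivation the paper gives for avoiding process reward models.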
Info
- Program
- Frequency: Updated daily
- Published: January 26, 2025, 9:44 AM UTC
- Length: 19 minutes
- Rating: All ages
