How to Fine-tune DeepSeek with Reinforcement Learning [webinar]

DeepSeek-R1 is the first open-source model to close the performance gap with the best commercial models. But the question remains: "How can I customize and fine-tune DeepSeek?"

Fine-tuning reasoning models like DeepSeek-R1 and its distillations has been uncharted territory, with no established training recipes. Join us on Feb. 12 at 10 am PT for a behind-the-scenes look at a new framework for efficient LoRA-based reinforcement learning (RL) that enables you to customize DeepSeek-R1 for your own data and use case.

Topics include:

  1. How to fine-tune DeepSeek-R1-Distill-Qwen-7B: Customize DeepSeek with RL-based techniques (see the sketch after this list).

  2. Performance benchmarks: Quantify the impact of fine-tuning on reasoning tasks.

  3. When to fine-tune DeepSeek: Know when to fine-tune a reasoning model vs. stick with a standard small language model (SLM).
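
To make the approach concrete, here is a minimal sketch of LoRA-based RL fine-tuning using Hugging Face TRL's GRPOTrainer (GRPO is the algorithm DeepSeek used to train R1 itself). The webinar's framework is not public, so this is an illustrative stand-in, not the presenters' recipe: the toy dataset, the correctness_reward function, and the hyperparameters below are all assumptions for demonstration.

```python
# Illustrative sketch only: LoRA-based RL (GRPO) fine-tuning with Hugging Face
# TRL + PEFT. The dataset, reward function, and hyperparameters are hypothetical.
from datasets import Dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# Hypothetical toy dataset of prompts with reference answers; replace with your own data.
train_dataset = Dataset.from_dict({
    "prompt": ["What is 7 * 8?", "Compute 15 + 27."],
    "answer": ["56", "42"],
})

# Hypothetical reward: 1.0 if the completion contains the reference answer.
# TRL passes extra dataset columns (here, "answer") as keyword arguments.
def correctness_reward(completions, answer, **kwargs):
    return [1.0 if ref in completion else 0.0
            for completion, ref in zip(completions, answer)]

# LoRA keeps the RL update cheap: only small low-rank adapter matrices are trained.
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])

trainer = GRPOTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    reward_funcs=correctness_reward,
    args=GRPOConfig(output_dir="r1-qwen-7b-grpo", num_generations=4),
    train_dataset=train_dataset,
    peft_config=peft_config,
)
trainer.train()
```

GRPO samples several completions per prompt and scores them relative to one another, so a simple programmatic reward like the one above can drive the policy update without a separate learned reward model.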

All attendees will receive free credits to get started on their own. 
