
11 tips for distilling smaller models with GPT / Webinar

Go from OpenAI to Open-Source

Date: December 14th, 10:00 am PST / 7:00 pm CET

As LLMs rapidly gain adoption, the need for smaller, more efficient models has never been greater. This shift is driven by the rapidly improving performance of open-source LLMs, combined with the high costs and closed nature of larger commercial models like GPT-4.

We’re excited to support an upcoming webinar with Predibase, where you’ll learn how to graduate from OpenAI to open-source with model distillation. Drawing on their experience distilling models at Google and Predibase, the presenters will share 11 best practices for distilling large models into smaller, more cost-effective LLMs that can be fine-tuned for your use case. Code snippets included. Must see!
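The webinar's own code snippets aren't reproduced here, but to give a flavor of what distillation involves: one classic formulation (soft-target distillation, per Hinton et al.) trains the small "student" model to match the temperature-softened output distribution of the large "teacher". Below is a minimal NumPy sketch of that loss; all function names are illustrative, not from the webinar.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    A higher temperature exposes more of the teacher's "dark knowledge"
    (relative probabilities of wrong classes). The T^2 factor keeps
    gradient magnitudes comparable across temperatures.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)))
    return float(kl * temperature ** 2)

# Toy example: the student is penalized for diverging from the teacher.
teacher = np.array([4.0, 1.0, -2.0])
aligned_student = np.array([4.0, 1.0, -2.0])
drifted_student = np.array([-2.0, 1.0, 4.0])

print(distillation_loss(aligned_student, teacher))  # ~0.0
print(distillation_loss(drifted_student, teacher))  # > 0
```

In practice this soft-target term is usually mixed with the ordinary cross-entropy loss on hard labels, and the teacher's outputs may come from API calls to a model like GPT-4 rather than local logits.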
