Recap#1 of FMOps: key concepts, techniques, and resources
Systematizing the knowledge about foundation models that are the backbone of generative AI
On August 30, 2023, we embarked on a journey that lasted seven months and, to be totally honest with you, is not even remotely close to being done. We attempted to systematize the knowledge about foundation models, the backbone of generative AI. Sorry for the pathos, but it indeed feels like crossing the waters of an unknown ocean: this whole industry is in constant and very rapid development. Systematizing such fluid material was really tough. And incredibly interesting.
We think we succeeded to an extent, and though the work of systematizing this new knowledge will continue, we want to wrap up the FMOps series with a recap. Actually, it will be two recaps (that's how rich the topic is!).
By the end of today's recap, you'll have a solid understanding of what foundation models are, how to work with them, how to choose them, and what to pay attention to. You'll come away with the key concepts, techniques, and resources needed to navigate the rapidly evolving world of foundation models. And next week, you will receive a visualized foundation model infrastructure stack with a list of the most prominent companies. We hope it will help you think more clearly about FMOps.
Basics about foundation models and how to make the right choice for your project
What foundation models are + an interview with Rishi Bommasani, one of the authors of “On the Opportunities and Risks of Foundation Models” and Society Lead at the Stanford Center for Research on Foundation Models (CRFM) (Token 1.4)
Use cases and key benefits + FMs vs. traditional models (Token 1.2)
Transformer (the king of text modality) and diffusion-based models (the queen of image modality) – the architectures explained (Token 1.6)
Going multimodal – why 2024 will become the year of multimodal models (Token 1.24)
Thinking about size: large vs. small + making big models smaller or more efficient (model compression techniques explained) (Token 1.10)
Availability: open vs. closed (Token 1.9)
Techniques for model adaptation:
The rest of this article is available to our Premium users only. It's packed with great explanations, valuable resources, useful tools, and libraries.