Turing Post
[Webinar] Cut storage and processing costs for vector embeddings
As vector database costs rise, innovative leaders such as NielsenIQ are increasingly turning to a data lakehouse approach to power their generative AI initiatives. Join us for a technical deep dive into the pivotal role of vector embeddings in AI, plus a demo of how you can generate and manage vector embeddings with the cost and scale efficiency of your lakehouse.
What You Will Learn:
- Real-World Applications: We'll cover the challenges of generating, storing, and retrieving high-dimensional embeddings, including high computational costs and scalability issues for production workloads. Kaushik Muniandi, engineering manager at NielsenIQ, will explain how he leveraged a data lakehouse to overcome these challenges for a text-based search application, and the performance improvements he measured.
- Introduction to AI Vector Embedding Generation Transformer: Discover how Onehouse addresses these challenges by automatically creating and managing vector embeddings from near-real-time data ingestion streams to lakehouse tables, without complex setup or extra tools.
- Technical Deep Dive: Get into the nitty-gritty of Onehouse stream captures and how they integrate with leading vector databases, enabling a single source of truth for AI model training, inference, and serving.
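To make the workflow above concrete, here is a minimal, self-contained sketch of the generate/store/retrieve pattern the talk covers. It is purely illustrative and not Onehouse's API: the hashing-based "embedding" is a stand-in for a real transformer model, and the in-memory list of rows stands in for a lakehouse table.

```python
# Toy sketch of an embedding workflow: generate vectors for documents,
# store them in a table-like structure, and retrieve the best match by
# cosine similarity. The character-trigram hash below is a placeholder
# for a real embedding model.
import hashlib
import math

DIM = 64  # toy vector dimensionality

def embed(text: str) -> list[float]:
    """Deterministic toy embedding: hash character trigrams into a
    fixed-size vector, then L2-normalize."""
    vec = [0.0] * DIM
    for i in range(len(text) - 2):
        gram = text[i:i + 3]
        h = int(hashlib.md5(gram.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# "Store": rows of (id, text, embedding), analogous to a table of
# documents with an embedding column.
table = [(i, t, embed(t)) for i, t in enumerate([
    "vector databases are costly at scale",
    "lakehouse tables store data cheaply",
    "transformers generate text embeddings",
])]

def search(query: str):
    """Retrieve the stored row most similar to the query."""
    q = embed(query)
    return max(table, key=lambda row: cosine(q, row[2]))

best = search("cheap storage in a lakehouse")
print(best[1])
```

In production, the embedding function would be a model call, the table would live in lakehouse storage, and retrieval would use an approximate nearest-neighbor index rather than a linear scan.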
Can't make it? Register anyway to receive the recording!
*This post was created by Onehouse. We thank the Onehouse team for their insights and ongoing support of Turing Post.