Explore how MLLMs can visually "think" step-by-step
Explore how OpenAI made their automatic speech recognition (ASR) model multilingual and multitasking
The recent Nobel Prizes in Chemistry and Physics were actually awarded for Deep Learning! Time to update your 'playing deck' of key ML concepts
We compare three fine-tuning methods – DoRA, QLoRA, and QDoRA – each designed to improve model performance and memory efficiency in different ways
Explore how open-source ideas enhance the MoE architecture
The first set of cards in a 'playing deck' of key ML concepts!
We discuss the innovative combination of VectorRAG and GraphRAG in HybridRAG, its impact on financial document analysis and other application areas, and clarify related terms for better understanding
We compare three distinct approaches, all called Chain of Knowledge, and suggest how they can be combined for better reasoning
We discuss the innovation proposed by the DeepSeek team, how it improves model performance, and dive into the models' architectures and implementation
We explore the recent Speculative RAG idea, highlighting where it excels, and discuss the limitations of other RAG systems. At the end, you'll find a list of useful resources
We review what we know about LSTM networks and explore their promising new development – xLSTM
We discuss the limitations of RAG with long-context windows and explore the intuition behind the LongRAG framework designed to address them, along with a list of resources for further learning