Token 1.7: What Are Chain-of-Verification, Chain of Density, and Self-Refine?
Your Expert Guide to Seminal Concepts in AI Chaining
Introduction
You've likely heard about Chain-of-Thought (CoT) prompting methods in Large Language Models (LLMs), which have ignited an entirely new area of study. This, in turn, has led to a myriad of "chain" spin-offs and related research in "chain reasoning." We diligently covered the multifaceted world of CoT in Token 1.5 (do check it out!).
But what about other machine learning concepts that have "chain" in their name yet hail from a different universe?
In this Token, we aim to name and clarify these other "chain" concepts to complete your vocabulary and make your comprehension crystal clear. We will:
discuss a seminal concept involving the chaining of LLMs;
introduce Chain-of-Verification (CoVe) and the concept of self-verification, both inspired by chain-of-thought;
present another prompting technique that, as opposed to CoT, is not about reasoning but entirely about summarization: Chain of Density (CoD).
Hope this adds some clarity!
Chaining LLMs
You've likely come across LangChain, the new framework for building LLM chains and related applications. Curious about the origins of this chaining concept and why it's important? Let's dive in.
October 2021: AI Chains
In Token 1.5, we explored the concept of chain-of-thought prompting, designed to elicit complex reasoning from LLMs. While revolutionary, it wasn't without precedent. A pivotal 2021 paper titled "AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts" made a seminal contribution to this arena. The paper's authors demonstrated that "Chaining," a methodology in which the output of one LLM step becomes the input of the next, significantly enhances user interaction in several ways.
By employing Chaining, the research suggests that LLMs' limitations can be mitigated, thereby improving the models' utility and reliability. Two case studies further explored how the method could be applied in real-world scenarios, indicating a promising avenue for future LLM applications.
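To make the idea concrete, here is a minimal sketch of what such a chain can look like in code. The task decomposition and prompt wording are purely illustrative (not taken from the paper), and `call_llm` is a placeholder for whichever model API you happen to use:

```python
# A minimal sketch of prompt chaining, in the spirit of the "AI Chains" paper.
# `call_llm` is a placeholder for your LLM provider; the task decomposition and
# prompts below are illustrative, not taken from the paper itself.

def call_llm(prompt: str) -> str:
    """Send a single prompt to an LLM and return its text completion."""
    raise NotImplementedError("Wire this up to your model provider of choice.")

def review_response_chain(review: str) -> str:
    """Chain three narrow prompts instead of one monolithic request:
    extract problems -> propose fixes -> compose a friendly reply."""
    # Step 1: extract the concrete complaints from a user review.
    problems = call_llm(
        f"List the specific problems mentioned in this review:\n{review}"
    )
    # Step 2: the output of step 1 becomes the input of step 2.
    suggestions = call_llm(
        f"For each problem below, suggest one concrete fix:\n{problems}"
    )
    # Step 3: the final step sees only the intermediate results it needs.
    return call_llm(
        "Write a short, polite response to the customer that acknowledges the "
        f"problems and mentions these fixes:\n{suggestions}"
    )
```

The point of the chain is that each step is narrow, inspectable, and editable on its own, which is exactly the transparency and controllability the paper argues for.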
March 2022: PromptChainer: Chaining Through Visual Programming
This paper follows up on the work we've just discussed. While the previous paper defined the concept of LLM Chains, the process of authoring these chains remained a challenge. It involved not just crafting individual LLM prompts but also a nuanced understanding of how to decompose the overarching task.
We've explained the chaining mechanism behind frameworks such as LangChain. We will cover this and other frameworks in more detail in a separate Token.
Now, on to:
Fighting Hallucinations with Self-Refine and Self-Verification on Chain
LLMs, trained on vast text datasets, excel at tasks like closed-book QA as their parameter count scales up. However, they stumble over obscure or "tail distribution" facts, frequently generating convincing yet erroneous answers: what we call "hallucinations." Inspired by CoT, researchers came up with a few interesting ideas for dealing with this.
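For a rough picture of what self-verification looks like in practice, here is a minimal sketch of the Chain-of-Verification loop. The four stages (draft an answer, plan verification questions, answer them independently of the draft, then revise) follow the paper's high-level recipe; the prompt wording and the `call_llm` helper are, again, our own illustrative placeholders:

```python
# A minimal sketch of the Chain-of-Verification (CoVe) loop: draft, plan
# verification questions, check them independently, then revise. The prompts
# are illustrative and `call_llm` is a placeholder for your model API.

def call_llm(prompt: str) -> str:
    """Send a single prompt to an LLM and return its text completion."""
    raise NotImplementedError("Wire this up to your model provider of choice.")

def chain_of_verification(question: str) -> str:
    # 1. Generate a baseline (possibly hallucinated) draft answer.
    draft = call_llm(f"Answer the question:\n{question}")

    # 2. Plan verification questions that probe the draft's factual claims.
    plan = call_llm(
        "List short fact-checking questions, one per line, that would verify "
        f"the claims in this answer:\n{draft}"
    )

    # 3. Execute verification: answer each question without showing the draft,
    #    so the model cannot simply repeat its own mistakes.
    checks = [
        f"Q: {q}\nA: {call_llm(q)}"
        for q in plan.splitlines() if q.strip()
    ]

    # 4. Generate the final, verified answer using the check results.
    return call_llm(
        f"Original question:\n{question}\n\nDraft answer:\n{draft}\n\n"
        "Verification results:\n" + "\n".join(checks) +
        "\n\nRewrite the answer, correcting anything the verifications contradict."
    )
```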