šŸŒ#75: What is Metacognitive AI

We discuss questions of cognition, consciousness, and, eventually, treating AI as something possessing moral status, plus the usual collection of interesting articles, relevant news, and research papers. Dive in!

This Week in Turing Post:

  • Wednesday, AI 101, Technique: Mixture of Depths

  • Friday, AI Unicorns: Perplexity (we apologize for the delay with this article – the common cold has hit us hard)

If you like Turing Post, consider clicking on the HubSpot ad below or sharing this digest with a friend. It helps us keep Monday digests free →

The main topic – the next level of anthropomorphizing AI

On one side, there are heated discussions over OpenAI's scaling challenges and reports that the latest GPT models may be underperforming; on the other, Sam Altman claims AGI is near, possibly coming in 2025. Against that backdrop, last week's papers on AI metacognition and welfare are a reminder that AI development is not just about speed and power but also about taking a thoughtful, measured approach. In The Centrality of AI Metacognition, the authors (a very impressive list of authors!) point out a key shortfall: while AI systems are getting better at specific tasks, they lack the ability to recognize their own limits and adapt accordingly. This self-monitoring, or metacognition, is what allows humans to assess when they might be venturing into the unknown or making assumptions that need a second look. For AI, a similar capacity could mean the difference between reliably handling new scenarios and running into errors when faced with something outside its training data.

Metacognition in AI is a stabilizer. If an AI can understand when it doesn't have enough context or when it needs to adapt its approach, it becomes a more reliable tool in unpredictable situations. Building these capacities might seem less urgent than achieving top-notch performance on specific tasks, but the long-term benefits of a more resilient, adaptable system are hard to ignore. Metacognitive AI is one of the next important research directions.

On a different note, Taking AI Welfare Seriously suggests a broader question: Could we reach a point where we need to consider the welfare of AI itself? This isn't to say AI will need protection anytime soon, but as systems grow more autonomous, we might eventually face ethical questions about how they're treated or deployed. The paper encourages us to think proactively about this, suggesting that establishing basic ethical guidelines now could prevent dilemmas later.

Both papers, in their own way, highlight that AI development isn't just about building systems that are faster or smarter – it's about building systems that can operate responsibly in the world we're creating. Metacognition and ethical awareness may not be the most immediate priorities (or maybe they are!) but they represent a more cautious and reflective path forward. These are small steps toward creating AI that isn't just capable but also thoughtful in how it approaches challenges and potential risks.

The tricky part here is that we might not know what metacognition is for machines. We might need to abandon human-centric thinking and be open to new ways of understanding intelligence. Rather than modeling metacognition as a human trait, we may need to explore forms of self-assessment uniquely suited to machines. This could mean designing AI that develops its own kind of introspection – perhaps by continuously evaluating the reliability of its outputs or adjusting its approach based on feedback loops that don't rely on human-like awareness. As we inch closer to advanced AGI claims, perhaps what's truly on the horizon is not just intelligence (which we still need to define!) but a form of machine introspection that transforms how AI systems learn, interact, and evolve.
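One concrete (and deliberately toy) way to picture such a machine-native feedback loop: a system samples several answers to the same question, measures how much they agree, and abstains when agreement is low. Everything below is our own illustration under assumed names, not code from either paper.

```python
import random
from collections import Counter

# Stand-in for a real model: consistent on familiar questions,
# guessing on unfamiliar ones. Purely illustrative.
KNOWN = {"capital of France?": "Paris"}

def model_answer(question, rng):
    if question in KNOWN:
        return KNOWN[question]
    return rng.choice(["A", "B", "C", "D"])  # out-of-distribution guess

def answer_with_self_check(question, n_samples=10, threshold=0.8, seed=0):
    """Sample several answers and commit only if they agree enough.

    Returns (answer, confidence); answer is None when the system abstains,
    i.e. when it 'knows that it doesn't know'.
    """
    rng = random.Random(seed)
    samples = [model_answer(question, rng) for _ in range(n_samples)]
    best, count = Counter(samples).most_common(1)[0]
    confidence = count / n_samples
    return (best, confidence) if confidence >= threshold else (None, confidence)

print(answer_with_self_check("capital of France?"))  # ('Paris', 1.0)
```

On a familiar question the samples agree and the system answers; on an unfamiliar one the scattered guesses push confidence below the threshold and it abstains – a crude, non-human form of "knowing its limits".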

Twitter library

Weekly recommendation from an AI practitioner 👍🏼


Top Research

  • Mixture-of-Transformers (MoT): A Sparse and Scalable Architecture for Multi-Modal Foundation Models proposed by researchers from Meta and Stanford. The MoT architecture is important because it addresses the high computational costs and inefficiencies involved in training large, multi-modal models. Traditional dense models process multiple data types (text, images, speech) in a unified way, which demands significant resources, limits scalability, and complicates training. MoT's approach introduces sparsity by activating only relevant model components per modality, reducing FLOPs and computational load while maintaining model performance → read the paper
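A rough intuition for this modality-based sparsity (our own toy sketch, not the paper's code): give each modality its own feed-forward weights and route every token through only its modality's weights, so per-token compute stays at a fraction of the total parameter count.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden size (toy)

# One feed-forward matrix per modality; parameters are modality-specific.
ffn = {m: rng.standard_normal((d, d)) / np.sqrt(d)
       for m in ("text", "image", "speech")}

def mot_ffn(tokens, modalities):
    """Route each token through its own modality's weights only.

    Each token touches one of the three weight sets, so per-token FLOPs
    match a dense model a third of the size, while total capacity is 3x.
    """
    out = np.empty_like(tokens)
    for i, m in enumerate(modalities):
        out[i] = ffn[m] @ tokens[i]  # sparse activation by modality
    return out

tokens = rng.standard_normal((4, d))
mods = ["text", "image", "text", "speech"]
y = mot_ffn(tokens, mods)
```

In the actual architecture the attention over the interleaved sequence remains shared across modalities; only the non-attention parameters are decoupled like this.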

  • Agent K v1.0: Large Language Models Orchestrating Structured Reasoning Achieve Kaggle Grandmaster Level introduced by researchers from Huawei Noah's Ark Lab and UCL. Agent K v1.0 is an autonomous data science agent that manages the entire data science lifecycle by learning from experience. It matters because it automates complex data science tasks, achieving expert-level performance on Kaggle, which shows that LLMs can autonomously handle workflows that typically require skilled human data scientists. This scalability enhances productivity and serves as a benchmark for using AI in high-level problem-solving, demonstrating AI's potential to learn, adapt, and improve with experience → read the paper

  • Decoding Dark Matter: Specialized Sparse Autoencoders (SSAEs) for Interpreting Rare Concepts in Foundation Models introduced by researchers from Carnegie Mellon. This research matters because it improves our ability to interpret foundation models (FMs) by capturing rare, domain-specific features that are usually overlooked. These "dark matter" concepts are important for AI safety and fairness, as they can include subtle biases or unintentional behaviors that may otherwise go unnoticed. SSAEs help isolate and control these features, which could lead to fairer models, safer use in specific fields like healthcare, and a clearer understanding of how FMs function → read the paper
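For intuition, here is a sparse autoencoder in its simplest form – a generic sketch of the underlying SAE idea, not the paper's SSAE code: encode model activations into an overcomplete feature space with an L1 penalty, so each activation is explained by a handful of features that can then be inspected or steered.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 64  # activation dim, dictionary size (overcomplete: k > d)

# Randomly initialized weights; a real SAE trains these on model activations.
W_enc = rng.standard_normal((k, d)) * 0.1
W_dec = rng.standard_normal((d, k)) * 0.1
b_enc = np.zeros(k)

def sae(x):
    """Encode an activation vector into non-negative features, then reconstruct."""
    f = np.maximum(0.0, W_enc @ x + b_enc)  # ReLU features; L1 drives most to 0
    return W_dec @ f, f

def sae_loss(x, l1=0.01):
    """Training objective: reconstruction error plus L1 sparsity penalty."""
    x_hat, f = sae(x)
    return float(np.sum((x - x_hat) ** 2) + l1 * np.sum(np.abs(f)))

x = rng.standard_normal(d)
x_hat, features = sae(x)
```

The "specialized" part of SSAEs is in the data, not the architecture: training on targeted subdomain activations so the dictionary spends its capacity on rare concepts rather than the common ones a generic SAE captures.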

  • Artificial Intelligence, Scientific Discovery, and Product Innovation by Aidan Toner-Rodgers. The key findings reveal that AI-assisted scientists discovered 44% more materials, which led to a 39% increase in patent filings and a 17% rise in downstream product innovation. These discoveries also resulted in novel compounds and radical innovations, with significant effects among high-ability scientists, whose output nearly doubled. However, lower-ability researchers saw little benefit, widening productivity disparities → read the paper

You can find the rest of the curated research at the end of the newsletter.

We are reading

News from The Usual Suspects ©

  • Microsoft

    • Microsoft's Magentic-One introduces a coordinated team of AI agents like WebSurfer and FileSurfer, handling complex web and file workflows with a safety-first approach → their GitHub

  • Microsoft and OpenAI

    • Medprompt by Microsoft and OpenAI enhances diagnostic accuracy with chain-of-thought reasoning, elevating medical model performance without traditional prompt tuning → read the paper

  • OpenAI

    • Facing slower improvements, OpenAI shifts Orion training to synthetic data, indicating a potential slowing in the industry's AGI ambitions → The Information

    • Meanwhile, Sam Altman says AGI arrives in 2025 🚂 → on YouTube

    • Good news for OpenAI: a court dismissed claims of copyright misuse against it in a lawsuit, marking a pivotal moment for copyright in generative AI and setting precedents for future disputes → Reuters

    • OpenAI's "Predicted Outputs" feature reduces GPT-4o latency, allowing for quicker responses in fast-paced applications and an overall smoother experience → read their blog

  • Defense Llama: Scale AI's National Security Specialist

    • Scale AI's Defense Llama, a secure Llama 3 variant, supports U.S. defense operations, with capabilities for mission planning and intelligence analysis in high-security settings → read their blog

  • The Department of Defense shows growing interest

    • Jericho Security wins the Pentagon's first AI contract, using adaptive simulations to combat phishing and deepfake threats – an AI milestone in national defense → VentureBeat

  • Mistral API Adds Precision to Content Moderation

    • Mistral's Ministral 8B model brings nuanced content moderation, covering nine sensitive categories and diverse languages for a global audience → check their blog

  • NVIDIA

    • NVIDIA expands NeMo with NeMo Curator and Cosmos tokenizers, boosting generative AI development across video, image, and text. Faster data processing and high-quality tokenization mean efficient, high-fidelity visuals for industries like robotics and automotive. Cosmos tokenizers' 12x speed gain sets a new standard → read their blog

More interesting research papers from last week (categorized for your convenience)

Language Model Alignment & Optimization

Efficient Model Compression & Quantization

Multimodal Processing & Vision-Language Models

Adaptive & Dynamic Action Models

Data Efficiency & Retrieval-Optimized Systems

Surveys & Foundational Studies

Transformer Innovations & Architectural Optimization

Leave a review!


Please send this newsletter to your colleagues if it can help them enhance their understanding of AI and stay ahead of the curve. You will get a 1-month subscription!
