If Turing Post is part of your weekly routine, please share it with one smart friend. It’s the simplest way to keep the Monday digests free.

This Week in Turing Post:

  • Wednesday / AI 101 series: What Is a Token (and why it runs AI)?

  • Friday / Interview: an amazing conversation on AI literacy and rethinking education with Neeru Khosla, co-founder of the CK-12 Foundation

📆 From our partners: How to govern multi-agent systems at scale?

The hardest part of multi-agent systems isn't building them – it's governing them at scale. Join Galileo co-founder Yash Sheth and CrewAI founder Joao Moura for a live session on April 21st on running multi-agent systems safely, covering behavior, cost, and compliance across first- and third-party agents.

You'll learn how to:

  • Enforce safety and security policies in agents

  • Steer agents to the best models and fallback tools at runtime to improve accuracy and control token costs

  • Govern all your agents, whether CrewAI, internal, or third-party, with one centralized set of policies

  • Include non-technical stakeholders (such as risk and compliance) in writing or maintaining policies – no coding required

To the main topic →

GPT Meets GPT: The Economics Nobody's Talking About

Economists have been studying GPTs for decades. They just meant a different kind. In economics, a General Purpose Technology is one that restructures entire economies rather than single industries – steam, electricity, computing. Carlota Perez mapped their lifecycle into a pattern: a turbulent installation phase full of speculation and uneven adoption, followed by a deployment phase where the gains actually compound. The interesting part is the gap between the two. That's where we are right now – except this time the General Purpose Technology shares its acronym with the thing that's causing all the trouble (GPT in AI – generative pre-trained transformer).

This week offered three dispatches from that gap, and together they paint a picture worth our attention.

On April 6, OpenAI published a 13-page policy blueprint called Industrial Policy for the Intelligence Age: Ideas to Keep People First. It proposes public wealth funds, portable benefits, worker participation in AI deployment, and a rethinking of payroll-based tax systems for an economy where wages may no longer be the primary source of national income. OpenAI frames these as "ambitious, but intentionally early and exploratory." Whatever you think of the messenger, the underlying logic tracks directly to a real problem in economics known as Baumol's cost disease: sectors where productivity is hard to improve – healthcare, education, government – see costs rise relentlessly relative to everything else. AI potentially cures Baumol's disease by making intelligence-intensive services scalable for the first time. And that changes the fiscal math of entire nations.
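
Baumol's mechanism is easy to see in a toy model. Below is a minimal sketch (my illustration, not anything from OpenAI's blueprint): two sectors hire from the same labor pool, so wages track the sector where productivity grows, and the stagnant sector's unit costs climb even though nothing about the service itself has changed.

```python
# Toy two-sector illustration of Baumol's cost disease (my sketch, not
# OpenAI's math). Both sectors hire from one labor market, so wages track
# the productive sector, and the stagnant sector's unit costs rise even
# though nothing about the service itself has changed.

YEARS = 30
manuf_productivity = 1.0    # output per worker-hour, grows 3%/year
service_productivity = 1.0  # healthcare, education: stays flat
wage = 1.0                  # one labor market, one wage

for year in range(YEARS + 1):
    manuf_cost = wage / manuf_productivity      # unit cost = wage / productivity
    service_cost = wage / service_productivity
    if year % 10 == 0:
        print(f"year {year:2d}: manufacturing {manuf_cost:.2f}, services {service_cost:.2f}")
    manuf_productivity *= 1.03   # the productive sector improves...
    wage *= 1.03                 # ...and drags wages up economy-wide
    # service_productivity stays flat: that's the disease
```

Service costs climb about 3% a year purely because wages are set economy-wide; the blueprint's wager is that AI finally moves the service-sector productivity line, which is what "curing" Baumol's disease would mean.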

Ensuring that AI expands access, agency, and opportunity is a central challenge as we move towards superintelligence. We should aim for a future where superintelligence benefits everyone.

OpenAI

They are not alone in trying to grasp our future economics in a more tangible way. Today, Workshop Labs announced it's joining Thinking Machines, Mira Murati's lab. Workshop Labs grew out of The Intelligence Curse, an essay series arguing that when AI replaces labor as the dominant factor of production, powerful actors – states, corporations – lose their structural incentive to invest in people. Its founders write that they "started by asking what happens if AI takes everyone's job" and "didn't like that answer." In Solow growth model terms, they're arguing that the elasticity of substitution between capital and labor is approaching infinity – and that this breaks the social contract, not just the job market. Their proposed solution: AI systems aligned to individual users, decentralizing ownership rather than concentrating it. That Murati's lab absorbed this team tells you where at least one frontier lab thinks the real problem lives.
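
The essays make this argument in prose, but it maps cleanly onto textbook notation. A sketch of the formal claim (my gloss, standard CES production theory, not anything from the series):

```latex
% Standard CES production function (illustrative gloss, not from the essays):
% output Y from capital K and labor L, with technology level A.
Y = A\left(\alpha K^{\rho} + (1-\alpha)L^{\rho}\right)^{1/\rho},
\qquad \sigma = \frac{1}{1-\rho}
% As the elasticity of substitution \sigma \to \infty (\rho \to 1),
% this collapses to the linear form
Y = A\left(\alpha K + (1-\alpha)L\right)
% Capital and labor become perfect substitutes. The marginal product of
% labor is then the constant A(1-\alpha): wages are pinned to the price
% of capital (compute), not to human scarcity, and the structural
% incentive to invest in people disappears.
```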

There's a path for AI to make humans matter more.

Mira Murati

Meanwhile, the Stanford HAI 2026 AI Index reported that generative AI reached 53% population adoption within three years – faster than the PC or the internet. The estimated value of these tools to US consumers reached $172 billion annually by early 2026, with the median value per user tripling in a single year. Paul Romer won the Nobel Prize for showing that ideas are non-rival goods – one person using an idea doesn't stop another from using it. That was elegant theory in 2018. In 2026, it's an observable market condition with a $172 billion price tag.

But Perez's framework predicts something else about the installation phase: adoption is wildly uneven. And it is – sometimes in very unexpected places. Steve Yegge shared a conversation with a long-time Google tech director revealing that Google's internal AI adoption looks roughly like any average enterprise: 20% power users, 60% on chat tools, 20% refusers. The Googler attributes this to an 18-month hiring freeze: no one moves between companies, so no one arrives from the outside to calibrate how far behind Google is. Yegge calls it "the Great Siloing."

So here's the picture. The original GPT economists warned us: General Purpose Technologies don't distribute their gains evenly or on schedule. The new GPTs are confirming the theory in real time. We are nowhere close to understanding the economic reality that awaits around the corner. But at least the abundance frameworks are now being drafted.

This is both exciting and deeply concerning, because we're entering genuinely uncharted territory. Throughout history, only a thin stratum of the population ever had to figure out how to live in abundance. The aristocracy was, in its own imperfect way, an experiment in what humans do when freed from the necessity of labor – and the results were a mixed bag of patronage, philosophy, and spectacular dysfunction. But we have never been in a position where abundance could plausibly be spread across an entire population. There is no playbook for this. We don't even have the right mindset for it yet – centuries of equating human value with economic output don't dissolve overnight. (I wrote about this before – the concept of an "aristocracy of all" that AI could make possible, if we choose to build for it.)

The frameworks are being written. I’m not sure any of us are ready (yet) for what they describe. We are in the messy middle of installation.

→ If any of those thoughts resonate with you – share them across your social networks. Let’s keep the conversation going.

Topic 2: Meta AI and KAUST just proposed a new kind of machine: a Neural Computer – where the neural network doesn't use a computer, it IS the computer. Computation, memory, and I/O all collapse into one learned runtime. Sounds revolutionary. But is it? And will it even work? Let's discuss.
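
We'll dig into the paper properly, but the concept is easy to caricature. A toy sketch of the idea (emphatically not the Meta AI/KAUST architecture, just the framing): a learned step function plays the CPU cycle, the hidden state plays RAM, and the same tensors carry I/O.

```python
import torch
import torch.nn as nn

# Toy caricature of the "neural computer" idea (NOT the Meta AI/KAUST
# architecture): a learned step function plays the role of a CPU cycle,
# the hidden state plays the role of RAM, and the same tensors carry I/O.
# There is no separate program; the weights are the machine.

class ToyNeuralMachine(nn.Module):
    def __init__(self, io_dim: int = 16, mem_dim: int = 128):
        super().__init__()
        self.step = nn.GRUCell(io_dim, mem_dim)    # learned "execution step"
        self.readout = nn.Linear(mem_dim, io_dim)  # learned "output bus"

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (time, batch, io_dim); memory persists across steps
        # instead of living in a store the model merely queries.
        memory = torch.zeros(inputs.size(1), self.step.hidden_size)
        outputs = []
        for x in inputs:                 # each iteration = one "clock tick"
            memory = self.step(x, memory)
            outputs.append(self.readout(memory))
        return torch.stack(outputs)

machine = ToyNeuralMachine()
trace = machine(torch.randn(10, 2, 16))  # run the "program" for 10 ticks
```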

Follow us on 🎥 YouTube Twitter Hugging Face 🤗


We are reading/watching/learning:

News from the usual suspects ™ (rivals rivals rivals)

  • AI Giants Call a Truce (Sort Of)
    OpenAI, Anthropic, and Google – usually sparring in public – are sharing intel to stop rivals from copying their models on the cheap. The culprit: "adversarial distillation," a clever shortcut that risks gutting profits and safety guardrails alike. When competitors start learning from your answers instead of your code, even rivals become allies. (A minimal sketch of the distillation mechanics follows this news list.)

  • Anthropic’s Glasswing Gambit
    Anthropic unveils Project Glasswing, rallying tech heavyweights from AWS to Microsoft and Google (but not OpenAI) in a race against their own creation. Its Mythos model can uncover – and exploit – software flaws better than most humans, already exposing thousands of critical vulnerabilities →check their write-up and our video about it

  • Meanwhile, OpenAI:

    • Codex Finds Its Way into Claude’s Workflow
      OpenAI has released a plugin that lets developers call Codex directly from within Anthropic's Claude Code environment – an unexpected bit of interoperability in a competitive landscape. The tool handles reviews, debugging, and delegated tasks without leaving the workflow. In a world of walled gardens, this feels more like a side door – practical, if not entirely philosophical.

    • sharpens its enterprise pitch
      An internal memo from OpenAI’s chief revenue officer, Denise Dresser, lays out a familiar ambition with renewed urgency: keep users inside the ecosystem and deepen ties with enterprise clients. The note stresses that a broader product suite makes OpenAI harder to displace, while also acknowledging fiercer competition – especially from Anthropic. In AI, being the best this week is charming, but being hard to replace is business.

  • Google Doubles Down on Custom Silicon
    Broadcom and Google have inked a long-term deal to build the next generations of AI chips through 2031, signaling Google's intent to rely less on Nvidia's costly GPUs. Meanwhile, Anthropic secures access to massive compute capacity tied to these chips. The takeaway: in AI, owning the silicon – or at least securing it early – matters as much as the models themselves.
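
For readers wondering what "learning from your answers" means mechanically, here's a minimal sketch of generic distillation (an illustration of the concept, not any lab's actual attack; `student` and `teacher_logits` are hypothetical stand-ins):

```python
import torch
import torch.nn.functional as F

# Minimal sketch of distillation mechanics (an illustration of the concept,
# not any lab's actual attack). The student never touches the teacher's
# weights or code; it only sees the teacher's output distributions, e.g.
# harvested from a deployed model's API, and learns to match them.

def distillation_step(student, optimizer, prompts, teacher_logits, T=2.0):
    """prompts: (batch, seq) token ids; teacher_logits: (batch, vocab)."""
    student_logits = student(prompts)                 # (batch, vocab)
    loss = F.kl_div(                                  # match the teacher's
        F.log_softmax(student_logits / T, dim=-1),    # softened answers
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T                                         # standard temperature scaling
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```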

🔦 Models and Agents Highlight

  • OpenClaw 2026.3.28 – new provider support, approval hooks, richer channel bindings, image tooling →check their GitHub

  • MiniMax M2.7 – self-evolving agent model, open-sourced. Improves behavior from experience, not static fine-tuning. Competitive with top closed models on agentic benchmarks →read their announcement

  • LFM2.5-VL-450M – Liquid AI. 450M parameter vision-language model. Multilingual support, bounding box prediction, sub-250ms edge inference. Step toward practical on-device multimodal AI →read their blog

  • GLM-5.1 – topped open-source coding benchmarks. Zhipu AI closing the gap with Western alternatives →read their blog

  • A1 – fully transparent open-source vision-language-action model for robotics. Uses adaptive truncated inference to cut latency and compute while keeping strong manipulation performance, with the full training and evaluation stack released for reproducibility →read the paper

Research

Trends we see:

  • Test-time adaptation is becoming a first-class paradigm
    Not just fine-tuning, but models that keep updating while running (see the sketch after this list).

  • The system boundary is moving inward
    Neural Computers, world models, agent environments all point in the same direction: the model is no longer just reasoning about a system – it is the system.
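
To make the first trend concrete, here is a minimal sketch of the test-time adaptation pattern (a generic illustration, not the recipe from any one paper below; it assumes `model` is a causal LM that maps token ids to logits):

```python
import torch
import torch.nn.functional as F

# Minimal sketch of the test-time adaptation pattern (a generic illustration,
# not the recipe from any one paper below). As a sequence streams in, the
# model takes small gradient steps on the next-token loss over tokens it has
# already seen, so its weights drift toward the current input's distribution.

def adapt_then_predict(model, token_ids, lr=1e-4, steps=1):
    """token_ids: (1, seq) LongTensor; model maps token ids to logits."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(steps):
        logits = model(token_ids[:, :-1])             # predict each next token
        loss = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),      # (seq-1, vocab)
            token_ids[:, 1:].reshape(-1),             # targets: the input itself
        )
        opt.zero_grad()
        loss.backward()
        opt.step()                                    # weights updated at inference time
    model.eval()
    with torch.no_grad():
        return model(token_ids)[:, -1].argmax(-1)     # next-token prediction
```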

Foundations, Interpretability, and Limits

  • The Illusion of Stochasticity in LLMs
    Shows that language models fail to produce true stochastic samples despite representing probability distributions →read the paper

  • Learning is Forgetting: LLM Training as Lossy Compression
    Frames model training as information compression and links representation efficiency to downstream performance →read the paper

World Models, Simulation, and Embodied Representations

  • A Frame is Worth One Token: Efficient Generative World Modeling with Delta Tokens
    Compresses video dynamics into delta tokens to enable efficient generative world modeling with diverse futures →read the paper

  • INSPATIO-WORLD: A Real-Time 4D World Simulator via Spatiotemporal Autoregressive Modeling
    Generates consistent and interactive 4D environments from video using spatiotemporal autoregressive modeling →read the paper

  • Neural Computers
    Unifies computation, memory, and execution within a learned model to move toward fully neural computing systems →read the paper

Reasoning, Reinforcement Learning, and Failure Modes

  • Vero: An Open RL Recipe for General Visual Reasoning
    Scales reinforcement learning across diverse visual tasks to build broadly capable multimodal reasoning systems →read the paper

  • RAGEN-2: Reasoning Collapse in Agentic RL
    Diagnoses reasoning failures by measuring cross-input distinguishability and improving training with signal-aware filtering →read the paper

Compute, Memory, and System Efficiency

  • TriAttention: Efficient Long Reasoning with Trigonometric KV Compression
    Compresses KV cache using positional structure to preserve reasoning quality while reducing memory and increasing throughput →read the paper

  • MegaTrain: Full Precision Training of 100B+ Parameter Large Language Models on a Single GPU
    Streams parameters between CPU and GPU to enable training extremely large models on minimal hardware →read the paper
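
The MegaTrain blurb is worth unpacking: the trick is that the full parameter set never has to sit in GPU memory at once. A minimal sketch of the offloading pattern (a generic illustration, not MegaTrain's implementation; assumes a CUDA device is available):

```python
import torch
import torch.nn as nn

# Generic sketch of the CPU<->GPU parameter-streaming pattern (an
# illustration, not MegaTrain's implementation). Parameters live in
# CPU RAM; only the layer currently executing occupies GPU memory.

def streamed_forward(layers: nn.ModuleList, x: torch.Tensor) -> torch.Tensor:
    x = x.cuda()
    for layer in layers:              # all layers start resident on CPU
        layer.to("cuda")              # stream this layer's weights in
        x = layer(x)
        layer.to("cpu")               # stream them back out, freeing GPU memory
    return x

# Usage: an 8-layer MLP whose full parameter set never sits on the GPU at once.
layers = nn.ModuleList([nn.Linear(4096, 4096) for _ in range(8)])
out = streamed_forward(layers, torch.randn(2, 4096))
```

Real systems overlap the transfers with compute and stream gradients and optimizer state the same way; this shows only the forward-pass memory pattern.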

Learning Dynamics, Adaptation, and Test-Time Learning

  • In-Place Test-Time Training
    Adapts model weights during inference by updating fast weights aligned with next-token prediction objectives →read the paper

  • Fast Spatial Memory with Elastic Test-Time Training
    Stabilizes test-time learning with elastic constraints to balance adaptation and memory retention in long sequences →read the paper

Agents, Environments, and Real-World Interaction

  • Gym-Anything: Turn any Software into an Agent Environment
    Transforms arbitrary software into interactive environments to scale training of long-horizon computer-use agents →read the paper

  • SkillClaw: Let Skills Evolve Collectively with Agentic Evolver
    Evolves reusable agent skills by aggregating multi-user interaction trajectories into shared improvements →read the paper

Multi-Agent Systems and AI Workflows

  • PaperOrchestra: A Multi-Agent Framework for Automated AI Research Paper Writing
    Coordinates specialized agents to synthesize research materials into structured, publication-ready papers →read the paper

That’s all for today. Thank you for reading! Please send this newsletter to colleagues if it can help them enhance their understanding of AI and stay ahead of the curve.
