
🦸🏻#15: Humans as Tools? The Surprising Evolution of HITL in Agentic Workflows

how human-AI co-agency is shaping the future of intelligent systems

“But lo! men have become the tools of their tools.”

Walden by Henry David Thoreau

In our ongoing series exploring agentic workflows, we’ve covered reasoning, memory, reflection, action execution, tool integration, and practical topics such as MCP (still a trending article on Hugging Face). Today, we turn to something equally foundational: how humans participate in these workflows – human-AI co-agency.

Thoreau wrote the phrase “men have become the tools of their tools” between 1845 and 1847, about 19th-century technologies like the railroad, the telegraph, and farming equipment. These tools were meant to serve us, but he saw people reorganizing their lives around them. The fear wasn’t of new technology itself – it was of losing agency to the systems we create. His quote is a reminder that such fears are nothing new; they have echoed through every wave of new technology. What’s different now is that, for the first time in history, our tools can meaningfully reply and make decisions of their own. Which is why we need to think about co-agency – how we live, work, and decide with them.

There are two sides to this. Both are absolutely fascinating and have deep histories:

  • First, co-agency as something practical and structural: human in the loop (HITL). Sometimes it becomes “human as a tool” in the context of tool calling. Yes, you heard that right – from ultimate decision maker to just another callable function in an agent’s toolbox. Not always, not in every industry – but this setup is becoming more common (see the sketch after this list). That’s what we’ll dig into today.

  • Second, co-agency as something experiential and conversational – how we communicate with agentic workflows, and how new interfaces are evolving to support that (with due homage to Vannevar Bush and Douglas Engelbart). We’ll cover all that in the next episode.
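
To make the “human as a tool” idea concrete before we get there, here is a minimal, framework-agnostic sketch in Python. Everything in it (the tool registry, ask_human, run_tool) is a hypothetical name of ours, not taken from any particular library – the point is simply that, to the agent, the human is registered and called exactly like any other tool.

```python
# Minimal sketch of "human as a tool": the human sits in the agent's tool
# registry next to ordinary automated tools. All names here are hypothetical.

def search_docs(query: str) -> str:
    """Stand-in for an ordinary automated tool."""
    return f"(stub) top result for: {query}"

def ask_human(question: str) -> str:
    """The 'human tool': the agent calls it when it needs judgment or missing
    context, and execution blocks until a person answers."""
    return input(f"[agent needs input] {question}\n> ")

# To the agent's planner, both entries look the same: callable tools.
TOOLS = {
    "search_docs": search_docs,
    "ask_human": ask_human,
}

def run_tool(name: str, argument: str) -> str:
    """Dispatch a tool call by name, whether it is automated or human."""
    return TOOLS[name](argument)

if __name__ == "__main__":
    print(run_tool("search_docs", "MCP spec"))
    print(run_tool("ask_human", "Which customer segment should I prioritize?"))
```

In a real deployment the “human tool” would route through Slack, email, or a review queue rather than input(), but the structural point stands: the human has become one callable among many.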

Are you ready to unpack HITL and see where the human sits in the AI loop? Let’s go.

What’s in today’s episode?

  • What is HITL in Agentic Workflows? And why we cover it after MCP

  • Key Milestones in HITL Evolution

    • HITL 1.0: Humans as Gatekeepers

    • HITL 2.0: The Crowd in the Loop

    • HITL 3.0: From Labels to Feedback to Preferences

    • HITL 4.0: The Human as a Tool. Wait, what?!

    • HITL 5.0: Co-Agency

  • How HITL Shapes the Behavior of Modern AI Agents (two research papers)

  • Where Is HITL Going?

  • Concluding Thoughts

  • Resources to dive deeper

What is HITL in Agentic Workflows? (and why we cover it after MCP)

In the last two episodes, we showed how agents act (via UI and API tools) and how those actions are now structured, thanks to MCP. Autonomy is cheap and action is easy – which means orchestration is now a human problem.

Human-in-the-Loop (HITL) is the safety net that makes agentic AI systems usable in the real world. As AI agents take on more autonomous, multi-step tasks, they also run into familiar issues: hallucinations, shaky reasoning, and unpredictable decisions. HITL is the antidote.

It’s a design pattern (not a quick fix) where humans are built into the decision loop to validate outputs, steer actions, or override the machine when necessary. Think of a chatbot that pauses to ask for clarification (instead of making things up), a workflow where the AI waits for a human sign-off before pulling the trigger, or a self-driving car that navigates autonomously but allows human override in complex or unexpected scenarios. As a regular user of a self-driving car, I don’t want to be out of the loop – but it’s also quite annoying when the car keeps asking me to take control every few minutes, interrupting what should be a smooth ride. So HITL is also very much about balance.
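
To make the sign-off variant concrete, here is a minimal sketch of an approval gate inside an agent loop. The function names and the RISKY_ACTIONS policy are illustrative assumptions, not any specific framework’s API; a production system would surface the approval through a UI or messaging channel instead of input().

```python
# Minimal sketch of a human sign-off gate: the agent proposes an action,
# and risky actions pause for explicit human approval before execution.
# All names (propose_action, requires_approval, execute) are illustrative.

RISKY_ACTIONS = {"send_email", "delete_records", "issue_refund"}

def propose_action(task: str) -> dict:
    """Stand-in for the agent's planner: returns the next action it wants."""
    return {"name": "issue_refund", "args": {"order_id": "A-123", "amount": 40.0}}

def requires_approval(action: dict) -> bool:
    """Policy deciding which actions must pause for a human."""
    return action["name"] in RISKY_ACTIONS

def execute(action: dict) -> str:
    """Stand-in for the actual side effect (API call, email, refund, ...)."""
    return f"(stub) executed {action['name']} with {action['args']}"

def run(task: str) -> str:
    action = propose_action(task)
    if requires_approval(action):
        answer = input(f"Agent wants to run {action['name']}({action['args']}). Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return "Rejected by human; the agent should re-plan or stop."
    return execute(action)

if __name__ == "__main__":
    print(run("Handle the refund request for order A-123"))
```

The balance mentioned above lives in requires_approval: make the policy too broad and the human is interrupted constantly; make it too narrow and the loop is human in name only.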

It might sound trivial, but it’s easy to forget to include HITL as a design element, especially in multi-agent systems.

Key Milestones in HITL Evolution

I was reading J. C. R. Licklider’s Man-Computer Symbiosis from 1960 and found myself thinking again: “Man, we need to reread things from the past.” What strikes me is how precisely right Licklider’s focus was. He acknowledged that machines might one day surpass human cognitive abilities, but saw symbiosis as an essential interim phase – potentially the most intellectually rich and productive in human history. Why don’t we talk more about this, instead of fussing so much over a vaguely defined AGI? Anyway, Licklider was writing in the pre-HITL era, forming the vision.

Let’s discuss what followed as computer science and machine learning started to evolve →

Upgrade if you want to be the first to receive the full articles with detailed explanations and curated resources directly in your inbox. Simplify your learning journey →
