- Turing Post
🦸🏻#16: Co-Agency as The Ultimate Extension of Human
how AI as a medium shapes our perception, behavior, and culture
The Medium is the Message.
Not so long ago, "talking to a computer" meant typing commands into a terminal or clicking through stiff menus. Today, conversations with AI agents – capable of remembering context, interpreting our intentions, and collaborating on complex tasks – are becoming second nature. This shift is transforming creativity, productivity, and the very nature of our work. We're stepping into the era of human–AI co-agency, where humans and AI act as genuine collaborative partners – or "co-agents" – achieving results neither could reach independently.
Last time, we discussed Human in the Loop (HITL), the practical approach to human–AI collaboration. Today, we'll dive into the experiential side. It’s dense and super interesting! Read along.
What’s in today’s episode?
Mother of All Demos and Human-Computer Interaction Evolution
Where Are We Now with Generative Models?
Extensions of Man and Sorcerer’s Apprentice – Frameworks to Look at Our Co-agency:
Marshall McLuhan’s Media Theory: “The Medium is the Message” and Extensions of Man
Norbert Wiener’s Cybernetics: Feedback, Communication, and Control
Modern Human-AI Communication through McLuhan’s and Wiener’s Lenses
Conversational Systems: AI as a Medium and a Feedback Loop
Agentic Workflows: Extending Action, Sharing Control
Human-Machine Co-Agency and Co-Creativity
Designing for the Future of Human-AI Co-Agency
Looking Ahead: Experimental Interfaces and Speculative Futures
Final Thoughts
Mother of All Demos and Human-Computer Interaction Evolution
On a Monday afternoon, December 9, 1968, at the Fall Joint Computer Conference in San Francisco’s Brooks Hall, Doug Engelbart and his Augmentation Research Center (ARC) compressed the future of personal computing into a 90‑minute, live stage show that still feels visionary. The demo inspired researchers who later built the Alto, Macintosh, and Windows interfaces. Stewart Brand famously dubbed it “the Mother of All Demos,” and Engelbart’s focus on augmenting human intellect – rather than automating it – became a north star for human‑computer interaction research.
What the audience saw – for the very first time

Engelbart’s presentation was a manifesto for human‑computer co‑agency: people and machines solving problems together through rich, real‑time dialogue. Every modern chat interface, collaborative document, or video call echoes that December afternoon in 1968.
But for a long time, that was not a reality, even with all the chatbots and voice assistants. ChatGPT was the first to make it feel real. The funny thing is that the jump to a conversational interface in 2022 happened almost by accident:
“We have this thing called The Playground where you could test things on the model, and developers were trying to chat with the model and they just found it interesting. They would talk to it about whatever they would use it for, and in these larval ways of how people use it now, and we’re like, ‘Well that’s kind of interesting, maybe we could make it much better,’ and there were like vague gestures at a product down the line,” said Sam Altman in an interview with Ben Thompson.
Which, of course, makes total sense, considering that Generation Z (born roughly between 1997 and 2012) has grown up in a world where digital communication is the norm. As the first generation of true digital natives, their communication preferences have been shaped by smartphones, social media, and constant connectivity. A defining characteristic of Gen Z's communication style is a strong preference for texting over talking.
So OpenAI built that chatbot and started the GenAI revolution – not as a master plan, but as a casual detour that ended up rerouting the entire map. Tasks that once required navigating software menus or typing structured queries can now be done by simply asking in natural language. This represents a shift toward computers accommodating us, rather than us adapting to them. Here begins the era of dialogue as an interface.

For the pure love of history and to demonstrate the long evolution of human-computer interaction, check out this timeline I created for you in Claude. Click to interact:
Where Are We Now with Generative Models?
We’ve reached a moment where many of us have found our go-to models. One to chat and write with. One to code with. One to make pictures. One to use as an API while building products. Each fits into a different part of our digital routine – not because they’ve been assigned there, but because we’ve come to prefer them for specific things.
Some of these models have already begun to form a kind of memory. That changes everything. The experience becomes more tailored, more grounded. My ChatGPT understands me. I’ve learned how to work with it – and how to make it work for me. For instance, I’ve noticed that it’s better to ask if it knows something before jumping into a task (“Do you understand child psychology?”). That small interaction makes it feel like it’s thinking along with me. Like there’s a rhythm to how we collaborate.
I’ve heard the same from people coding with Claude. It just gets them. It doesn’t get me the same way, and that says something about where we are right now: we’re beginning to form lasting connections with these models, learning along the way the best way to address each one and how to build that mutual understanding – gently filling their nascent memory containers, shaping the way they respond and recall, personalizing them bit by bit.
But there’s a tension too. We’re scattered across so many models and platforms. Each offers a different interaction, a different strength – but also a different memory, or none at all. How do we keep the flow going across all of them? How do we teach the models we use who we are, when we’re constantly jumping between systems that don’t remember us? And how does all of this change the way we phrase requests and our other communication patterns?
This shift in communication preferences has had a significant impact on how technology companies design their products, particularly in the AI space. Companies also weigh their audiences’ preferences. The above-mentioned Gen Z, as digital natives, prefer the following:
Brevity and Visual Orientation: Gen Z communicates in concise, "bite-sized" messages, often just a few words paired with strong imagery.
Multitasking Across Screens: They seamlessly switch between devices and applications while communicating.
Immediate Response Expectation: Having grown up with instant messaging, they expect rapid responses.
Visual Communication: They often use images, emojis, and videos to express themselves rather than text alone.
Extensions of Man and Sorcerer’s Apprentice – Frameworks to Look at Our Co-agency
Lately, I’ve been thinking about different communication approaches and would like to offer a new perspective on Human-AI co-agency – through the works of Norbert Wiener and Marshall McLuhan. Two very different frameworks that, together, might help us navigate our new communication reality more effectively.