FOD#93: When AI meant Ambient Intelligence
and other stories from the past future, when assistants lived in desks and the future ran on buttons – a look back at the digital dreams that shaped today
This Week in Turing Post:
Wednesday, AI 101, Technique: More Attention – three types to discover: Slim Attention, Kolmogorov Attention, and XAttention
Friday, Agentic Workflow: Human-AI communication and Human-in-the-Loop (HITL) integration
LAST 5 HOURS to Upgrade for only $41 PER YEAR!
Just a heads-up: I won’t be offering any discounts in the future.
If you’re not looking for deeper dives but still want to show appreciation for our work, here’s a link to do that.
Imagined Futures, Remembered
It’s always a curious kind of fun to look back at the futures we once imagined – to sift through the wild sketches, grand claims, and ambitious prototypes, and see what stuck. What did we think knowledge machines would look like? How did we picture our schools, our offices, our cities, when AI still lived mostly in diagrams and sci-fi dreams?
This picture inspired this Monday’s edition:

Facetime with style, 1920s
So let’s trace a century of imagined digital futures. It turns out the past was remarkably good at anticipating the world we’re now building.
Start with Vannevar Bush’s Memex (introduced in the 1945 article “As We May Think”) – a vision of an electromechanical desk that could pull up documents on microfilm and link ideas at the speed of thought. It was bulky, mechanical, and analog – yet its spirit lives on in hypertext, personal knowledge bases, and even in the way we now use AI to summarize and connect our information flows. Bush didn’t invent the internet, but he helped imagine it.

In the 1950s, the future arrived with buttons. From 1958 to 1963, Arthur Radebaugh’s Sunday comic Closer Than We Think predicted the future, to the sheer enjoyment of its readers. In one of the first installments, he drew classrooms with console desks and teacher broadcasts, with students responding via push-buttons and cameras. The “Push-Button School of Tomorrow” may have looked kitschy, but its premise – personalized, machine-aided learning – is at the heart of today’s edtech and intelligent tutoring systems. His direction was eerily on track!

Then came the 1960s, when the World’s Fair gave the public a taste of interactive computing. Auto-Tutors. Fingertip shopping. Consoles for remote learning and video calls. HAL 9000 made his debut in 1968 in “2001: A Space Odyssey”, embodying disembodied AI – a concept that still shapes how we think about assistant technologies. Behind the fiction were serious thinkers like J.C.R. Licklider, who envisioned “man-computer symbiosis” (the paper) before most homes even had a TV remote.

It looks like it was created by Midjourney, but it’s a photo of an auto-tutor
By the 1970s, Xerox PARC was designing the Dynabook – a proto-tablet for kids to learn, create, and explore. It never shipped, but it lit the path for the iPad, the laptop, and the digital classroom. In the paper “A Personal Computer for Children of All Ages,” Alan Kay laid out how he envisioned it working.

In the 1980s, Apple released its Knowledge Navigator video — a folding tablet with a bow-tied, conversational AI that helped a professor prep for a lecture. It featured voice recognition, touch input, and seamless video calling. It looked fantastical. Now, it looks funny and too square.
In the early ’90s, AT&T’s iconic “You Will” campaign wrapped corporate futurism in sleek, cinematic charm. Narrated by Tom Selleck, the ads posed a simple question: “Have you ever…?” – followed by eerily prescient glimpses of life powered by invisible intelligence. Borrow a book from 1,000 miles away? Navigate cross-country without asking for directions? Pay a toll without stopping? Send a fax from the beach? You will. No robots, no androids – just everyday people using disembodied, networked intelligence. The campaign accurately forecast e-books, GPS, telemedicine, video calls, even smartwatches – long before any of it existed.
And by the early 2000s, Ambient Intelligence entered the scene. This was the era of the smart home, the digital city, and the intelligent billboard. MIT’s Project Oxygen described AI as freely available and always-on – like oxygen itself. The big shift was subtle: intelligence moved from the foreground (desks, gadgets, screens) into the background. It became environmental. It became invisible.
What’s striking, in all these retro visions, is how many of the core ideas have persisted. The interfaces changed. The form factors shrank. But the goals – augmenting memory, easing knowledge work, making environments responsive – remain steady.
Some ideas, of course, still haven’t landed. The fully automated teacher-less classroom? Still pedagogically thorny. The intelligent city that responds to our every need? A work in progress, often with more bureaucracy than brilliance. And that charming digital butler who anticipates your needs without being asked? Well, it’s complicated. Right, Apple?
But these old visions matter. Not because they got every detail right, but because they dared to imagine what digital assistance could mean at a human level. They gave designers, engineers, and researchers something to shoot for – a vocabulary of the possible.
We smile now at push-button classrooms and bow-tied agents – with a mix of affection, admiration, and a feeling that the future isn’t built from scratch. It’s composed, recomposed, and refined from the futures we once imagined. History, you are an endless source of inspiration.
Personally, I’d like to see more ambient intelligence – that’s AI we all might really need.
Welcome to Monday. Let’s build the next one. (And wonder what people 40 years from now will smile at when they look back at us.)
Curated Collections
🔳 Turing Post is now on 🤗 Hugging Face! You can read the rest of this article there (it’s free!) →