Chasing Robots
I'm Hosting a New Podcast, The Last Invention
Standing in line for coffee in Geneva, I realized a robot was following me. This was the seventh robot I’d met that day, at a UN summit called AI for Good. Most of those other robots were quite chatty. Some cracked jokes, some offered hugs and therapy. This one was brightly colored and spoke only in musical beeps, like a cross between R2-D2 and a picnic cooler. It had been trained to follow its owner through a crowd. It did this by learning your unique kinesthetic signature: your step, your gait. It could ‘read your skeleton.’ Indeed, read it so well that it could even follow you from the front — predict where you were going to go before you got there. Which seemed cool in practice but alarming as metaphor and loaded with questions:
Our skeletons are… data?
The robot we seem to be chasing is actually following us?
“Is this the future?” I asked the designer.
“I hope so!” he said.
Further down the hall, a mother lifted her baby up towards a tangerine-colored fox. This AI creature had a face straight out of manga: wide eyes taking up half the head, exaggerated button nose. The baby giggled. The robot reached out. The baby extended a finger. They almost touched. A real Michelangelo moment. Cameras clicked.
“Is THIS the future?” I asked.
Everyone was too busy taking selfies to answer.
The whole summit felt like that. Full of wonder and hype but with an edge of dread. The UN booths, the men in sunglasses pacing the halls, the sense that something big was about to happen that no one was ready for. Even the summit's name projected uncertainty: Did AI for Good mean that AI Is Good? Or that we must Make It Good?
On UN stages, sober people with PowerPoints gave presentations that sounded like Ridley Scott film scripts. A Harvard scientist described a pill he was working on to reset your age by 10 years. A neurotechnologist showed a headband made of plastic that could read your brainwaves. This little crown had enabled a paraplegic man to drive an actual F1 race car using only his mind. On another stage, a tech executive outlined a future with no human work at all.
Why I was in Geneva
I’m hosting a new podcast! It’s called The Last Invention. Listen to the trailer here:
Spotify • Apple Podcasts • YouTube
The podcast is brought to you by a company called Longview, launched by Andy Mills and Matt Boll, and features an impressive production team and interview cast. Read all about Longview’s mission here. I’ve known Andy Mills for over a decade. I first worked with him at Radiolab, where he produced one of the truest stories about lying I’ve ever had a chance to report, all about the truth-smudging smog that can fall on a city in the wake of a mass shooting. That story, and another Radiolab story on translation that aired a few weeks earlier, inspired the launch of my podcast Rough Translation, and Andy produced the first pilot while we were waiting for the green light from NPR. So I was happy to return the favor, eleven years later, when Andy called to tell me he was obsessed with the existential debate around AI and would I help him on this new show. Sure, I said, having no idea how far back in history a podcast about the AI future would take us.
Take the title: The Last Invention comes from a 1965 essay by mathematician I.J. Good, a colleague of the legendary computer scientist Alan Turing. Good argued that once we build a machine smart enough to invent smarter machines, we won’t need to invent anything ever again.
“The first ultra-intelligent machine is the last invention man need ever make.”
Walking around the expo in Geneva, I kept thinking about the double edge of that prediction. The last invention: machines will do everything for us from now on. The last invention: no more human ingenuity required. So what happens to us when we stop being the smartest species in the room? Do we feel relief? Do we feel terror? Do we feel small, like children again? It’s one thing to lose a job to a machine. It’s another to wonder if humanity is no longer in charge.
Is that our future?
A Rough Translation lens on AI
If you know me from my NPR podcast Rough Translation, you know my brain is wired to spot cultural misunderstandings. In Geneva, I watched technologists and diplomats stand on stage and talk right past each other. I saw people at war with their own passion for science and progress. “All day I proselytize about how great this is,” one inventor said, “and then at night I lay in bed and say... what am I doing?”
So over the next few months, I’ll be writing here a lot about artificial intelligence. Not as just another tech or business story. More as a cultural encounter between humanity and this new landscape. (Coming up in a future post: why we should think of AI not as a person, or a thing, but a place.)
So listen. Tell me what you think of the podcast. Share this post. Tell us all how you’re doing in the comments below. What does AI look like in your corner of the world? How does it feel? This is such a remarkable group, so I’d be very curious to hear your stance on superintelligence if you take 3 seconds to fill out this poll:
P.S. And speaking of remarkable readers, let me not leave without mentioning the astonishing generosity of a select few of you who quite literally changed Farmer Lydia’s life. I’ll say more about that next week when I’m not racing to launch, but wow. You rock.