How AI Inherits the Earth
A Birthday Party at the End of the World
The view from the swimming pool stretched across the vineyards of a luxury Napa estate. Elon Musk was poolside, celebrating his forty-something birthday – forty-four, if you believe The New York Times; forty-two, if you trust his biographer, Walter Isaacson. Beside him lounged his then-friend Larry Page, and together they argued about the future.
Page, co-founder and then-CEO of Google, was describing – according to those who later recounted the evening – his vision of a digital utopia: humans merging with intelligent machines, the combined species competing for resources across the solar system. Musk said that sounded less like utopia and more like extinction. Machines would win. Humanity would lose.
Page shrugged. If artificial intelligence came out on top, wasn’t that just the next step in evolution?
As Musk tells it, that’s when Page called him a “species-ist.”
“Well, yes,” Musk shot back. “I’m pro-human. I fucking like humanity, dude.”
The story is legendary. Even retelling it here, I can smell the chlorine and the Chardonnay. The argument reminds me of long-ago debates with my more radical environmentalist friends: is humanity worth preserving? Yet the question feels different in this setting, under the soft vineyard light. Here it has teeth; the men asking it are building the machines that could make their answer real.
And the idea Page was expressing – that perhaps the greatest good does not include the survival of humankind – has dogged AI since the field’s founding. Claude Shannon, the father of information theory, was part of the famous 1956 Dartmouth conference where “Artificial Intelligence” was christened as a field (and he is, reportedly, the namesake of the AI model “Claude”). He told Omni Magazine in 1987:
“I can visualize a time when we will be to robots what dogs are to humans.” He admitted later in that same interview, “And I am rooting for the machines!”
Alan Turing – star of Episode 2 of The Last Invention, the man who gave the world the Turing Test and some of the earliest dreams of a thinking machine – told the BBC in 1951 that he was so confident machines would one day think better than we do that “at some stage therefore we should have to expect the machines to take control.”
More recently, AI ethics researcher Dan Faggella has argued that the ultimate goal of AI development should be the creation of a worthy successor “with more capability, intelligence, ability to survive, and (subsequently) moral value than all of humanity.”
The Worthy Successor, as Faggella defines it, is “A posthuman intelligence so capable and morally valuable that you would gladly prefer that it (not humanity) determine the future path of life itself.”
Gladly prefer?
Really?
Does my cat gladly prefer that I’m the one who decides when and how much food to put in her bowl?
Do the critters in my kitchen gladly prefer that I remembered to buy glue traps at the hardware store?
Does the tiger in the zoo gladly prefer her human-designed steel cage?
Probably not.
But perhaps that’s the point: humans are flawed rulers of this earth. That is what makes the Worthy Successor theory so morally confusing – it inverts the idea of human progress.
Will humanity invent a technology so advanced that it leaves humanity behind? Should it?
And if that happens, will we – will our children – welcome it gladly?
A longer version of this post first appeared on the Longview Substack.
Listen to Episode 6 of The Last Invention, out now. And please share this post with whomever you’d most like to bring into this debate.

The Musk-Page poolside argument crystallizes something that haunts me about alignment research. If the people building these systems are split on whether humanity’s survival is even desirable, how can we trust the safety mechanisms they’re building? The Worthy Successor theory makes me think about how often we frame technological progress as inevitable, when really it’s a choice.