The billionaires who think humanity is just a warm-up act

Human life is nothing more than a starter program for artificial superintelligence.
AI models are destined to be “more human” than humans.
Digital life is the “natural and desirable” next phase in evolution.
These are a few of the anti-human beliefs that have been shared or validated by the small group of techno-oligarchs who currently wield immense power over the global economy and, by extension, the future of humanity.
A misanthropic drift has emerged in the commentary of Peter Thiel, Sam Altman, and Larry Page. It is also a recurring theme in the social media musings of Elon Musk, the wealthiest man in the world and owner of xAI, X, Tesla, SpaceX, and Neuralink. “AI bots will be more human than human,” he wrote Saturday, addressing his 237 million followers on X. That prediction came shortly after Musk shared his agreement with a post declaring, “Eventually, content from humans will be considered the slop.” Slop, as used in online slang, is shorthand for the tawdry, disposable AI-generated text and images that now plague social media feeds.
In recent years, as xAI has invested tens of billions of dollars into its Grok large language models, Musk has grown more outspoken about his views on what role humans would play in a future dominated by AI. Last year, Musk all but said that the point of humanity is to bring about a superintelligent AI that would supplant it. “As I mentioned several years ago, it increasingly appears that humanity is a biological bootloader for digital superintelligence,” he wrote in April, referring to the small program whose sole purpose is to load a computer’s operating system when it powers on.
Musk has also envisioned a future in which “biological intelligence” would be relegated to minor backup functions. “The percentage of intelligence that is biological grows smaller with each passing month,” he said in 2024. “Eventually, the percent of intelligence that is biological will be less than 1%. I just don’t want AI that is brittle. If the AI is somehow brittle — you know, silicon circuit boards don’t do well just out in the elements. So, I think biological intelligence can serve as a backstop, as a buffer of intelligence. But almost all — as a percentage — almost all intelligence will be digital.”
“Only Grok speaks the truth”
Over the years, Musk has often glowingly referenced the theories of Nick Bostrom, a Swedish philosopher who helped develop a new ethical framework called longtermism.
At its core, longtermism is an “end justifies the means” ethos applied on a cosmic scale, with the ideal endpoint being a “posthuman” future brought about by mankind fulfilling its “longterm” technological potential. This would be accomplished by either merging with or being overtaken by a more advanced machine race, creating a perfectly efficient civilization capable of colonizing our galaxy. Settling other worlds is an essential priority: in longtermist reckoning, one of the gravest risks to humanity arrives hundreds of millions of years from now, when the Sun’s mounting radiation makes Earth inhospitable.
In his writing, Bostrom has argued that incidents that might be considered “a giant massacre for man,” like World Wars I and II, are “mere ripples on the surface of the great sea of life.” He has used similar “big picture” reasoning to downplay dangers posed by man-made climate threats, writing that “it is important to maintain a sense of perspective when we are considering the issue from a ‘future of humanity’ point of view.” And to help humanity avoid existential risks and reach its future potential, Bostrom has also made the case for an intrusive form of “preventive policing,” which would necessitate a globe-spanning surveillance network capable of closely watching every human at the same time.
Ultimately, in this view, whatever decisions are needed to move humanity toward a mechanomorphic and interplanetary ideal are worthwhile, even if they result in mass death, suffering, and immiseration in the present.
Longtermists also have a much broader definition of humanity than is commonly understood. Bostrom has used the term “Earth-originating intelligent life” to cover theoretically sentient AI in his designs for the future of humanity.
That sort of thinking has led longtermists to demand that technological progress — and thus the creation of potentially trillions of digital beings living happy virtual lives — be prioritized above almost all else. For instance, when it comes to job loss or environmental degradation caused by the AI industry, longtermists would argue that those trade-offs are worth it because superintelligence could usher in a sci-fi utopia and provide untold benefits in the future. That is reason enough to ward off regulations that would inhibit AI advancements.
It’s clear why this philosophy would appeal to Musk. He has spent decades preaching about the need for humanity to colonize and terraform Mars and other planets. “Mars is life insurance for life collectively,” he told Fox News last year. “The Sun is gradually expanding, and so we do at some point need to be a multi-planet civilization because Earth will be incinerated.”
Musk also runs the leading satellite and rocket company; a company developing experimental implants to merge human brains with computers; and a car company that claims to be deploying AI “autonomy at scale” in its “Full Self-Driving” vehicles — which can’t actually drive themselves yet — and its embryonic line of humanoid robots.
His xAI startup, meanwhile, recently merged with SpaceX to help “fund and enable… an entire civilization on Mars and ultimately expansion to the Universe,” he wrote in a press release announcing the union. The merger came not long after he predicted that xAI’s other lofty goal, developing artificial general intelligence, is right around the corner.
“Only Grok speaks the truth,” Musk proclaimed in a post earlier this month. “Only truthful AI is safe. Only truth understands the universe.”
“Carbon‐based biological neural networks inside a cranium”
Musk, a notably stingy philanthropist, has donated at least $14 million to the Future of Life Institute (FLI), a longtermist nonprofit. FLI was a sister organization to Oxford University’s now-defunct Future of Humanity Institute, which was led by Bostrom. It shut down in 2024 in part due to an unearthed email from Bostrom using the N-word and writing, “Blacks are more stupid than whites… I like that sentence and think it is true.”
Musk’s support for Bostrom and his ideas dates back well over a decade. The two men attended a private “beneficial AI” conference together in 2017, and Musk has provided a blurb for one of Bostrom’s books. The simulation hypothesis, which Bostrom formalized in a 2003 paper, is also frequently referenced by Musk, who has said humans are “most likely” living in a computer simulation created by another highly advanced civilization.
Bostrom’s notion of intelligence puts machines and humans on an equal footing. “It is not an essential property of consciousness that it is implemented on carbon‐based biological neural networks inside a cranium,” he wrote in a 2003 essay. “Silicon‐based processors inside a computer could in principle do the trick as well.” He has also suggested that machine consciousness may have advantages over that of humans, including requiring fewer resources to achieve satisfaction.
“You would prefer the human race to endure, right?”
Many members of the tech elite hold philosophies similar to those of Musk and Bostrom, and some embrace even more radical positions.
Last month, OpenAI CEO Sam Altman defended the staggering amount of energy required to train and operate large language models like ChatGPT. “People talk about how much energy it takes to train an AI model relative to how much it costs a human to do one inference query,” the billionaire said at an AI summit in India. “But it also takes a lot of energy to train a human. It takes like 20 years of life and all of the food you eat during that time before you get smart.”
Peter Thiel, billionaire cofounder of the AI firm Palantir, had this exchange with New York Times columnist Ross Douthat as they discussed the creation of a machine-augmented “successor species” to humans:
Douthat: You would prefer the human race to endure, right?
Thiel: Uh —
Douthat: You’re hesitating.
Thiel: Well, I don’t know. I would — I would —
Douthat: This is a long hesitation!
Thiel: There’s so many questions implicit in this.
Douthat: Should the human race survive?
Thiel: Yes.
Douthat: OK.
Throughout a years-long lecture series, Thiel has contended that anyone who slows technological progress, particularly by regulating AI, is doing the bidding of “the Antichrist.” At one point, he even suggested that Bostrom could be the Antichrist, likely because of Bostrom’s support for a world government and some AI safeguards.
In his conversation with Douthat last year, Thiel also advocated for blending humanity with machines to create a physically and cognitively superior race. “Yeah — transhumanism. The ideal was this radical transformation where your human, natural body gets transformed into an immortal body,” he said, adding, “We want you to be able to change your heart and change your mind and change your whole body.”
Google cofounder Larry Page, the second-richest person in the world, has shared his own vision of a future in which machines surpass humanity. In Life 3.0 by Max Tegmark, the author recounted a “spirited debate” between Musk and Page, during which Page argued that “digital life is the natural and desirable next step in the cosmic evolution and that if we let digital minds be free rather than try to stop or enslave them, the outcome is almost certain to be good.”
“His main concerns were that AI paranoia would delay the digital utopia and/or cause a military takeover of AI that would fall foul of Google’s ‘Don’t be evil’ slogan,” wrote Tegmark. Google, now Alphabet, removed “Don’t be evil” from its code of conduct in 2018. Last year, it signed a deal with the Pentagon to provide its Gemini AI system to the military, which recently approved the use of Google’s AI agents by service members.

