The AI Escapism Loop: The Quest to Flee Earth and Responsibility
A Review of Adam Becker's "More Everything Forever".
Introduction
The world is being rapidly reshaped by the decisions of a few powerful AI tech leaders and their philosophies. But what do they actually believe? In this review of Adam Becker’s (2025) “More Everything Forever”, we take a look at the core belief systems and philosophies driving Silicon Valley’s elite AI tech leaders. Becker (2025) brilliantly analyzes Silicon Valley’s AI tech leaders’ shared narrative, a belief in godlike AI, intergalactic empires, and a quest to conquer death, and argues these are not just harmless fantasies [1]. Becker (2025) makes the case that this “ideology of technological salvation” is a strategic tool, one that uses immense wealth to justify its own power, escape regulation, and ignore the very real, human problems their technologies create today, in what I call “the AI Escapism Loop”. This analysis examines Becker’s critique and explores the profound implications this worldview has for the future of human agency and a democratic society, culminating in ways humanity can break free from the AI Escapism Loop.
Image Source: Amazon, “More Everything Forever”.
The AI Escapism Loop
The AI Escapism Loop is the systemic, cyclical pattern of AI and tech irresponsibility purposefully created by the architects of AI to maximize user engagement and profit. It builds on a well-known problem, the “AI Responsibility Gap” [2], and offers an explanation for the arrangement that creates and maintains that gap. In their 2021 paper, Santoni de Sio and Mecacci described the AI Responsibility Gap as not one issue but four distinct problems [2]. They identify three backward-looking gaps: (1) the Culpability Gap, where it’s hard to blame any person for an AI’s wrongdoing; (2) the Moral Accountability Gap, where a human operator cannot explain the reasons for an AI’s decision; and (3) the Public Accountability Gap, where public bodies avoid scrutiny for their use of opaque AI systems. They also identify one forward-looking gap: (4) the Active Responsibility Gap, where designers and managers fail to take proactive responsibility for preventing future harms. The authors propose “Meaningful Human Control” as the comprehensive solution to address all four gaps [2].
The AI Escapism Loop reframes these four gaps not as separate problems, but as the interconnected stages of a single, self-reinforcing system of irresponsibility operating at both the operational and strategic levels of AI. The loop includes the following “escapism” mechanisms: the AI architects’ ideological escape creates the Active Responsibility Gap by ignoring present-day harms. The corporations’ structural escape, which uses lobbying and plausible deniability, exploits and widens the Public Accountability Gap. The platforms’ economic escape of blaming the algorithm hard-codes the Moral Accountability Gap. Finally, the users’ social escape, where the human becomes the scapegoat for the system’s failures, is the Culpability Gap. The Escapism Loop thus shows that these gaps are not just passive failures but the functional, and often intended, outcomes of a system designed to distribute denial while maximizing profit.
The AI Escapism Loop Philosophy
As such, the AI Escapism Loop philosophy is an ideology, often implicit, held by a powerful segment of the AI tech elite, as described by Becker (2025). It is not about using AI to escape per se, but about building AI to enable a grand, permanent escape from human-centric responsibility. The AI Escapism Loop is not just an ideological fantasy held by some AI architects; it is a practical, multi-layered system of irresponsibility being pushed into the entire AI tech ecosystem. Borrowing from Becker’s (2025) analysis, the philosophy manifests in four key ways:
Escaping the Present: It strategically devalues today’s real-world problems (like algorithmic bias, labor exploitation, or inequality) by framing them as trivial “distractions” compared to a speculative, long-term AI-driven future (either utopia or apocalypse).
Escaping the Planet: As noted by Becker (2025), it obsesses over “dark utopian” and transhumanist visions of transcending physical limits, from conquering death to colonizing space, viewing Earth itself as a finite problem to be “solved” or abandoned.
Escaping Ethics: As described by Becker (2025), it champions a “humanities denial”, willfully ignoring the complex lessons of history, ethics, and social science. This “engineer’s disease” provides a “rhetoric of inevitability” that absolves creators from moral responsibility for their inventions.
Escaping Accountability: Furthermore, as Becker (2025) shows, it is a fantasy mechanism that seeks to create a new frontier, whether in Artificial General Intelligence (AGI), Super-AI, or space, that is, by design, beyond the reach of democratic oversight, social limits, and public accountability.
The Four Levels of the AI Escapism Loop
The AI Escapism Loop is a systemic, multi-layered phenomenon of accountability evasion that enables and incentivizes every actor in the AI ecosystem to evade responsibility for real-world harm. This philosophy operates on four distinct levels, creating a looping chain of mutual denial where harm is widely distributed but responsibility is never claimed:
The AI Architects (Ideological Escape): This is the philosopher-king level. Tech leaders, their intellectuals, and accelerationists escape responsibility for present-day harms (like bias or labor exploitation) by justifying their work with speculative future goals. They focus on saving humanity from AGI or transcending Earth, framing any attempt at present-day regulation as a distraction or, worse, an existential threat to progress.
The Corporations (Structural Escape): This is the CEO and policy level. Tech companies escape responsibility by designing for plausible deniability. They lobby for ethics-washed regulations that lack enforcement, champion self-governance to avoid real laws, and build products (like addictive social feeds or misinformation-prone models) for which they disclaim liability, all while pointing to the user as the one responsible for misuse.
The Platforms (Economic Escape): This is the ad-tech and monetization level. Platforms and their partners (like ad agencies) escape responsibility for the content they amplify by claiming to be neutral conduits. They are economically incentivized by the harm: misinformation, outrage, and deepfakes generate millions of views, yet the platforms evade all ethical responsibility for the societal damage they profit from.
The Users (Social Escape): This is the final link in the chain, the one that loops back to the AI architects and their platforms. General users are by default caught in the escapist narrative that views AI as an “inevitable,” “godlike,” or “neutral” force, thus eliminating human agency. Any harm, such as harassment, deepfakes, or propaganda, is rationalized by platforms hiding behind notions of “freedom”, “free expression”, and “fighting censorship”, shifting the burden onto users to rely on so-called community content moderation and fact checking. Users usually become victims, caught in an endless loop of apparatus and algorithms designed to maximize their interactions, with dependency and harm as the consequences. This leaves users powerlessly returning to the AI architects, corporations, and platforms for recourse, resulting in limited, if not depleted, human agency. The system is meant to financially empower the architects, not the people.
The AI Escapism Loop is used by the powerful AI elite to avoid any measure of AI responsibility while harvesting the enormous financial benefits of such a fractured system. It is an irresponsibility machine with no one at the wheel, and users are left to deal with the aftermath of architectural design choices that harm them. It is at this point that we turn our attention to the ideological and philosophical underpinnings of this AI Escapism Loop that Becker (2025) brilliantly outlines and describes.
Concerning Transhumanism
Becker (2025) analyzes the shared technological philosophy among Silicon Valley leaders like Eliezer Yudkowsky, Sam Altman, Jeff Bezos, and Elon Musk. He portrays a narrative where figures such as Yudkowsky, of the Machine Intelligence Research Institute (MIRI), view the creation of AGI and Super-AI machines capable of outperforming humans at all tasks as inevitable. This belief underpins a broader transhumanist vision aimed at transcending human limitations like death and expanding civilization across the stars. Becker argues this sci-fi-fueled future featuring intergalactic empires and godlike AI is presented by tech billionaires as a moral imperative, the only alternative to stagnation or extinction. He critiques these claims as deeply questionable, suggesting they serve primarily to justify the tech elite’s present-day pursuit of wealth and power, granting them control over the public imagination by framing the debate [3].
“…Transhumanism is the belief that we can and should use advanced technology to transform ourselves, transcending humanity and becoming something more. That generally involves finding ways around the limits of the human body— ending illness, aging, and death— as well as increasing intelligence and other mental capacities. But it also carries with it various promises about the future of humanity (or transhumanity) itself: that our fate lies in the stars, that we will build an intergalactic civilization, that we will reshape the universe to our desires just as we will have reshaped ourselves… Altman’s power fantasies and Yudkowsky’s nightmares are pieces of a bigger picture of the future, one shared by many of the wealthiest and most influential people in the tech industry. That future is straight out of science fiction: people’s minds uploaded into computers to live for all eternity in a silicon paradise, watched over by a benevolent godlike AI; a ceaselessly expanding empire spanning the stars, disassembling planets, and consuming galaxies; all needs satisfied, all fears assuaged, all desires sated through the power of unimaginably advanced technology…” Pages 1-6
Becker (2025) describes several core philosophies common among tech leaders. This includes Transhumanism, the belief that technology should allow humanity to transcend biological limits like aging and death, and a deterministic belief in Super-AI Inevitability, which posits that a superhuman AI is an unavoidable future event. This worldview often embraces Technological Solutionism, the idea that complex social problems like inequality are simply engineering challenges. It is also linked to Cosmic Expansionism, the moral conviction that humanity must become an intergalactic species to avoid extinction. Becker suggests this combination provides a Billionaire Absolution, justifying the accumulation of immense wealth by framing the growth of their businesses as a moral imperative to save humanity [3].
Concerning Effective Altruism
According to Becker (2025), several philosophical concepts combine to form what he calls the “ideology of technological salvation”. This pipeline often starts with Effective Altruism, a utilitarian movement to find optimal ways to help people. A specific strain of this, Longtermism, argues for the moral importance of maximizing the existence of future people, implying a need for cosmic expansion. This connects to the Singularity, the belief that accelerating technology will soon create a superintelligent AI, leading to a utopia. The counterpart of this optimism is AI Alignment, the concern that an unaligned AI poses the single greatest existential threat to humanity. Becker characterizes this combined worldview as reductive (turning all problems into tech problems), profitable for the tech industry, and offering a form of transcendence from real-world limitations. Becker (2025) analyzes Effective Altruism (EA) as the philosophical root of longtermism, a utilitarian-based ideology that prioritizes maximizing humanity’s long-term future, often justifying cosmic expansion. He links this directly to Silicon Valley’s ideology of technological salvation, which includes concepts like the Singularity and a focus on AI alignment. Becker critiques this entire worldview as a reductive, growth-obsessed philosophy. He argues that by focusing on speculative future apocalypses, it provides a profitable excuse for tech leaders to ignore present-day human suffering and systemic problems [4].
“…a central concept of effective altruism: “Earn to give,” the idea, roughly, that one of the best ways to make the world a better place is to make a large amount of money, and then donate much of that money to worthy causes that help people… Effective altruism seems relatively straightforward on the face of it: evaluate the best ways to make the world a better place, and then devote as much money and time as you can to those efforts…” Page 10.
The ideology of technological salvation, as Becker (2025) notes, is fundamentally opposed to a human-centric perspective on AI. A human-centric approach is grounded in the tangible well-being, dignity, and autonomy of people living now, focusing on immediate issues like fairness, bias, and social impact. In contrast, the longtermist worldview devalues current human suffering by prioritizing the welfare of hypothetical future populations, making present-day problems seem insignificant. Furthermore, narratives like the Singularity or AI alignment frame the future as a technical problem rather than a social space that humans must actively shape. This deterministic view, as Becker notes, allows proponents to focus on abstract, invented problems while ignoring the real-world harm their technologies may be causing today, thus keeping users in the AI Escapism Loop, an impediment to human agency.
Furthermore, Gleiberman (2023) expounds on the term transhumanitarianism, a strategic rebranding of transhumanism designed to embed its goals within the Effective Altruism (EA) movement. This framework reframes speculative aims, like radical life extension, digital minds, and mitigating existential risks, as humanitarian efforts to “save lives” and ensure “global flourishing”. By positioning these as the logical next step for global aid, transhumanitarianism acts as a bridge, allowing EA to present these long-term, technologically focused priorities as the most “effective” use of resources. In this account, EA serves as a Trojan horse, diverting significant funding and attention away from immediate, solvable issues in the Global South toward abstract, future concerns that may primarily benefit a post-human elite [5].
Concerning the Singularity
In his analysis of Kurzweil, Becker (2025) outlines a philosophy centered on the Singularity, the idea that exponentially accelerating technology will soon culminate in a post-human utopia. This concept is driven by Kurzweil’s Law of Accelerating Returns, which posits that progress inherently speeds up as an evolutionary process. The critical goal of this transformation is Cosmic Saturation, a future where merged human-AI intelligence expands to convert all “dumb” matter in the universe into “sublime intelligence”. Becker notes Kurzweil’s rhetoric of Technological Inevitability, which frames this future as a deterministic certainty rather than a choice. According to Becker’s critique, this rhetoric provides Technological Absolution, serving to absolve inventors of moral responsibility for any negative consequences of their creations. The Singularity is another central, foundational philosophy related to transhumanism and held by a number of AI architects. As Becker (2025) notes, the Law of Accelerating Returns posits that exponential technological progress is inevitable, leading to a utopia where humans merge with AI, end disease, and “wake up” the universe by saturating it with intelligence. Becker critiques this foundation, arguing that Kurzweil’s evidence, such as Moore’s Law, has been misinterpreted and actually demonstrates diminishing returns. He argues Kurzweil’s “rhetoric of inevitability” is a dangerous philosophy that provides absolution for tech creators, removing moral accountability for the effects of their inventions by framing the future as a predetermined, inhuman force [6].
“…Yet Kurzweil, and others of his ilk, frequently discuss technology as if it’s an implacable, inhuman force with its own desires, running down a path that has absolutely nothing to do with the collective choices of humanity. This is present throughout Kurzweil’s claims about the Singularity: we will have nanotechnology, we will saturate the universe with our intelligence, rather than we may or we could choose to. This rhetoric of inevitability serves several convenient purposes. For the people developing their ideas into technology, such rhetoric offers absolution for any unpleasant and unforeseen consequences of their inventions, because they were only uncovering the already extant course of technology, rather than steering it themselves. When there’s no room for human agency, there’s no room for moral responsibility either…” Page 87.
As Becker (2025) shows, the philosophical framework attributed to Kurzweil is fundamentally incompatible with a human-centric perspective. A human-centric approach is defined by the primacy of human agency, democratic governance, and the demand for ethical accountability. Kurzweil’s “Singularity” philosophy is the direct opposite, a form of technological determinism that, as Becker notes, “leaves no room for human agency”. By framing the future as an inevitable technological progression, it treats humanity as a transitional phase toward a post-human state. This rhetoric of inevitability serves to remove ethical debate and provides absolution for technologists, a concept antithetical to the human-centric demand for moral responsibility. The implied governance strategy is not human-led steering but simple acceleration, as the goal is to transcend humanity, not empower it.
Concerning the Alignment Problem
Becker (2025) outlines the rationalist movement’s philosophies, a group of thinkers influenced by Eliezer Yudkowsky and Nick Bostrom’s AI Alignment Problem, the fear that a Super-AI will inevitably cause human extinction unless perfectly aligned with our values. This fear is supported by the Orthogonality Thesis, the idea that an AI’s high intelligence is separate from its final goal, meaning it could pursue a trivial objective with world-ending consequences, such as destroying the known world to create paperclips. This worldview presents a stark Extinction-or-Paradise Binary, offering no third option besides utopia or total annihilation: either Super-AI-driven extinction or a transhumanist paradise. Becker (2025) notes that proponents of this view often dismiss current, real-world AI harms like algorithmic bias, which disproportionately affect marginalized groups, as distractions from the primary existential threat. He argues the alignment narrative is a harmful intellectual sleight of hand used to ignore systemic issues. Becker traces this ideology’s roots to Extropianism, revealing its deep connections to eugenics, scientific racism, and anti-democratic policies [7].
“…Like Kurzweil and the Extropians and transhumanists before them, they dreamed of a limitless future in space. They saw death as an avoidable evil, or at least one that could be postponed for billions of millennia. Most of all, they shared a belief that they were saving the world from the imminent danger of a misaligned AGI—and ushering in a paradise with the help of an aligned AGI instead…” Page 102.
“…Eliezer Yudkowsky thinks you’re probably going to get murdered. But, he says, it’s nothing personal. “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” he writes…Losing a conflict with a high-powered cognitive system looks at least as deadly as ‘everybody on the face of the Earth suddenly falls over dead within the same second… The problem, as Yudkowsky sees it, is that a superintelligent AI is not likely to want the same things we do. Hence his concern with the alignment problem: finding a way to ensure that any future AGI shares human values and goals, including basic ones like not killing off humanity and destroying the world… Timnit Gebru, AI scientist and founder of the Distributed AI Research Institute…dismisses the alignment problem as a way to “think about these cool sci-fi things without having to contend with the real world…” Pages 91-108.
The rationalist AI philosophy is fundamentally hostile to a human-centric approach and does not prioritize solutions to the immediate, tangible harms affecting real people, especially the vulnerable. Rather, the rationalist ethic focuses on speculative, long-term existential threats, dismissing present-day issues like algorithmic bias as “distractions”. Becker (2025) argues this worldview subverts human agency by diverting vast resources away from solvable social problems. He also identifies its governance strategy as “ideological capture,” using “sci-fi fantasies” to influence global policy. Critically, Becker’s analysis reveals this ideology as the precise antithesis of a human-centric worldview. He notes that it devalues current human suffering and highlights its roots in eugenics and “human biodiversity,” and its tolerance for scientific racism. Becker argues this foundation is built on the premise that “some kinds of people are inherently better than others,” which stands in direct opposition to a human-centric view that demands equity, justice, and a focus on all of humanity, not just a rational and powerful few.
Concerning Longtermism and its Dehumanization
Longtermism is another foundational philosophy held by a number of AI architects, and is related to effective altruism. As Becker (2025) explains, longtermism is a philosophy built on Total View Utilitarianism, which creates a moral imperative to maximize the total number of potential future lives. This framework leads to Existential Risk Prioritization, the belief that hypothetical, far-future risks like misaligned Super-AI are vastly more important than tangible, current problems. Becker notes its methodology relies on Speculative Mathematical Ethics, using vast, hypothetical numbers, such as 10^24 potential future lives, to guide present actions and dismiss present problems. This calculation results in the Devaluation of Present Lives, leading to claims that AI safety research is “trillions of times” more valuable than distributing malaria nets. Becker also highlights its foundational Rich Country Value Hierarchy, which argues that saving lives in richer, “productive” nations is more important for securing the long-term future. He points to Nick Beckstead’s foundational thesis, which concluded that “saving a life in a rich country is substantially more important than saving a life in a poor country”. Becker (2025) terms this worldview Ethical Taylorism, an ideology that sees people as numbers to be optimized, dismissing unquantifiable values [8].
“…Longtermists, then, are making arguments with incredibly strong conclusions—funding AI safety research is trillions of times more cost-effective than preventing the spread of malaria! Saving a billion people today isn’t as good as a minuscule chance of saving 10^52 people who might exist someday! —based on arguments that rely on very small probabilities and that fall apart if those probabilities are wrong. And their estimates of those crucial probabilities are based on very little. Weigh that against the overwhelming evidence that there are people alive today who are in need, and the whole idea of longtermism looks shaky… For the longtermists, then, space isn’t just the location of the limitless future; it’s also part of the solution to the risk of human extinction. But on top of all that, the longtermists claim that we need to get to space—that there is “a moral case for space settlement,” as MacAskill puts it—and that failing to do so would be a cosmically grave tragedy…” Pages 170-176.
In “The Case for Strong Longtermism”, Greaves and MacAskill (2019) defend the longtermist philosophy by arguing that the expected impact on the far future is the most important feature of our actions today [9]. They claim that this view rests on two premises: the Vastness of the Future, which posits that the potential value of 10^24 or more future lives dwarfs all present concerns, and Tractability, the belief that we can effectively influence this long-term outcome now by reducing existential risks like unaligned AI. The authors defend this conclusion, arguing that its expected value is so high that it demands a radical shift in priorities. This philosophy directly challenges traditional humanitarianism, implying that resources from organizations like the Red Cross should be diverted from immediate suffering to long-term AI risk mitigation, for example, funding AI safety data centers over providing aid in Africa, because securing the vast future is considered the dominant moral priority according to Greaves and MacAskill’s calculations.
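To make the expected-value arithmetic concrete, consider an illustrative sketch using invented round numbers rather than Greaves and MacAskill’s own figures: if an intervention is assigned even a one-in-a-quadrillion (10^-15) chance of securing a future containing 10^24 lives, its expected value is 10^24 × 10^-15 = 10^9 lives, already on par with saving a billion people today with certainty; swap in the 10^52 potential people quoted above and the same minuscule probability yields an expected value of 10^37 lives, dwarfing any present-day intervention. The conclusion therefore hinges entirely on speculative probability and population estimates, which is precisely the fragility Becker (2025) highlights.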
There is no doubt that the philosophical framework of longtermism, as laid out by Greaves and MacAskill (2019), is profoundly antithetical to the entire human experience. The longtermist philosophy that Becker (2025) describes values human lives not inherently, but based on their “productivity” and “innovation” toward hypothetical, far-future Super-AI goals. This framework provides a direct rational justification for the diabolical conclusion that “a life in a rich country is substantially more important than saving a life in a poor country”, a statement that is a complete violation of the humanitarian ethic and of what it really means to be human. This ideology’s governance strategy aims to steer global resources away from current global problems to finance grandiose, sci-fi Super-AI projects at the expense of humanity. Furthermore, the “need for quantification” ideology that Becker (2025) exposes is itself dehumanizing. This dehumanizing Ethical Taylorism undermines human agency by reducing all human experience, such as culture, art, love, and community, to a single quantifiable metric that can be plugged into some grand longtermist plan. A human-centric view insists that these unquantifiable experiences are, in fact, the very purpose of a just society, not unimportant data points to be dismissed.
Concerning Effective Accelerationists
Related to longtermism is the effective accelerationist philosophy, which seeks to speed up AI innovation and progress at all costs. As Becker (2025) notes, this philosophy, shared by tech CEOs Jeff Bezos and Elon Musk, posits a binary choice: cosmic expansion or stagnation. Becker (2025) outlines the Techno-Optimist or Effective Accelerationist philosophy, championed by figures like Marc Andreessen. This view posits that technology, especially AI, is the cure for all ills and must be accelerated at all costs. While distinguishing themselves from AI doomers, the accelerationists share premises like the law of accelerating returns, concluding that slowing AI is “a form of murder”. This ideology rests on a Stagnation-Expansion Binary, arguing that humanity’s only choices are stagnation and death or perpetual growth through cosmic expansion, a move Elon Musk justifies as creating a “backup” for consciousness. Becker (2025) critiques this as a “fantasy of control” with authoritarian vibes that relies on a “blindness to history”, especially colonialism, to justify an anti-regulatory, anti-expert worldview. Becker notes that this “growth at all costs” mentality is no different from the AI doomers’ own desire to control humanity’s future [10].
“…Those vibes—vibes of fascism, authoritarianism, and colonialism—are fundamentally about creating a fantasy of control, the ultimate drug offered by the ideology of technological salvation. And once again, the distance between Andreessen’s effective accelerationist camp and that of the effective altruists and rationalists is vanishingly small. The effective altruists and rationalists also want control, control over the superintelligent AI that will set humanity on the best path to the future of highest value. (They have also deployed similar racist and authoritarian rhetoric along the way.) Andreessen seems to want that too; he just differs about how to get there, what constitutes value, and who should be in charge of telling the superintelligent AI what to do as it determines our future. If you want a picture of that future, imagine a billionaire’s digital boot stamping on a human face—forever…” Page 249.
The Techno-Optimist ethical framework is a growth-based consequentialism in which accelerating technology is the only moral good, and slowing it is considered unethical or “a form of murder”. This worldview treats human agency and collective or democratic oversight as impediments, especially when such efforts call for accountability that slows AI-related progress. The effective accelerationist philosophy is incompatible with humanity, as it rejects the safety, equity, and expertise central to a human-centric model, the very checks and balances that would necessitate a slowdown. As Becker (2025) notes, the effective accelerationists’ vision, like that of the AI doomers it opposes, leaves no room for broad human agency, merely debating who should control an AI-determined future.
Secular Theology and the Ultimate Imperial Ambition
Becker (2025) concludes by contrasting the tech elite’s space-faring ambitions with a humanistic vision of community. He outlines the ideological foundations of this tech worldview, which he describes as a Secular Theology, a new faith promising transcendence and immortality through science. Becker (2025) cites what scholar Kate Crawford terms Imperial Utopianism: a desire to “escape all boundaries”, including ecological, ethical, and biological limits, and regulation, all driven by a fear of death. Additionally, Becker (2025) draws on the writer Meghan O’Gieblyn to argue that transhumanism and AI have become a substitute religion for a secular age, turning eternal questions like immortality into mere engineering problems. Becker (2025) diagnoses this flawed thinking as supported by Humanities Denial, a “willful ignorance” of non-STEM fields, and Engineer’s Disease, the belief that STEM expertise translates to all other fields, a thinking ethic endemic to tech culture. He identifies the Inevitability Rhetoric as a primary tool to suppress the public’s agency to imagine alternatives, and concludes that the core issue is Plutocratic Warping, the “reality-warping power of concentrated wealth” that successfully pulls these “fringe philosophies” into the mainstream. He closes with the call to remember that human agency can respond overwhelmingly to any dehumanizing power [11].
“…But why does it so often seem that the idea is to go to space in order to live forever? What does space have to do with immortality? The interdisciplinary scholar of AI Kate Crawford suggests an answer: Space has become the ultimate imperial ambition, symbolizing an escape from the limits of Earth, bodies, and regulation. It is perhaps no surprise that many of the Silicon Valley tech elite are invested in the vision of abandoning the planet. Space colonization fits well alongside the other fantasies of life-extension dieting, blood transfusions from teenagers, brain-uploading to the cloud, and vitamins for immortality…” Page 255.
“…So perhaps there’s something simpler, or at least older, at work here, alongside the ideology of technological salvation. Maybe it’s just that immortality comes from heaven. “Today artificial intelligence and information technologies have absorbed many of the questions that were once taken up by theologians and philosophers: the mind’s relationship to the body, the question of free will, the possibility of immortality,” writes essayist Meghan O’Gieblyn, author of God Human Animal Machine. “All the eternal questions have become engineering problems… What makes transhumanism so compelling is that it promises to restore through science the transcendent—and essentially religious—hopes that science itself obliterated,” she writes...” Pages 256-257.
The TESCREAL Bundle
It is important to note the work of AI ethics scholars Gebru and Torres (2024), who showed that Effective Altruism (EA), particularly longtermism, acts as a philanthropic front for a problematic set of ideologies they label the TESCREAL Bundle (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism). These ideologies are held by a number of AI architects in the AI tech community, knowingly or naively. The TESCREAL bundle encapsulates the analysis that Becker (2025) provides into a single framework of ideologies. More importantly, Gebru and Torres (2024) critique EA’s underlying utilitarian ethics as fundamentally flawed, asserting that this framework prioritizes saving a vast, speculative future of potential digital or super-intelligent beings over addressing current, tangible suffering and systemic injustices. They argue that this focus is immoral, diverting vast resources toward a techno-utopian vision for a select few while ignoring the urgent needs of billions. Gebru and Torres (2024) conclude that TESCREAL is an inherently hazardous, colonial, and authoritarian ideology that justifies extreme resource misallocation [12].
The AI Plutocracy Problem
Becker’s (2025) analysis shows that the core problem is not technological but a plutocracy problem. His investigation illustrates that the tech elite’s ethical failures stem from a humanities-denial problem that produces their ethical blind spots. This feeds a rhetoric of AI and tech inevitability that is a direct assault on human agency, and thus maintains the AI Escapism Loop. Becker (2025) proposes a taxation mechanism that would eliminate this plutocracy.
“…So here’s a specific policy proposal, one that’s even endorsed by a major science fiction author. “There should be no such thing as billionaires,” Kim Stanley Robinson tells me. “The Midas touch is not a happy thing—if you touch people and they turn to gold, then this is a serious barrier to intimacy. So it would be doing them a favor to tax them out of existence. The Republican Congress of 1953 under Dwight Eisenhower was pretty good at this—incomes over $300,000 a year were taxed at 92 percent for the overage. This was a society that understood the rich to be parasites and fools. We live in a stupider time, but we could change that.” The fact that our society allows the existence of billionaires is the fundamental problem at the core of this book. They’re the reason this is a polemic rather than a quirky tour of wacky ideas. Without billionaires, fringe philosophies like rationalism and effective accelerationism would stay on the fringe, rather than being pulled into the mainstream through the reality-warping power of concentrated wealth…” Page 287.
While Becker (2025) calls for taxing billionaires out of existence, I suggest, in the meantime, a more pragmatic approach to breaking out of the AI Escapism Loop.
Breaking the AI Escapism Loop
For the Architects: Mandate Present-Day Accountability. The ideological escape to a sci-fi future is broken by legally and professionally grounding designers in the present. Mandate rigorous, independent, and transparent Human-Centric Impact Assessments before a product is built. This is not a “check-the-box” exercise but a non-negotiable gateway that forces architects to answer for real-world harms before they can even access funding or a license to operate. This makes human-in-the-loop design and responsibility checks a core engineering and legal requirement, not a voluntary, feel-good suggestion. It changes the design process from “Can we build it?” to “How will this not harm people when we build it?”
For the Corporations: Impose Structural Liability. The structural escape of ethics-washing and plausible deniability is broken by making accountability structural, legal, and costly. This includes replacing vague self-governance frameworks with hard, enforceable laws on corporate liability. If a company’s product is shown to systematically cause harm, that company must be held legally and financially responsible for the damages. This fundamentally changes leadership’s risk calculation. When liability is structural, “escape” is no longer profitable. The CEO’s responsibility shifts from not getting caught to ensuring products are safe by design, protecting the company from existential legal risk.
For the Platforms: End the “Neutral Conduit” Defense. The economic escape of profiting from harm while claiming neutrality is broken by tying profit directly to responsibility. This includes introducing algorithmic liability. The current “neutral conduit” defense was created for a world where platforms hosted content, not one where their own algorithms proactively amplify and promote it for engagement. The new rule must be: “If your algorithm amplifies it, you are responsible for it”. This directly realigns the economic incentive. When platforms are liable for the misinformation or harm they amplify, they will suddenly find it highly profitable to build algorithms that favor safety, truth, and social well-being over raw, destructive engagement.
For Users: Re-center Human Agency. The social escape of “the AI did it” is broken by dismantling the godlike-AI myth and empowering the public. This includes a drive for a massive, society-wide shift in digital literacy and education. It means demystifying AI, moving the public narrative away from all-powerful, inevitable magic and toward the understanding that AI is a tool, built by humans, with specific goals, and that people remain in control. An empowered public no longer sees itself as a passive victim of “inevitable” technology. People are empowered to demand better products, support stronger regulations, and accept personal responsibility for how they use these tools, breaking the final link in the escapist chain.
References
[1] Adam Becker, “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity”, Basic Books, 2025, ISBN-13: 978-1541619593.
[2] Santoni de Sio, Filippo, and Giulio Mecacci. “Four responsibility gaps with artificial intelligence: Why they matter and how to address them.” Philosophy & Technology 34.4 (2021): 1057-1084.
[3] Adam Becker, “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity”, Basic Books, 2025, ISBN-13: 978-1541619593. Pages 1-7.
[4] Ibid., pp. 10-37.
[5] Gleiberman, Mollie. “Effective Altruism: doing transhumanism better.” Working papers/University of Antwerp. Institute of Development Policy and Management; Université d’Anvers. Institut de politique et de gestion du développement.-Antwerp (2023).
[6] Adam Becker, “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity”, Basic Books, 2025, ISBN-13: 978-1541619593. Pages 41-90.
[7] Ibid., pp. 91-145.
[8] Ibid., pp. 152-203.
[9] Greaves, Hilary, and William MacAskill. “The case for strong longtermism.” Essays on longtermism: Present action for the distant future (2019).
[10] Adam Becker, “More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity”, Basic Books, 2025, ISBN-13: 978-1541619593. Pages 206-249.
[11] Ibid., pp. 253-289.
[12] Gebru, Timnit, and Émile P. Torres. “The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence.” First Monday (2024).
Commentary: Kato Mivule

