'Foolhardy at best, and deceptive and dangerous at worst': Don't believe the hype — here's why artificial general intelligence isn't what the billionaires tell you it is
"Unfortunately, the goal of creating artificial general intelligence isn’t just a project that lives as a hypothetical in scientific papers. There’s real money invested in this work, much of it coming from venture capitalists."

The hype around artificial intelligence (AI) risks spiraling out of control as claims around the emerging technology escalate into the realm of the absurd. AI is a big-money business, write the authors of the new book, "THE AI CON: How to Fight Big Tech's Hype and Create the Future We Want" (2025), and the marketing fanfare we see is meant to promote the interests of big tech and do one thing: sell AI products.
In this new book, authors Emily M. Bender, professor of linguistics at the University of Washington, and Alex Hanna, director of research at the Distributed AI Research Institute, challenge our understanding of what AI is — and what it isn't. Ultimately, they attempt to see through a lot of the overblown claims and sensationalism to understand the true impact AI is having on society.
In this excerpt, the writers grapple with the idea of artificial general intelligence (AGI), the origins of that idea and what the term actually means. They argue that the definitions of AGI and a hypothetical "superintelligence" are fuzzy at best, and in practice serve only to feed the corporate AI hype machine.
If you listened to executives and researchers at big tech firms, you’d think that we were on the verge of a robot uprising. In February 2022, OpenAI’s Chief Scientist Ilya Sutskever tweeted "it may be that today’s large neural networks are slightly conscious."
In June 2022, the Washington Post reported that Google engineer Blake Lemoine was convinced that Google’s language model LaMDA was sentient and needed legal representation. Lemoine was fired over this incident — not for his false claims (which Google did deny), but for leaking private corporate information. In an August 2022 blog post, Google VP Fellow Blaise Agüera y Arcas responded to the Lemoine story, but rather than countering Lemoine’s claims, he suggested that LaMDA does indeed "understand" concepts and that the debate over whether or not LaMDA has feelings is not resolvable or "scientifically meaningful."
In April 2023, a team at Microsoft Research led by Sébastien Bubeck posted a non-peer-reviewed paper called "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," in which they claim to show that the language model GPT-4 can "solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, [and] psychology" and thus shows the first "sparks of artificial general intelligence."
The word "sparks" evokes an image of something about to catch fire and spread of its own accord. The phrase "artificial general intelligence" here is meant to differentiate from ordinary technologies called "AI," and is particularly common in modern discourse around thinking, sentient or conscious machines.
The eugenicist origins of "general intelligence"
Despite claims that machines may one day achieve an advanced level of "general intelligence," such a concept doesn’t have an accepted definition. (OpenAI has avoided the question by suggesting that they will allow their board to decide when their algorithms have achieved artificial general intelligence.) But the project of identifying general intelligence is inherently racist and ableist to its core, making the project of chasing artificial general intelligence foolhardy at best, and deceptive and dangerous at worst.
Microsoft’s "Sparks" paper contains a preliminary definition of general intelligence, one that has no references to fields that may have a say in such a thing, like psychology or cognitive neuroscience. Despite being a paper claiming that certain statistical models have shown the inklings of "artificial general intelligence," it offers no well-sourced definition of what the components of general intelligence are.
In a prior version of the paper, the authors cited a definition from a 1994 Wall Street Journal editorial signed by a group of 52 psychologists but penned by Linda S. Gottfredson in defense of Richard Herrnstein and Charles Murray’s 1994 book, "The Bell Curve." This book argues, among other things, that there are significant differences between the inborn intelligence of different racial groups, and that those differences are mostly due to genetics. Gottfredson, in her letter, claims that "genetics plays a bigger role than does environment in creating IQ differences among individuals" and that "IQs do gradually stabilize during childhood… and generally change little thereafter."
These claims about the inherent hierarchies of racial intelligence are not new, and studies of "general intelligence" have a long and sinister history. These claims are not "forbidden knowledge," as Murray and his defenders would have it; they are justifications for racism that are as old as the modern Western state and capitalism. Both the measurement of intelligence — namely IQ tests — and the concept of general intelligence are implicated in this sordid history. Bubeck and colleagues had no other source to cite for a definition of intelligence. Discussions of intelligence, pertaining to people or machines, are race science all the way down.
To Bubeck’s credit, when we notified him of the context and contents of Gottfredson’s letter, he and his coauthors quickly scrubbed the paper of the citation and of the associated definition. But this doesn’t erase the racist roots of the general intelligence project. General intelligence is not something that can be measured, but the force of such a promise has been used to justify racial, gender, and class inequality for more than a century. The paradigm of describing "AI" systems as having "humanlike intelligence" or achieving greater-than-human "superintelligence" rests on this same conception of "intelligence" as a measurable quantity by which people (and machines) can be ranked.
AGI and modern-day eugenics
Unfortunately, the goal of creating artificial general intelligence isn’t just a project that lives as a hypothetical in scientific papers. There’s real money invested in this work, much of it coming from venture capitalists.
A lot of this might just be venture capitalists (VCs) following fashion, but there are also a number of AGI true believers in this mix, and some of them have money to burn. These ideological billionaires—among them Elon Musk and Marc Andreessen—are helping to set the agenda of creating AGI and financially backing, if not outright proselytizing, a modern-day eugenics. This is built on the combination of conservative politics, an obsession with pro-birth policies, and a right-wing attack on multiculturalism and diversity, all hidden behind a façade of technological progress.
Tesla and X/Twitter owner Elon Musk has repeated common eugenicist refrains about population trends: notably, claims that there are not enough people and that humans (particularly the "right" humans) need to be having children at even higher rates. In August 2022, Musk tweeted, "Population collapse due to low birth rates is a much bigger risk to civilization than global warming." Musk has himself suggested that he is contributing to the project of increasing population, fathering at least ten children (that we know of). The white South African son of an emerald miner has noted that "wealth, education, and being secular are all indicative of a low birth rate," which is bad news for "successful" people having more kids. He would rather pursue a positive eugenic project in which these "successful" people have more children.
Marc Andreessen, founder of major venture capital firm Andreessen Horowitz, echoed Musk’s concern on far-right darling Joe Rogan’s podcast, remarking: "Right now there’s a movement afoot among the elites in our country that basically says anybody having kids is a bad idea… because of climate." Andreessen pushed against this, suggesting that elites from "developed societies" ought to be having more children.
Musk and Andreessen believe that we are on the precipice of artificial general intelligence. Oddly enough, they also believe that the development of AGI, done poorly, could spell the end of humanity, a belief that is known as "existential risk." You would think that dumping billions into AI research while also believing that AI could bring about the end of humanity would be at odds. And you’d be right.
But why do so many people involved in building and selling large language models seem to have fallen for the idea that they (might be) sentient? And why do so many of these same people spend so much time warning the world about the "existential risk" of "superintelligence" while also spending so much money on it?
In a word, claims around consciousness and sentience are a tactic to sell you on AI. Most people in this space seem to simply be aiming to make technical systems which achieve what looks like human intelligence to get ahead in what is already a very crowded market. The market is also a small world: researchers and founders move seamlessly between a few major tech players, like Microsoft, Google, and Meta, or they go off to found AI startups that receive millions in venture capital and seed funding from Big Tech.
As one data point, in 2022, 24 Google researchers left to join AI startups (while one of us, Alex, left to join a research nonprofit). As another data point, in 2023 alone, $41.5 billion in venture deals was dished out to generative AI firms, according to Pitchbook data. The payoff has been estimated to be huge. That year, McKinsey suggested that soon, generative AI may add "up to $4.4 trillion" annually to the global economy. Estimates like this are, of course, part of the hype machine, but VCs don’t seem to think that fact should stem the rush to invest in these tools.
This hype leans on tropes about artificial intelligence: sentient machines needing to be granted robot rights or Matrix-style super-intelligence posing a direct threat to ragtag human resisters. This has implications beyond the circulation of funds among VCs and other investors, most notably because ordinary folks are being told they’re going to be out of a job.