According to Eliezer Yudkowsky and Nate Soares, AI is going to kill us all. Ads for their book, If Anyone Builds It, Everyone Dies, are plastered across the NYC subway system, screeching in all caps: “We wish we were exaggerating.”
The authors say that if super-intelligent AI is created, it “will have weird, strange, alien preferences that they pursue to the point of human extinction.” They offer no explanation as to what those “preferences” might be.
Their argument rests on hypotheticals and fear of the technology’s uncertain future. They admit it relies on limited knowledge of how AI actually works. Nonetheless, they at times allude to the real problem—the profit motive.
Companies and nations are racing to make the “smartest” AI, regardless of the consequences. Based on what they’ve seen from Big Tech so far, most people are skeptical about the technology. Pew Research reported that 43% of US adults think AI will harm them personally.
The authors’ solution is to convince nations to ban further AI development—a utopian proposal. They go as far as recommending airstrikes as a means of stopping the technology if nations don’t comply with the international standards they advocate.
Beyond the muddle as to what might happen and how to fix it, the authors seem to miss what is right in front of us. The issue is not that one day AI might hurt people—capitalism is already using AI to kill Gazans, replace jobs, and surveil workers. It is yet another industry polluting the air and contributing to climate change.
We don’t need to worry about a hypothetical killer AI. The killer is present here and now: it’s capitalism.
But we’re not doomed to live in a capitalist dystopia. By overthrowing capitalism, we can avert that terrifying future. We could reorient AI to make our lives easier under a rational plan based not on profitability but on human need. To do that, the working class needs to take democratic control of the means of production.