


Why Is There an AI Hype?
May 2024

[Image: The tip of the pyramid was generated with craiyon.com, to which we added a collage of Sam Altman's face (from press coverage) and part of the cover of the book God Emperor of Dune.]

AI companies are worried that they're going to run out of data to train their LLMs and energy to power their data centers. This is a strange way to discuss a fundamental design flaw. It's not some quirk or fluke that a product is constrained by reality; designing within those constraints is the core of all engineering1 and science, though this is considered so self-evident that it normally remains unsaid. The problem with the Hindenburg was not the existence of fire, but a design that failed to account for fire. Why is it that the many intelligent people who work at these companies don't seem to see it that way? Why are they instead taken in by the hype? What's driving the hype? Why is there an AI hype at all?2

Let's begin with OpenAI, the vanguard of the AI hype. They're not just interested in LLMs, but in creating artificial general intelligence (AGI).3 From their brief about page:

OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity.

They also have an entire section of their site titled "Planning for AGI and beyond". You'll find that the other big companies (Anthropic, DeepMind, etc.) have similar copy.

Though most readers probably already know what AGI is, I argue that OpenAI has a definition that may not match your intuitive one. To piece it together, we'll have to examine not their marketing copy (shocking, I know), but OpenAI as an institution. An institution is, according to Philip Agre (who I'll be leaning on heavily throughout)...

... a stable pattern of social relationships. Every institution defines a categorical structure (a set of social roles, an ontology and classification system, etc), a terrain of action (stereotyped situations, actions, strategies, tactics, goals), values, incentives, a reservoir of skill and knowledge, and so on (Commons 1970 [1950], David 1990, Fligstein 2001, Goodin 1996, Knight 1992, March and Olsen 1989, North 1990, Powell and DiMaggio 1991, Reus-Smit 1997). The main tradition of computer system design was invented for purposes of reifying institutional structures (cf. Callon 1998).

The first step of writing software is generally to create a model of the world. It is the explicit task of the software developer to code a single interpretation of it. This will necessarily be a version of reality as seen through the institution that will use the code; otherwise it wouldn't be useful. For example, if a car dealership were to hire me to write an inventory management system, my code would have a concept of "salesperson," and another for "car," and maybe a third for "sale." Never mind that one of the salespeople is also a mother or a hockey player, or that cars can also be understood as this many pounds of steel, this much plastic, and so on. None of that will make it into the code, despite being no more or less real or true.
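To make this concrete, here's a minimal sketch of what that institutional model might look like in code. All of the names here are hypothetical, invented for illustration; the point is what the ontology includes and, more importantly, what it leaves out:

    from dataclasses import dataclass
    from datetime import date

    # The dealership's ontology: a person is a salesperson, a car is inventory,
    # and the relationship between them is a sale. Nothing else exists here.

    @dataclass
    class Salesperson:
        employee_id: int
        name: str  # that she is also a mother or a hockey player is not representable

    @dataclass
    class Car:
        vin: str
        make: str
        model: str
        price: float  # the car as pounds of steel and plastic is not representable

    @dataclass
    class Sale:
        salesperson: Salesperson
        car: Car
        sold_on: date

    # The only facts the institution cares to record:
    alice = Salesperson(employee_id=1, name="Alice")
    sedan = Car(vin="1HGCM82633A004352", make="Honda", model="Accord", price=27500.0)
    record = Sale(salesperson=alice, car=sedan, sold_on=date.today())

Everything true about Alice that the dealership doesn't need is, from the program's point of view, simply not real.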

If we adopt the institutional vantage point of a company, the human-replacement agent that is an AGI is not a human as we understand ourselves, just like the car salesperson wasn't a full human within the world of a car dealership. These companies are projecting their institutional understanding of what a human is, and we can see these projections clearly in ChatGPT, even in such basic facets as its format. First and foremost, human beings are, to capitalist firms, employees. They exist to be told what to do, then politely and unquestioningly do it. ChatGPT only speaks when spoken to. When it does speak, it does so in the most hollow and saccharine way, completely bereft of emotion save for that professional cheeriness.4 For a certain kind of employer, the LLM is so tantalizingly close to being not just a full person, as these companies understand people, but a perfected one. Here's Agre again:

Precisely through its explosive success, the Internet has disserved us, teaching us a false model of institutional change. The appropriability of layered public networks is indeed a new day in the dynamics of institutions, but it is not a revolution. It brings neither a discontinuous change nor a disembedded cyber world. It is, to the contrary, another chapter in the continuous renegotiation of social practices and relationships in the world we already have.

As Agre argues about the internet, the AI hype, were it to do what its proponents say that it'll do, would not be a "revolution" in the sense of breaking with existing social patterns; it would add a new dynamic to existing ones. Specifically, it would greatly shift power within institutions in favor of bosses. I argue that power, or the allure of power, is the foundation of the AI hype.

We have evidence that Altman, probably the single leading figure of the AI hype, understands how power works in institutions. He was president of Y Combinator, probably the most influential venture capital firm and startup incubator in tech. It is quite literally the venture capitalist's job to deploy capital to buy control of businesses and turn a profit. More recently, Altman was fired by OpenAI's board, only to outmaneuver them and return. He's clearly canny, and he's put together a compelling pitch for companies: complete control. Capitalists want control over labor the way that kings wanted gold, and Sam Altman is an alchemist promising no more complaining workers, with their annoying, incessant demands for higher wages, family leave, and even bathroom breaks.

This pitch also contains the seed of a more generalized power: If the first step in making a computer program is to make a model of the world, he offers a general purpose computer program, containing a model of the entire world to replace all human labor. This computer program, were it to exist, would allow capital to permanently neutralize labor as a political force. It promises to freeze existing social hierarchies, allowing society to perpetually reproduce itself as it is today, leaving those at the top of the hierarchy in power, completely unchallenged.

With this lens of power, consider how much simpler it is to understand news about OpenAI. For example, Sam Altman is seeking trillions of dollars in venture funding:

Sam Altman was already trying to lead the development of human-level artificial intelligence. Now he has another great ambition: raising trillions of dollars to reshape the global semiconductor industry.

The Wall Street Journal frames the funding as necessary for AI. After all, Altman can only deliver AGI if the world's hardware manufacturing and energy capacities increase exponentially. This story is riddled with confusing problems: Can one man really do all that? Aren't computer manufacturing supply chains very complicated? Isn't there some sort of climate crisis or something? But stripped of this veneer, this is the simple story of a powerful alchemist trying to convince a literal king (the United Arab Emirates is one of his potential funders) to become his patron.

This story of power accounts for Altman et al.'s behavior, but there's obviously more to it. Dozens of new papers and articles about AI were published on Nature.com in the last 24 hours alone. Clearly, these are not all willing conspirators hoping to help put Altman in power, so we must now ask a question that I've so far avoided: What even is AI?

Critics have spilt much ink on the terminology here, trying to distinguish LLMs from AI from AGI, marketing speak from fields of science, etc., but the AI hype continues to evade critics' attempts at precise language because it is not a real event. Just because alchemists often practiced real chemistry doesn't make alchemy real. AI is an idea that began as a subfield of computer science, until it was so distorted that it popped, detaching itself from reality. Now, this orphaned concept has taken on a life of its own, as our discourse about AI eclipses any meaningful definition of it as a real, definable thing.

Philosopher Jean Baudrillard calls this "hyperreality." He opens Simulacra and Simulation with a reference to Borges's very short story "Del rigor en la ciencia" ("On Exactitude in Science"), in which cartographers, in their zeal for exactness, made a map the same size as the empire, only for later generations to disregard the map, leaving both the map and the empire beneath it to decay. In The Gulf War Did Not Take Place, Baudrillard puts forth my personal favorite exploration of the concept: He argues that something happened out there, in the desert, but it is not the "Gulf War" as we understand it, seen mostly from TV screens telling stories of surgical strikes and liberated people. It was an atrocity, in which the world's most powerful empire slaughtered many innocents, but the "Gulf War," the concept, as understood by those watching on TV, didn't happen. It's a simulacrum.

Likewise, AI, as Altman tells it, is not taking place. Something is most certainly happening. LLMs are very real and very impressive. There is an avalanche of investment into them, and into other adjacent AI research. The servers that power them are most certainly using real energy, putting real greenhouse gases into the atmosphere, and using real water for cooling. But the fear of the AGI takeover, the "tsunami coming for the world's jobs", the Effective Altruism community's panic, and so on are all discourse about a concept of AI long detached from AI in any technical sense. This discourse, discourse about the discourse, ideas from that discourse, etc. have become the main event, completely divorced from reality as such.

Still, it is no less serious. Just because something is a simulacrum doesn't mean that it doesn't matter. Here's Baudrillard, writing in 1991 with, in retrospect, incredible foresight (apologies for the long quote. Baudrillard has a certain way of writing that's hard to condense, but, in my opinion, very fun to read [you can just smell the cigarette and coffee through the page]):

That said, the consequences of what did not take place may be as substantial as those of an historical event. The hypothesis would be that, in the case of the Gulf War as in the case of the events in Eastern Europe, we are no longer dealing with "historical events" but with places of collapse. Eastern Europe saw the collapse of communism, the construction of which had indeed been an historic event, borne by a vision of the world and a utopia. By contrast, its collapse is borne by nothing and bears nothing, but only opens onto a confused desert left vacant by the retreat of history and immediately invaded by its refuse.

The Gulf War is also a place of collapse, a virtual and meticulous operation which leaves the same impression of a nonevent where the military confrontation fell short and where no political power proved itself. The collapse of Iraq and stupefaction of the Arab world are the consequences of a confrontation which did not take place and which undoubtedly never could take place. But this non-war in the form of a victory also consecrates the Western political collapse throughout the Middle East, incapable even of eliminating Saddam and of imagining or imposing anything apart from this new desert and police order called world order.

[...] Why are we content to inflict a perfect semblance of military defeat upon him in exchange for a perfect semblance of victory for the Americans? This ignominious remounting of Saddam, replacing him in the saddle after his clown act at the head of the holy war, clearly shows that on all sides the war is considered not to have taken place. [...] What is worse is that these dead still serve as an alibi for those who do not want to have been excited for nothing, nor to have been had for nothing: at least the dead would prove that this war was indeed a war and not a shameful and pointless hoax, a programmed and melodramatic version of what was the drama of war (Marx once spoke of this second, melodramatic version of a primary event). But we can rest assured that the next soap opera in this genre will enjoy an even fresher and more joyful credulity.

For us software developers, it's our job to learn to see the world through our own symbolic representations of it, since that's what code is. We're prone to fits of hyperreality. When Altman et al. present their version of AI, motivated by what I argue is pure self-interest and buttressed by the self-interest of the many other capitalists (and monarchs) who want to believe that such power is possible, they find fertile ground among their employees and industry. We have trained ourselves for our entire careers to rigidly codify and adopt the models of the world put forth by the institutions for which we write code. Recall our example of the dealership's inventory management system, and with that in mind, here's Agre again:

Systems analysis does not exactly analyze a domain [...] Rather, it analyzes a discourse for talking [emphasis in the original] about a domain. [...] But the substance of the work is symbolic. Computer people are ontologists, and their work consists of stretching whatever discourse they find upon the ontological grid that is provided by their particular design methodology [...] The discourse is taken apart down to its most primitive elements. Nouns are gathered in one corner, verbs in another corner, and so on, and then the elements are cleaned up and reassembled to create the code. In this way, the structure of ideas in the original domain discourse is thoroughly mapped onto the workings of the computational artifact. The artifact will not capture the entire meaning of the original discourse, and will distort many aspects of the meaning that it does capture, but the relationship between discourse and artifact is systematic nonetheless.

[...] The discourses with which computer science wrestles are part of society. They are embedded in social processes, and they are both media and objects of controversy. [...] This is the great naivete of computer science: by imagining itself to operate on domains rather than on discourses about domains, it renders itself incapable of seeing the discourses themselves, or the social controversies that pull those discourses in contradictory directions.

Consider LLMs, the beating heart5 of the AI hype. They are fluent but not knowledgeable. Though speaking fluently often coincides with speaking knowledgeably, neither guarantees the other. This conflation is at the heart of much of the flawed AI research that we've already discussed, but seen through Agre's argument here, it takes on new significance. Companies are training LLMs on all the data that they can find, but this data is not the world; it is discourse about the world.6 The rank-and-file developers at these companies, in their naivete, do not see that distinction. They instead see the first general-purpose tool that comes with an "ontological grid" (as Agre calls it) that can coherently fit the entire world. This tool can interface with the world just as we developers do, since we too only ever output symbols, be it human language, computer language, diagrams, etc. So, as these LLMs become increasingly but asymptotically fluent, tantalizingly close to accuracy but ultimately incomplete, developers complain that they are short on data. They have their general-purpose computer program, and if they only had the entire world in data form to shove into it, then it would be complete.
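To see how fluency can come apart from knowledge, consider a toy bigram generator. This is a deliberately crude illustration, not how LLMs actually work: it reproduces the statistical texture of its training text while containing no model of the world at all:

    import random
    from collections import defaultdict

    # "Train" on a tiny corpus: for each word, record the words that followed it.
    corpus = (
        "the market is efficient and the market is rational "
        "and the workers are replaceable and the workers are costs"
    ).split()

    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def generate(start: str, length: int = 10) -> str:
        """Emit fluent-sounding sequences with no knowledge behind them."""
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))  # e.g. "the market is rational and the workers are costs"

Scale the corpus up to the whole internet and the statistics up to billions of parameters, and the output becomes vastly more fluent, but the training data is still discourse about the world, not the world.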

This will never happen. It will be forever just around the corner, because the AI hype is millenarian, even going so far as to contain literal apocalyptic prophecy. Goalposts will forever move: if only they had more data, or more energy, or more hardware. The meanings of words in previous predictions will be fudged, then squeezed until they've been drained of all sense, only then to be discarded and replaced with new words in a new media cycle to keep the story forever alive and constantly changing.

A year ago, Musk and various other weirdos signed an open letter calling for a pause on AI, citing their existential fears for humanity. When it was released, it seemed to consume all the oxygen. Today, the letter and the Future of Life Institute, the group behind it, have largely faded from the conversation, but it doesn't matter. The point was never their work, but the panic that their work generated. There will be more letters, more warnings, more institutes, more think tanks, more press releases, and more demos to generate more and more discourse because, as Agre warns us, technologies themselves are not revolutionary. A new technology "will only change anything if it is aligned to some degree with real forces operating within the institution." The AI hype involves some of the world's most powerful people, many of them billionaires, with the power, incentive, and, in the case of Altman, political savvy to fill the world with the stories that suit them. Left unchallenged, the hype, as untethered from reality as it may at times seem to be, can still increase their very real power and wealth.


1. There's a lot of argument about whether or not software engineering is "engineering." I don't intend to enter this debate in general, but I will say that anyone who thinks that you can blame reality for your design flaw fundamentally misunderstands the assignment.

2. In my research for this post, when I googled "Why is the AI hype happening?", the first four results were ads for various pro-AI news services, followed by many links with titles like "Is the AI hype dying down?", "The AI hype bubble is deflating", and the delightfully credulous question "Is AI just a hype or is it truly an extraordinary technology?", which I'm sure Quora handled with aplomb. It's as if the tech industry's ability to self-reflect transubstantiated into a web page.

3. In other words, a human-like, general-purpose intelligence that can outperform (or at least match) humans in wide-ranging tasks.

4. As we were editing this post, OpenAI released a new version, GPT-4o. While this analysis still holds, the new release certainly adds a concerning dimension to it. ChatGPT now speaks with a flirty voice, and, in their demo for the release, they asked it to help them calm down and to tell them a bedtime story. This post is already too long to fully engage with this new sexy mommy/secretary dimension, which probably deserves its own post.

5. Or perhaps the blabbering mouth.

6. This is itself a simplification, as LLMs are not trained on language as such, but a reduced, tokenized version of language. See here for further reading.
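As a toy illustration of that reduction, here's a naive stand-in for a subword tokenizer. This is hypothetical and greatly simplified; real tokenizers (e.g., byte-pair encoding) learn their fragment vocabularies from data rather than hardcoding them:

    # Map a tiny, fixed vocabulary of word fragments to integer IDs.
    VOCAB = {"un": 0, "believ": 1, "able": 2, "<unk>": 3}

    def toy_tokenize(word: str) -> list[int]:
        """Greedily match the longest known fragment at each position."""
        ids = []
        i = 0
        while i < len(word):
            for j in range(len(word), i, -1):
                if word[i:j] in VOCAB:
                    ids.append(VOCAB[word[i:j]])
                    i = j
                    break
            else:  # no fragment matched: emit the unknown token and move on
                ids.append(VOCAB["<unk>"])
                i += 1
        return ids

    print(toy_tokenize("unbelievable"))  # [0, 1, 2]

The model never sees "unbelievable"; it sees the integer sequence [0, 1, 2]. Language goes in one end, and a lossy, reduced encoding comes out the other.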


An enormous thank you to my dear friend, Jessamyn West, for introducing me to Agre's work. She recommends, unsurprisingly, checking out the rest of Agre's work. You can also find more from her here.