
The Luddite

An Anticapitalist Tech Blog


The Anti-Intellectualism of Social Media Design
February 2025
[Image: Against a purple gradient background, darkening from the bottom right to the top left, there's a pie in a pie pan. The crust has four holes, like those cardboard cutouts at, say, amusement parks, where you stick your face in and take a picture and it's your face on whoever's body. Four faces stick out of the holes, looking pretty expressionless, maybe a bit confused. It's hard to know.]

Before we get started, a note on current events: If you live in the United States, now is the time to get organized. I don't mean joining a discord or your favorite leftist's patreon. Participate in something, ideally something with a formal structure that deliberates on its program and elects its leaders. It doesn't matter which one, and all the organizations doing good work in your area probably know each other anyway. If you don't know where to start, get in touch, and I will do my best to connect you to someone. I myself am a longtime and active member of Democratic Socialists of America. It is by no means perfect, but you will never find the perfect organization, and, in the meantime, you'll miss out on doing good work with good people.


Many studies seem to discover and rediscover1 that The Algorithm, in its various incarnations, has a right-wing bias.2 These studies tend to discuss their findings in faux-neutral language, like "filter bubbles" or "partisanship." Thinking of politics as differences of degree along a one-dimensional spectrum is, unsurprisingly, a self-limiting and unproductive way of examining this phenomenon. Instead, I want to compare the substance of the right to the design of social media and, in a way, to computation itself.

At its most stripped down, social media is a cyclical, three-step process:

  1. Users create data by using the app.
  2. Companies use that data to build profiles on each user.
  3. The Algorithm uses those profiles to serve targeted ads and recommendations to users.
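The three steps above form a loop, which can be sketched in a few lines. This is a deliberately toy illustration, not any platform's actual pipeline; all names and the counting-based "profile" are invented for clarity.

```python
# A minimal sketch of the create-data -> profile -> recommend feedback loop.
# Real systems are vastly more complex; this only shows the cycle's shape.

def run_cycle(user_actions, profile):
    """One pass through the three-step loop."""
    # 1. Users create data by using the app.
    events = [{"action": a} for a in user_actions]

    # 2. The company folds new events into the user's profile.
    for event in events:
        topic = event["action"]
        profile[topic] = profile.get(topic, 0) + 1

    # 3. "The Algorithm" serves whatever the profile predicts best,
    #    which in turn shapes the next round of data creation.
    recommendation = max(profile, key=profile.get)
    return profile, recommendation

profile = {}
profile, rec = run_cycle(["gardening", "gardening", "guns"], profile)
```

Even this caricature shows the self-reinforcing structure: whatever the profile already contains determines what gets served next, which determines what data gets created next.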

This first step, data-creation, is a time-consuming process. This is a general point, not particular to social media. For data to mean anything, it must result from a material process, and those tend to be distributed in time and/or space. Computation was, until recently, also a time-consuming process, but computers have upset this relative balance, making data-creation a bottleneck. So long as computation remains useful (more on that soon), people will want more data to use it on. They become concerned with getting more detail out of data-creation and prolonging the amount of time that we spend doing it. To use the more common terms, they attempt to increase surveillance and engagement.3

Once companies have the data, they use all this computation to classify users every which way they can: race, religion, hobbies, class, interests, age, political leanings, fashion style, and so on. They then set out to learn and exploit the various relationships between these categories. Here, we can already partially explain another common finding, the reports that brand new accounts, without any interaction from the user, are soon inundated with "sexism and misogyny" or some other undesirable far-right content. Learning algorithms generally work by making a hypothesis, then updating it with new information. The first hypothesis of a brand new profile is almost certainly its demographics, based on what little information the user provides, plus any initial tracking information, maybe location, operating system, browser, etc. If all The Algorithm "knows" about someone is that he's a man, then it's going to show man things.
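The cold-start behavior described above can be made concrete with a toy sketch. The content pools and field names here are entirely invented; the point is only that, absent any behavioral data, the demographic guess is the whole hypothesis.

```python
# Hypothetical illustration of a brand-new account's "first hypothesis":
# with zero interactions, coarse demographics are the only signal.

# Invented content pools keyed by a demographic guess.
CONTENT_BY_DEMOGRAPHIC = {
    ("man", "18-24"): ["fitness", "gaming", "manosphere"],
    ("woman", "18-24"): ["beauty", "fitness", "tradwife"],
}

def initial_feed(signup_info, fallback=("man", "18-24")):
    """Serve content to a user we know nothing about beyond signup data."""
    key = (signup_info.get("gender"), signup_info.get("age_bracket"))
    # No behavioral data yet, so the demographic prior is everything.
    return CONTENT_BY_DEMOGRAPHIC.get(key, CONTENT_BY_DEMOGRAPHIC[fallback])

feed = initial_feed({"gender": "man", "age_bracket": "18-24"})
```

If all the system "knows" is that the account belongs to a man, the initial feed is man things, before the user has clicked on anything at all.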

This explains the targeting, but to explain its negative valence, we must consider what happens beyond the initial guess. Learning systems aren't useful if they don't converge on something. Otherwise, they produce noise. They want to predict behavior, but they also need to focus on predictable behavior, maybe even encourage it. The manosphere targets men while simultaneously encouraging them to identify more strongly with being men. It is both stable and stabilizing. There is no parallel on the left here. Political beliefs themselves are relatively stable, but nowhere near as stable as core identity traits, like gender or race, and encouraging those to stay as stable as possible is a conservative project by definition. The right's politics contains essentialist understandings of race, gender, class, etc. Reifying the historical classifications of people is, at least partially, a shared project of both the right and The Algorithm.

In reality, race, as with all these categories, is a social construct, not a biological reality. We're forced to repeat this often because many people disagree. Some do so in bad faith, but others, I think, are genuinely confused. They might point to certain statistics, like how Black people are much more likely to have Sickle Cell Disease, and conclude that race is "true" in the most biological and essential sense. The error here is treating predictive power as proof of biological essence: social constructions can still be useful. That's why we made them in the first place, for good or ill.

All categories are arbitrary. This is a property of classification itself: If two things have everything in common, they are the same thing. Categorization is a lossy process in which we choose to focus on salient features while disregarding others. This discretion brings with it arbitrariness, but the continuous use of classification systems has a naturalizing effect. We've all struggled to find something in someone else's kitchen, but our own kitchen's organization is second nature. When it comes to data, computation is how we use it. As we do more computations with it, we find more relationships inside it, and its organizational scheme too becomes second nature. The inherent arbitrariness becomes difficult to remember, in the same way that an ancient Roman transported to the present would have to learn to distinguish our modern races, while we today must actively remind ourselves that they're social constructs. Computation, used in this way, reifies categories.
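The lossiness of categorization is easy to demonstrate directly. In this invented example, two people with nothing in common beyond the "salient" fields collapse onto the same label the moment we decide which features count.

```python
# Categorization is lossy: distinct people become "the same" once we
# choose which features are salient. All field names are invented.

def categorize(person):
    """Project a rich record down to a few chosen 'salient' features."""
    return (person["gender"], person["age_bracket"])

alice = {"gender": "woman", "age_bracket": "25-34", "loves": "astronomy"}
bea   = {"gender": "woman", "age_bracket": "25-34", "loves": "welding"}

# Everything not chosen as salient is discarded by the projection.
same = categorize(alice) == categorize(bea)
```

Which features get to be salient is a choice, and that choice is exactly where the arbitrariness lives, however natural the resulting scheme comes to feel.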

Social media exploits this feature of computation. Users create the data, and the companies do a staggering amount of computation, the likes of which were unimaginable until the last couple decades, to uncover the relationships between the (constructed) categories. The Algorithm then feeds the results back into the process of data creation, beginning the cycle anew. This process makes its own sense. Around gender alone, there's a proliferation of identity-based categories, like the manosphere, but also tradwives, incels, femcels, gooners, misogynist gamers, and so on. It's not that The Algorithm invented these kinds of people, but they have now entered culture as identifiable categories, brought together and amplified by the pattern-finding abilities of the computer.

We can see this mechanism not just in the proliferation of identity-adjacent genres of content, but also in the "identitizing" of non-identity content genres. It's the difference between videos about gardening and videos by a vaccine-skeptical micro-homesteading mom (another weirdly specific and now identifiable category of person). This content isn't just about what we do, but who we are, since any activity-based content is more productive when split into its identity-based components: If a masculine man wants to learn about gardening, he might avoid our aforementioned mommy-blogger channel and instead watch one by a gun-toting, muscular prepper decked out in American flag paraphernalia. Both will teach you to grow the same potatoes, but wanting to grow potatoes isn't a great predictor of what you'll watch or buy. If The Algorithm latched onto such passing fancies, it would be erratic and noisy. It's more useful to further segment the potato-curious until you've reached something more core to their identity, a better foundation for stable and useful predictions. The more that someone thinks about being a man, the more manly things he'll probably want to watch and buy. Similarly, even if a man isn't interested in growing potatoes, he might still enjoy seeing our prepper's compound throughout the process, or check out his other videos about guns or lifted trucks.

It's well outside the scope of this post to explain why identity-based content is compelling. For our purposes, I submit the endless discourse on diverse representation in media, both in favor and against, as proof that seeing ourselves in this way obviously matters to us. Beyond that, however, it's important to note that much of this new kind of content is engaging in its own right. It is personal and intimate, often broaching topics that are difficult to discuss in other contexts, because of stigma or taboo, in a way that is long overdue. This isn't a question of people being duped into liking something bad. There are endless creative people making clever, hilarious, personable, and thoughtful things. Instead, it's that, in the infinite space of possibility that this self-evident creativity could explore, this specific genre is a naturally occurring resonance of personalized recommendation. Ultimately, it shouldn't be too surprising that this resonance also happens to synchronize with the right's political project, given that many of the same capitalists are behind both.

If the right advances a constraining essentialism, then our work is to tear down these (and all) barriers to human liberation. This means critically examining, for example, race and gender, but also pushing beyond, noting and politicizing the hitherto unnoticed structures and forces that shape our lives. This is an idea repeated throughout the leftist literature in various forms. Here's a famous passage from Marx saying as much:

[I]t is all the more clear what we have to accomplish at present: I am referring to ruthless criticism of all that exists, ruthless both in the sense of not being afraid of the results it arrives at and in the sense of being just as little afraid of conflict with the powers that be.

Here's Adorno and Horkheimer, articulated in a way that I really like:

Intellect's true concern is a negation of reification.

We're all intuitively familiar with this, and we have various ways of talking about it: We encourage people to "think outside the box," or be "unconventional"; to "break the mold" while going "against the grain," which we contrast with "coloring within the lines." Often, the best ideas in a meeting are the ones that shake off the unstated assumption, or take notice of the previously unacknowledged framework. If we accept that the purpose of intellect is to challenge what we've come to accept, then much of the tech industry is, in a very literal sense, anti-intellectual. If intellect breaks the spell by which the arbitrariness of our social order seems natural and inescapable, social media is designed to trap us in a narrow and essentialist understanding of ourselves. The more stable and self-reinforcing the content stream, the better. The ideal world, from the perspective of The Algorithm, is one where all people are perfectly categorized and unchanging, sorted into the constructed communities of people who are, in some essential way, the same.


Thanks again to Geoffrey C. Bowker and Susan Leigh Star's book Sorting Things Out: Classification and Its Consequences, which greatly influenced this and other posts.


1. This paper is bad. Among its numerous problems, each of which individually ought to have been disqualifying, it considers socialism to be similarly problematic as white supremacy, and its methodology for rating the ideological "slant" of videos makes absolutely no sense on many levels.

2. Setting aside that, in some instances, billionaires buy social media companies and do this on purpose.

3. The effects of this simple relationship go far beyond social media. Many if not most papers today do a battery of statistical tests. This was not the case 50 years ago. The authors then dedicate a section of their paper to describing these tests in multiple paragraphs of unreadable prose. Every single reader either skips this section or reads it and gets almost nothing out of it. In a mostly harmless version of this phenomenon, the (over)abundance of computation produces intellectual flotsam. In a more insidious form, as we've discussed, it's used to launder corporate interests, ideological priors, and/or a profound void of insight through what has become the aesthetic of science.