Every computer can do anything that any computer can do. They are uniquely general-purpose tools. This makes them pretty nifty because we don't need to make one machine to look at cat pictures and another to calculate asteroid trajectories, but this lack of constraint comes at a cost. For a simple demonstration of this cost, open your phone's settings menu and scroll through the many configuration options, sometimes buried many layers deep, and see if you can spot more than a handful that you care about; then consider how many pieces of software you use, each with its own menu structures, and how often you struggle to find the right option.
This complexity is compounded by computers' obvious utility. Each of us has so many things that we wish our computer would do, but there are billions of us, and we want different, sometimes conflicting things. Since we don't all want to become computer experts, we ask companies to write our software for us. This lets us trade some of that versatility for convenience, a necessary trade for such versatile machines, but it also means that we allow companies to decide what our computers will do.
These companies are, of course, going to try to add all the features that they think they can sell, but that doesn't cause the problem; it just exacerbates it. In reality, if a software team had the earnest desire to make every single user happy by implementing every single feature request, motivated not by profit but entirely by love for their users, they would create a user experience of forbidding complexity, no matter how useful each individual feature would be in its own right.
Even implementing a minute fraction of all possible features creates too many options when compared to a basic human constraint: We can only do (roughly) one thing at a time. So, when we use a computer, we have to navigate through the many things that our computers can do to find the one thing that we're looking for. These little things add up fast on the worldwide network of interconnected computers that has embedded itself in our daily lives, creating labyrinthine user experiences. Each path through the labyrinth is justifiable, even desirable, but we spend a lot of time navigating its sheer size all the same.
When we, as users or developers, identify tasks that computers could make more efficient, we're often correct, but it's precisely because computers are so useful in so many places that we ought to exercise collective discipline in deciding when and where to use them, lest we become buried in complexity. We generally opt for the complexity, which brings us to one of the recurring tensions of our digital world: Computers are good at reducing administrative tasks, but we find ourselves on our computers doing more and more computerized administrative work.1
There's also a second, more subtle consequence, of which Google is the most obvious example. They've indexed the entire web to make it searchable, organizing the otherwise unstructured chaos, but this puts them in a position of power, up to and including influence over the structure and content of individual websites. Google is an extreme and well-documented example, but this source of power usually runs deeper and looks more banal.
Returning to the example of the phone's settings menu, the settings aren't all listed in one flat list, but are instead nested into categories. This kind of categorization constrains and subdivides the space of all possible settings, which is otherwise much too large for a human to traverse. For example, it's a lot easier to change my ringtone because I can find the top-level menu category "Notifications" than it would be if all settings were simply listed in alphabetical order. This introduces some arbitrariness that the user must learn, because it requires creating a taxonomy, and the different categories in a taxonomy will always be at least somewhat arbitrary. Part of the user experience is learning the names of these abstract categories that developers make for us.
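To make that tradeoff concrete, here's a minimal sketch, with hypothetical setting names and categories that aren't taken from any real phone, contrasting a flat, alphabetized list of settings with the same settings nested into a taxonomy:

```python
# A hypothetical settings menu, sketched two ways.
flat = sorted([
    "airplane_mode", "bluetooth", "brightness", "ringtone",
    "vibration", "wallpaper", "wifi",
])

nested = {
    "Connections":   ["airplane_mode", "bluetooth", "wifi"],
    "Notifications": ["ringtone", "vibration"],
    "Display":       ["brightness", "wallpaper"],
}

# In the flat list, finding "ringtone" means scanning every entry in
# alphabetical order; in the taxonomy, you scan a handful of categories and
# then one short sublist -- but only if you already know (or have learned)
# that ringtones live under "Notifications".
category = next(name for name, items in nested.items() if "ringtone" in items)
print(category)  # -> Notifications
```

The taxonomy shrinks the search, but the category names themselves ("Notifications" rather than, say, "Sounds") are exactly the kind of arbitrary choice the user has to internalize.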
This can be difficult at first, but, eventually, we learn them, and they become natural. For a settings menu, this is innocuous enough, but the computerized world is full of these abstractions, starting at much lower levels than settings menu options, and amalgamating to form larger ones. Every single thing that we do on a computer is created, named, and organized by developers. Toolbars, window minimization, pinch-to-zoom, and tabs are recent fabrications that became naturalized parts of our everyday life exceedingly quickly, which we then subdivide and build upon: Both left- and right-clicks are clicks, but we distinguish between them because it's a useful convention.
To choose a particularly big and important example, we distinguish between phones and desktop computers. Recall that every computer is capable of doing what any computer can do, which makes this at least somewhat arbitrary, but arbitrariness is a necessary part of making and organizing. This particular distinction has some very good reasons: Mobile devices are smaller, don't have a mouse, and so on, but those things can't explain every single difference between mobile and desktop computing. Some of those are simply decisions made, for whatever reasons, by the people designing them.
The average user takes for granted that they will use a platform's website on their desktop but its app on mobile. With very limited exceptions, whatever runs in a native app could just as easily run in the browser, and even some of those exceptions are themselves arbitrary, because Apple purposefully sabotages browsers on iOS. In the case of mobile app stores, Apple and Google are leveraging the power inherent in defining and organizing the user experience to walk away with 30% of all in-app revenue.
It's such an extreme example that even capitalist governments have begun to take issue with it, but there is no regulatory apparatus that can truly fix this problem, because the power to define a piece of software's abstractions is innate to creating the software. It can't be abrogated because it simply isn't possible for a developer to consult other people for every single decision that has to get made. Here's Philip Agre:
At any given moment a given programmer is probably contemplating a fairly small bit of code whose purpose can perhaps only be expressed in terms far removed from the average user’s concerns. It might be, for example, the second subcase of the termination condition of a subsystem that performs various housekeeping functions so that all the other subsystems will have access to particular resources when they need them. Perhaps it is not working quite right, or perhaps it is unclear how to taxonomize all the logically possible situations that might arise at that point. The very fact that the programmer understands this bit of code is in itself an enormous imaginative gulf between programmer and user. Such detailed logical issues are at the center of the programmer’s world; they are the stuff of day-to-day work. A programming project of any size generates thousands of small puzzles of this type, each of which must be solved consistently with all the others. And the puzzles matter, since the whole program has to work right for anybody to get any work done later.
Regulators that wish to curtail tech companies' power to define how we think about computers would need to be part of some exceedingly diligent and imaginative bureaucracy. We see this playing out today in how we discuss the power of tech platforms: We argue over how they should behave, but never interrogate the concept of the platform itself, even though it, too, is arbitrary. Likewise, regulators are looking at the monopolistic practice of the app stores, but they aren't questioning the existence of app stores, or the arbitrary distinctions between mobile and desktop computing. Even if, by some miracle, they did do that, how would they enforce it? They could force tech companies to reorganize things, but those companies would necessarily respond by making new abstractions, re-anchoring consumer understanding and the regulatory process however they choose, beginning the cycle anew. They would do this in bad faith, but bad faith only exacerbates a dynamic that would exist even under less hostile conditions.
I've so far discussed this power as a problem to be addressed, but, from the perspective of companies, this is often the primary appeal of being a tech company: They can launder illegal business models through all this complexity and abstraction to make the otherwise-obviously-illegal business illegible to regulation. Taxis that ignore existing regulations become "ride sharing." Illegal hotels become "short-term homestays." Mass copyright violation2 becomes "training AI."
Obviously, all that is not to say that there shouldn't be regulation, but the regulatory regime must find some way to see through this magic trick. I'm often unmotivated to consider "fixes" to regulation under capitalism, because regulation under capitalism is broken by design, but, as I've tried to argue, this problem extends beyond our existing system: It's a general problem of accountability and computers. Any society that relies on computerization must find some accountability mechanism for this.
I argue that we should ration computation itself. As the carbon footprint of "the cloud" surpasses that of flying, there's already a strong environmental argument for some sort of rationing, but the argument I'm making here is separate. In order to ensure social accountability, firms, be they the tech companies of capitalism or whatever post-capitalist form firms eventually take, must plead their case for the computational resources they want before they launch their product.
Focusing on computation avoids the otherwise-inevitable ontological whack-a-mole of "regulating tech," which can mean regulating app stores, unlicensed taxis, mass surveillance, movie studios, warehousing, web search, and gray-market hotels, just to name a few. As I was editing this post, MIT Technology Review published an article titled "There are more than 120 AI bills in Congress right now," which describes "bills that are as varied as the definitions of AI itself":
One aims to improve knowledge of AI in public schools, while another is pushing for model developers to disclose what copyrighted material they use in their training. Three deal with mitigating AI robocalls, while two address biological risks from AI. There’s even a bill that prohibits AI from launching a nuke on its own.
It's no surprise that the US Congress has such a hard time with this. As we've discussed, even serious people struggle to nail down a definition of AI. Poorly defined though it may be, that's just one of so many things that fall under the general umbrella of tech.
Any harm that the tech industry does requires a lot of computation, so any such rationing could easily distinguish between personal and commercial applications, or even between small and big business, while simultaneously addressing the only thing that actually ties all these otherwise-disparate businesses together. In other words, if we want to regulate tech, we should regulate the tech, not the infinite downstream complexity that companies can use the tech to generate.
While this won't be a panacea, many of the tech industry's abuses can be distilled to a simple principle: Tech companies use eons of compute time to find more and more ways to squeeze users. Mass surveillance requires massive data warehouses to store and process all that data; algorithmic recommendations depend on mass surveillance, as do the gig economy companies that deploy sophisticated (i.e. predatory) pricing algorithms, and both require constantly running and re-running personalized calculations for every user. The AI hype consumes eye-watering amounts of computation, but companies wouldn't be so quick to bolt an LLM or a tracker onto every product if doing so took away from their ability to do business-critical computing.
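To see why per-user personalization adds up, here's a back-of-envelope sketch; every number in it is a hypothetical placeholder chosen purely for illustration, not a figure from any actual company:

```python
# Rough scaling sketch: personalized recommendation compute grows with the
# product of users, refreshes, and work per refresh. All numbers below are
# hypothetical placeholders, chosen only to show the multiplication.
users = 100_000_000                # hypothetical active users
refreshes_per_user_per_day = 20    # hypothetical feed loads per user per day
ops_per_refresh = 1_000_000        # hypothetical model operations per personalized ranking

daily_ops = users * refreshes_per_user_per_day * ops_per_refresh
print(f"{daily_ops:.1e} operations per day")  # 2.0e+15 with these placeholders
```

The point isn't the particular total; it's that the same calculation is repeated for every user, every time, which is exactly the kind of always-on commercial workload that rationing could distinguish from personal use.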
More salient to our discussion, the process itself would ensure that tech companies are exposed to public scrutiny before they're given a chance to exercise their special ontological power. Had Uber's business been exposed to public scrutiny before they took over, it would've been easier to see them for what they truly are. Now, after the fact, people already rely on them for both convenience and their livelihood, clouding our collective judgement. Because the power of these business models grows as people's lives reorganize around them, a reactive regulatory posture always finds itself at a disadvantage.
During the height of pandemic lockdowns, many higher-risk people depended on Instacart. These kinds of arguments, that people with disabilities rely on these services, have become a staple of the gig-economy debate, and for good reason: Now that Instacart exists, people with disabilities do depend on it, and those people deserve to have the services that they need. But Instacart, as it exists today, built up from the many arbitrary decisions that it has made, isn't the only way to implement food delivery. Had Instacart been forced to plead its case before embedding itself in our world, with details on how it would pay its delivery workers, it would've been clear that its business model is predatory. Instead, we find workers pitted against disability advocates in manufactured solidarity with companies hiding in plain sight under layers of abstractions.
Besides the need to check the power of firms, there's also that most basic question, one that average people never get to answer today: Given such a general-purpose machine, what do we actually want it to do? What are computers for? My phone is several orders of magnitude more powerful than the computers aboard Apollo 11, but is it several orders of magnitude better? There's no objective measure for that. We get to decide! Each new application of computers is the result of individual and collective choices, not part of some linear and inevitable technological development of society. The general-purpose nature of computers is itself proof that there is no predetermined technological path, and each decision that we make comes with an opportunity cost of countless others.
Today, our digital world is a bloated mess of unconscionable complexity for both developers3 and users. Unrestricted access to excessive computational resources is a necessary (but probably not sufficient) condition for making and maintaining this mess. Maybe rationing is necessary not just for more accountable software, but for creating the conditions necessary to rethink our relationship with computers at a more fundamental, social level.
Geoffrey C. Bowker and Susan Leigh Star's book Sorting Things Out: Classification and Its Consequences greatly influenced this post. I really liked it. I was also influenced by Ellen Russell and her talk "Democratic Economic Planning and the Problem of Economic Expertise." I'll try to track down a recorded or written version of that talk and update this. Finally, this post is, in part, an example of l'esprit de l'escalier. A few months ago, I was interviewed for a video about tech regulation. After thinking about it for a while, this is what I wish I had said. Thanks to Ben for getting me thinking about it.
1. Philosophers and theologians have long considered what they call the problem of evil: If there is a single deity, and it is good, then why is the world full of pain and suffering? I like to think of the proliferation of computerized admin as the tech bro's equivalent: If computers reduce admin, then why are we awash in computerized administrative tasks?
2. But also, copyright sucks. I make this argument purely from the perspective that copyright exists and is current law, not from any moral stance in favor of copyright. The case against a company doing something harmful to society need not depend on existing laws. If we take democracy seriously, it should be sufficient to argue that we simply don't want to live in the world that they're creating without invoking the bullshit that is intellectual property law.
3. I assume that readers, as internet users, already know this, but from the developers' side, this is a frequent topic of discussion in trade publications and developer forums. Some examples:
- https://stackoverflow.blog/2023/12/25/is-software-getting-worse/
- https://news.ycombinator.com/item?id=30160282
- https://www.reddit.com/r/programming/comments/1an4l4l/why_bloat_is_still_softwares_biggest/
- https://www.reddit.com/r/programming/comments/v5l1nz/complexity_is_killing_software_developers/