
The Luddite

An Anticapitalist Tech Blog


Effective Altruism: Should We Pause AI?
April 2023
[Image: The justice scales, balanced, overlaid by a psychedelic pattern.]

The Future of Life Institute recently wrote an open letter calling "on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." This letter, signed by noted moron Elon Musk, is a perfect example of everything we at The Luddite despise. It is, most charitably, a piece of propaganda that aggrandizes tech and its creators, in which they claim to wield a power that could accidentally destroy all of humanity, but also urge themselves to wield it responsibly. This is your drunk friend, all hat and no cattle, yelling "hold me back bro," except covered by the New York Times. It is patronizing bullshit written by fundamentally unserious people who are either so delusionally hubristic that they believe themselves, or so cynical that they would concern-troll us to undermine the serious discussions about the place of technology in our world.

The Future of Life Institute is an effective altruism organization. The arrest of Sam Bankman-Fried triggered some mainstream conversation about effective altruism, but the idea has been bumping around the tech world for some time. Effective altruism is, much like Sam Bankman-Fried, a hollow and self-serving fraud, perpetuated by people who are often pretty smart but think they are much smarter than they actually are. At its core, it advocates that we should make rational, evidence-based decisions to do the most good we can.

Short term, this often leads to the conclusion that the best thing we can do in life is to find the highest-paying job we can and maximize the amount of money we can donate to charities, which themselves should be carefully scrutinized for how rational and evidence-based they are. Long term, this leads to what they call "longtermism," in which, they argue, because the future is long and contains so many people, the long-term needs of humanity far outweigh its short-term ones.

There is something fundamentally appealing about this way of thinking. It justifies our high-paying tech jobs, converting moral problems into financial ones without forcing us to interrogate the system in which we make that money. Focusing on a far-off future allows us to participate in broken systems without the burden of recognizing the harm we do. We overlook today’s struggles as small in the grand scheme of things. We comfort ourselves with the money we donate to The Future of Life Institute, which protects the trillions of human beings who have yet to be born.

To justify these comforting absurdities, effective altruists use the complexity of human existence as a cudgel. They insist on using empirical evidence to guide their morality, but because our lives are immeasurably complex, this can be quite hard. The result is a little rhetorical trick, in which the measurable wins.

Here is a classic EA thought experiment, provided by Peter Singer:

"It costs about 40,000 dollars to train a guide dog and train the recipient so that the guide dog can be an effective help to a blind person. It costs somewhere between 20 and 50 dollars to cure a blind person in a developing country if they have trachoma. So you do the sums, and you get something like that. You could provide one guide dog for one blind American, or you could cure between 400 and 2,000 people of blindness. I think it's clear what's the better thing to do."

EA is about being rational and doing the most good with every act, so this thought experiment should lead us to the conclusion that it is irrational to train guide dogs when you could cure so many blind people in a developing country.
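Just to spell out the sums Singer is gesturing at, taking the quote’s own figures at face value (one $40,000 guide dog versus cures at $20 to $50 each):

\[
\frac{\$40{,}000}{\$50\ \text{per cure}} = 800 \quad\text{and}\quad \frac{\$40{,}000}{\$20\ \text{per cure}} = 2{,}000
\]

That is one guide dog or roughly 800 to 2,000 cures, on the order of the "between 400 and 2,000" he cites. A single division stands in for the entire moral question: this is the rhetorical trick described above, in which the measurable wins.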

This sort of "scientific" morality is inherently (small "c") conservative. Anyone who has ever designed a science experiment knows that to collect data, you must change exactly one variable and keep all others the same. To generate meaningful data, you must necessarily leave everything else intact. Large-scale political change is therefore, by its very nature, immeasurable. Radical options are necessarily excluded if, as a requirement, options must have empirical grounds. In other words, instead of effective altruists deciding whether we should cure blind people in developing countries or train one guide dog, the blind people and their friends could team up and take the effective altruists' stuff by whatever means necessary, then decide among themselves how to allocate the resources. Is that a measurably better outcome? How many blind people cured is a French Revolution? How many Medicares For All is an Emancipation Proclamation?

These are obviously absurd questions, and once you start scratching, you notice just how immeasurably complex Singer’s seemingly simple accounting question really is, and how much those hidden nuances tell us about the real problem, which I argue is, among other things, Singer himself. The question takes for granted the legacy of colonialism, which created the very distinction between developed and developing countries. It assumes that US dollars – green pieces of paper printed when a bank buys a treasury bond and then puts it in a special reserve account, whose value comes from being redeemable for that same treasury bond – can be exchanged for a surgery, and that this is normal and good and should remain unchanged.

For Singer’s thought experiment to make sense, and for the effects of our decisions to even be measurable, we must necessarily leave most of the world intact.

This is what I hate so much about effective altruism. In there somewhere is a meaningful discussion about how we as individuals can make a difference with our limited resources, but these effective altruists are the very people who do not need to have that discussion. They are some of the richest people on Earth. Before we get to budgeting how many guide dogs we should train versus how many cases of trachoma we should cure, or how to best safeguard the unborn trillions, perhaps we should ask why they get to make that budget, a point exemplified by Singer’s choice of venue. That quote, without a hint of irony, comes from a fucking TED talk, in which the world’s most tedious and self-aggrandizing douchebags come together to pat themselves on the back about what fucking luminaries they all are. They are the problem.

This fundamentally conservative and patronizing aspect of tech psychology is crucial to understanding not just this letter, but the tech industry in general. This is why OpenAI, again without irony, has increasingly deemed its work "too dangerous" to release to the public. The nonprofit, which was founded with the financial backing of Musk, Thiel, and other such weirdos, began as an organization dedicated to advancing machine learning for the public good, releasing its work open source. Unsurprisingly, they have since converted into a for-profit company, once again forcing us to wonder whether their claims of GPT-2 being "too dangerous" were cynically self-serving, or whether they are simply high on their own supply.

I don’t care what they believe. It doesn’t matter, because the results are the same. It is a key part of tech mythmaking. When OpenAI talks about how worried they are about technology, they are just telling us how fucking smart they think they are, and why they should be allowed to make the decisions that shape our lives. When the New York Times writes an article about ChatGPT titled "Did Artificial Intelligence Just Get Too Smart?", it is simply perpetuating this. The tech industry is owned and operated by and for capital. Perpetuating these myths about how smart these capitalists are justifies capitalist control of technology. This is why Elon Musk, a notorious idiot, is involved with and financially backs The Future of Life Institute. He thinks that he and people like him should be the ones shaping the entire future of humanity, which brings us back to their letter:

"Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that 'At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.' We agree. That point is now."

The rest of the letter babbles on about bad things AI could do and things we could do about it – all without realizing how incredibly close it comes to asking the question that matters. Here lies the irony of this letter. Elon Musk, the world’s second-richest human being, has signed a letter with the phrase "[s]uch decisions must not be delegated to unelected tech leaders," even though he frequently talks about how he wants to build a city on Mars through SpaceX, a corporation he owns. In fact, just a few short weeks before this letter was written, Musk unveiled his Master Plan for Tesla. Investors had expected him to reveal a new car, but Musk instead revealed that Tesla would create "the path to a fully sustainable energy future for Earth." He purchased Twitter last year to, he claimed, "preserve free speech." These are all things that shouldn’t be up to an unelected tech leader.

These decisions should not be made by tech leaders because we should not have tech leaders. This is why we are Luddites. When "machines automate away all our jobs, including the fulfilling ones," we ask who benefits. If we do not benefit, if instead we are robbed of our means to survive, then we should destroy the machines. It really is that simple. We embrace the technologies that liberate us and seek to destroy those that oppress us. This has almost nothing to do with the technology itself; it usually comes down to who owns the machines. It is clear that the people who own the machines, people like Musk, do not understand that it is not the job itself we seek, nor the technology itself we fear. He cannot imagine a world in which he doesn’t get to own companies and we don’t have to work for them, and by extension for him. It is up to us to force him and those like him to imagine alternatives.