
The Luddite

An Anticapitalist Tech Blog


Emergent Aikido
October 2023
By Michael Verrenkamp
[Image: a golden clock with the words "Time for a Guest Post" written on its face.]

Guest posts are not substantially edited by nor do they necessarily represent the views of The Luddite, though we do exercise discretion in which we publish. See here for how to submit one.


The wrong person with the right means, means the right means work the wrong way.

When I wrote Rise of the Banal and The Snake Eats Itself, I assumed they would be the end of my thoughts on AI systems. For all the impact these systems can have on the world, most angles of analysis have been covered by others. But with further time comes further reflection, not just on the technology but on the background against which these systems exist: the problem of complex systems. You cannot talk about something without implying the background it exists in, but sometimes you need to focus on the background itself to make your point.

There is much talk at the moment about the possibility of AGI (Artificial General Intelligence): an AI that is not only as smart as us but can potentially outdo us, set its own goals, and improve itself to meet those goals. It is all very interesting in a philosophical sense, but it is not something I fear happening for a long time. If, and that is a very big if, we can ever reach this point of technology, I feel it is many decades away. That I am even referring to this stuff as AI is being very charitable, when it is just another buzzword to replace the old buzzword, machine learning - meaning that the processes are learned from data, rather than the processes being designed and the data made to fit them.

But many are heavily invested in this, particularly monetarily, and they are getting very good at pushing the narrative that they are building said AGI, that they are the gods of these technologies. The story is that they are on the verge of this threshold and that, if this is true, they can control their creation. Even if this is never explicitly said, almost all of it is a total fabrication.

This talk looks a lot like dramatic theatre trying to pump share prices. All talk, no walk. If you have been around the block a few times, you are not shocked.

But there is a little sliver of unspoken truth in the push of AI technology and its potential power. The problem isn't only the technology, but the world into which we deploy it. AI systems do not need to be able to do everything to be a real threat to the structure of complex systems. Implementations with unintentionally malicious goals can do more than enough damage through the unforeseen blowback of throwing these things into fragile, complex environments. Implementations with goals that are simply too broad can do a lot of damage.

I must stress the unintentional part. While AI systems can be made malicious, and yes, there is a risk of this, that is not the thing we should fear most. The greater fear is that we will make a system with nothing but good intentions, and it will function exactly as designed and still cause damage on the way to its goals.

We have unintentionally built a complex society that is now dependent on computers to function, rather than having computers as a complementary technology. Resilience is what we trade for efficiency.

An economic system designed to grow at any cost, regardless of the inherent limits of space, becomes more fragile by its very nature. Rather than achieving genuine growth, efficiencies are used to manufacture it. This is why you don't need the most complex of technologies to cause significant failures. Failure is an emergent property of complexity.

Enter Aikido. Aikido is the Japanese martial art of passive fighting. An oxymoron, but it makes sense. When a larger opponent throws their weight into a punch, the goal of Aikido is to guide that punch into the nearest wall, or to cause the opponent to unbalance themselves and fall to the ground. It is to use the more powerful individual's strength against them. David and Goliath in action. What looks like a one-sided battle at the outset can be tipped in the weaker side's favour by simple yet seemingly unanticipated destabilising manoeuvres.

This is what has happened to Western society with the 'services-based economy'. In offshoring a large chunk of physical work and manufacturing to other countries, we eventually hand power over to the countries that provide the goods. In that sense, they can win over the larger player as it undermines its own skills. They can move upwards without a single act of direct violence. We have seen this leveraged for all manner of political power over the last few decades, in both good and bad ways, and it will only get more extreme with time, but it is a good example of how a system's own weight can be used against it. When other countries do this to us, it is done merely to better themselves - it is not an evil plot to destroy the West!

In an attack situation, this is known as Asymmetric Warfare.

In a more extreme real-world example, Osama bin Laden had a large set of grievances against the Western world for interfering in the Middle East in various ways, and figured the best way to get back at it was to drag it into a multi-decade, unwinnable war. By publicly attacking the centres of power in the US, he used the national pride that the US and its allies had been riding on through many decades of wealth and progress to fuel the desire for revenge and the subsequent war. By accelerating the US's use of its military scale, he could bleed the country faster. The funny thing is that it looks like it worked: Afghanistan remains the graveyard of empires, and trillions of dollars of military spending and countless lives were lost for little gain.

Osama's actions were calculated, pure evil, with very clear intention, but they demonstrate perfectly how you can use a nation's own weight against it.

And the more relevant, global example is that this seems to be a feature of nature itself. Take the Covid virus: incredibly simple and tiny compared with the cells it exploits, many orders of magnitude smaller than the people it resides in. And yet it could bring down a global trade system because of that system's fragility. The result comes from the same fundamental mechanism; one attack was intentional, the other biological.

You can apply this model to many things: a single spark causes the fire that burns down a countryside; a cable failure at a power plant can bring a whole city to its knees. The chain is only as strong as its weakest link.

The issue in these examples is not that there are failures - these will happen - it is that the systems are large and provide a large amount of fuel for failure. We put all manner of protections in place to try to minimise the occurrence of failures, and for all the impact Covid had on our global supply chain, the system proved a bit more resilient than the worst pessimists expected. Covid was a disappointment to both pessimists and optimists. But the key to this kind of resilience is that we need to understand the variables and properties of the situation and how to ensure safety. You cannot build an effective defence against what you do not know.

A big part of this risk to complex global systems comes from the current economic incentive to consistently move as fast as possible. Those that ship first, by not considering long-term problems, are more likely to win in the short term than those that stop to consider blowback. Those that stall can be late to the market and risk being left in the dust. Because we praise and reward those that get to market quickest, the incentive is for reckless deployment regardless of externalities.

This is how the current social media giants came to be: by pushing services designed to engage people as heavily as possible, without any care for how this would impact society long term, they became the winners at the expense of social cohesion. If there was a social media company that tried to anticipate these social issues, I do not know of it, because it either never managed to launch due to being too cautious or was crushed by the race to the bottom. When the base motive is profit at all costs, those with the least ethical plays benefit first, so long as they can convince people that they are not doing enough harm to bring action against the groups doing the damage.

This is the intersection of my two previous essays on AI and its future in our world. What happens when you release AI systems, without proper checks and balances, into an environment that nobody truly understands, with goals that are vague and blind to said environment, feeding back on itself? With we the people using these systems to expand the complexity of an already large, complex, and fragile system, without a care in the world for possible blowback. As you can already tell, my take is a little pessimistic.

It isn't because of any higher digital intelligence we have created; it is because we think we control the intent of computer systems, when there is no way of anticipating the emergent behaviours of what we put into the world. In the world at large, there is only so much one can do to direct their use and overall impact - beyond that, they can cause flux in weird and odd ways that few could predict.

In that sense, this is unintended asymmetric warfare against ourselves. An emergent behaviour that nobody intended.

Briefly, on emergence, as it is a key concept for the point here: it is a fascinating field of study. It is the study of how seemingly simple structures and rules can form incredibly large and complicated systems. The cells in your body are individually following very simple rules, but combined with lots of other cells, all following their own simple rules, they can turn into a person. Individually simple; en masse, complicated.
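To make that concrete, here is a minimal sketch of Conway's Game of Life, the classic toy model of emergence (my illustration, not anything from the essays above): each cell follows one trivial rule about its neighbours, yet the grid produces gliders, oscillators, and other structures nobody wrote into the code.

```python
from collections import Counter

def step(live_cells):
    """Advance one generation of Life over a set of (x, y) live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # The entire rule set: born with 3 neighbours, survive with 2 or 3.
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# Five cells forming a "glider": run it and the shape walks diagonally
# across the grid forever, though no rule above says anything about walking.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    print(sorted(glider))
    glider = step(glider)
```

Nothing in those few lines mentions movement, yet a shape travels across the grid anyway. The behaviour belongs to the system, not to any individual rule.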

Similarly with the people who work at social media companies. I have spoken to many folks over the years who work in places like Google, Facebook, Twitter, and many more I could mention. They are all nice, well-meaning people who are individually chasing very simple goals with the intention of making the world a better place. But the emergent behaviour is a monstrous surveillance system. It isn't by design; it happens by emergence. Such is the bizarre state of intent: intent should be considered, but the outcome in the world can be more important, depending on the situation. To be clear, the initial intent was to make social media; once the rush for capital came in, it was exploited for wealth.

Every time something is produced with the intention of good, the inverse can be achieved by simply changing the intent. A TV set can be used as a tool for education, or as a tool of propaganda. The technology is the same; the goal is changed. But goals and the environment they operate in feed back on each other. In setting a goal, the environment is changed; the environment, in turn, can change the goals. There is no way to separate this, only to minimise it.

There are many articles I would like to write but do not, because there is too much risk that they could be interpreted as supporting some nasty things - regardless of my intent. And those that rush in, well, they will get theirs published where I will not. This is especially prominent in US right-wing media nowadays, which can pump out material en masse with little forethought about long-term impacts. Sometimes it is best to say nothing.

A major component of this is that we do not inherently understand the environment we are launching into. This is not a fault of nature but a fault of our hubris: the base assumption that we can know it all and control it.

A good example of this comes from the 1970s, when the ecologist George Van Dyne tried building an extensive simulation of an ecosystem using as much real-world data as his team could gather, cataloguing as much information as possible about all life in a one-mile-square field. The goal was to demonstrate that we could model the stability of natural systems. But no matter how much data they used, they could never stabilise the system. The more data that was poured in, the less reliable the model became. The reason is a combination of our assuming that ecosystems are stable, which is at best an intermittent state, and that we could collect enough data to simulate these systems. The idea seems to have been both naive and an exercise in folly.
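For a feel of why such models resist stabilisation, here is a deliberately crude sketch - a textbook predator-prey (Lotka-Volterra) model, my own illustration rather than anything resembling Van Dyne's actual simulation - in which nudging the starting prey population by one percent sends the two runs drifting apart over time:

```python
def simulate(prey, predators, dt=0.001, steps=50_000):
    """Euler-integrate dx/dt = x - 0.1xy, dy/dt = -1.5y + 0.075xy."""
    for _ in range(steps):
        d_prey = prey * (1.0 - 0.1 * predators)
        d_pred = predators * (-1.5 + 0.075 * prey)
        prey += dt * d_prey
        predators += dt * d_pred
    return round(prey, 2), round(predators, 2)

# Two runs whose starting prey populations differ by one percent end up
# out of phase with each other: small measurement errors in the field
# become large disagreements in the forecast.
print(simulate(10.0, 10.0))
print(simulate(10.1, 10.0))
```

And this is a two-species model with four parameters. A one-mile-square field holds thousands of interacting species, each measured with error.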

A fascinating property of larger models is that the more we generalise, the better they tend to work; this is probably why MIT's 1972 Limits to Growth/World3 model seems to be tracking in line with global data. This is great for learning, but for day-to-day predictions it is useless. The same goes for climate models: they work well long term but cannot predict specific weather patterns. And it is the specifics we must worry about.

It is one thing to deploy a model in a simulation as a learning and prediction tool; it is another to deploy it into our economic and work systems, whose total workings we understand even less.

We have seen what happens with this kind of thinking before, where a large data set is pushed into the real world without thought for the actual realities it exists in. Part of the 2008 credit crisis was fuelled by the software systems that produced mortgage-backed securities in the US banking system. With billions of data points, these systems were able to topple the global economy in 2008 by optimising for a single metric: stabilising individual packages at the cost of the larger system. They did this by bundling good and great credit loans together with high-risk loans, thereby hiding the risk of those high-risk loans going under. What the models did not anticipate was a sharp rise in oil prices and the follow-on resource squeeze, which, in the context of the free market, led prices of goods to climb and toppled the high-risk loans. The economic impacts are still being felt to this day. A system made fragile was suddenly hit with something simple but unanticipated.
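As a toy illustration of the mechanism (mine, not the actual banking software), consider a bundle of 100 loans. If defaults are assumed independent, the bundle almost never loses more than a tenth of its value; add a shared shock - say, an oil-price spike - that occasionally raises every loan's default risk at once, and the 'safe' bundle blows up regularly:

```python
import random

def blowup_rate(n_loans=100, p_default=0.05, p_shock=0.0, trials=10_000):
    """Share of simulations in which over 10% of the bundle defaults."""
    blowups = 0
    for _ in range(trials):
        # A shared stress event makes every loan in the bundle
        # five times more likely to fail at the same time.
        p = p_default * 5 if random.random() < p_shock else p_default
        defaults = sum(random.random() < p for _ in range(n_loans))
        if defaults > n_loans * 0.10:
            blowups += 1
    return blowups / trials

print(blowup_rate(p_shock=0.0))  # independent defaults: blow-ups are rare (~1%)
print(blowup_rate(p_shock=0.2))  # shared shocks: blow-ups roughly 1 in 5
```

The bundle looks diversified on paper until the shared variable arrives; the risk was never gone, only hidden in the assumption of independence.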

These systems, and a lot of what we depend on daily, are somewhat fragile because of both these unknowable inputs and the scale of operation. It says something that the longest-lived societies, people, and businesses are those that just muddle along, neither shooting too high nor fading into obscurity.

And so we are moving into the realm of AI being deployed all over the place with little to no thought for the overall consequences. What we think of as the solution to our problems could actually be the thing that topples the system.

From a technology standpoint, we cannot even get spell checking right all the time - and that is a knowable, constrained goal. How can we trust anything more complex, something built on a wide-ranging data set that cannot be controlled in any meaningful sense?

Maybe this is why there has been such a push for mindfulness meditation, the kind that is stripped of all meaning and contemplation to put people into a state of 'no-thought'. It is what happens when you strip Hinduism, Buddhism, and Zen of everything for the sake of keeping people docile. Don't think about the consequences, relax, and just do it! "Always be hustling!" And yet what we need nowadays is more thinking, not less. To have intent and direction, rather than just pushing for the next thing regardless of the pain it causes.

It seems that every time a new computer system is deployed, it is done with the assumption that we will just figure out the problems once they show up - constantly putting out spot fires rather than anticipating issues, even though the problems we encounter are a fundamental reality of our world. Those that act first and most recklessly are the most likely to gain first-mover advantage. Our short-term goals are misaligned with our long-term goals, and they work against each other.

Just look at the political space to see this in action. Long-term goals of sustainability are generally shunned in favour of short-term needs. This is why those pushing a long-term sustainable narrative can also be seen as pushing for short-term pain, even if their goals are reasonable. The opposite is also true: favouring short-term goals is seen as trashing the long term. This is where you see a lot of division in the world, and in a way both sides are right and wrong - it is all a matter of perspective.

All these elements need to be considered when we deploy these AI systems into the real world.

The fragility of complex systems; emergent behaviours; the ability of the small to overthrow the large; the intent, and inverted intent, of people. A society solving for the wrong variable.

The core problem is what I stated at the beginning: we are rewarding the fastest and most aggressive people and solving for a world of more, not better. We are solving for the wrong variable.

What if, as a society, we solved for the long-term goal of a sustainable world that makes more people, and the greater ecosystem around us, happier? It is all well and good to think that we can treat technologies as neutral and that they can be intrinsically guided towards our own goals. But there is an implicit state that comes from the form of a technology, and it dictates part of the means by which the technology will be used. A hammer cannot be used as a hat, no matter how much we wish it to be. This is one of the central points of Marshall McLuhan's 'the medium is the message'.

While there can be large parts of society that can wrangle these things for the gain of everyone, it needs to be done on a global scale, so as to prevent the possibility of a power mismatch that leads to exploitation.

Some folks would rather hand off personal interactions to a computer than engage in the messy, gritty nature of conversation - dealing with each other as a problem to overcome rather than as people.

There is a deeper, fundamental issue in how technology is consistently used as a means to advance profit over actual human wants and needs.

Steve Wozniak, co-founder of Apple, was once asked why he was so excited about the home computer when it was "merely a toy". His answer, essentially, was: "Yeah, it is a toy. It is a brilliant, wonderful toy. There is nothing wrong with that. Toys are fun, and not everything has to be done for the sake of business or some greater goal."

NOTE: I cannot find the original quotation for this, as it was an interview from the mid-90s.

But in a way, that is the kind of perspective and joy that has been eliminated from modern technology. Remove joy, insert capital.

This goes down to the fundamental drive of some people: why do so many strive in this capital race? I don't mean this in the 'market good!' caveman-Manosphere sense we see nowadays; I mean that there is this drive in people to go beyond 'enough'.

For all the issues of excess we face, why is it that people tend towards taking as much as they can get, even if it does them no good long term and may be fatal to most of our civilisation on a longer scale?

Perhaps the single thing that has made us so successful over other species, in terms of controlling the material world, is an inherent state of sadness and wanting in most people. In some sense this is what the Buddhists mean when they say 'life is pain'. There is this grabbing, this clinging to something more, that needs to be fed straight away, ahead of the greater long-term goals.

It is the single most powerful force we have ever come across: asymmetric power. The manifestation of this hole in the physical world.

Short-term gains of any kind are unbalancing the world in every respect.

We have tipped the balance of the world on an asymmetric basis for our own gain, by going against the factors that used to confine us. Life used to be confined by the mutual, universal pressures of competition and evolution, but because we figured out, emotionally and intellectually, how to go beyond them, we started to move far past the sustainable into a mode of conquest. Something that, on a long enough scale, looks fatal. Just like a cancer cell.

That is to say, we manipulated the scales of time to favour ourselves in the short term over everything else. We developed tools to outdo the pace of evolution and gain the upper hand; thus we have made 100 million years' worth of evolutionary progress, in terms of our physical gain, in less than 10,000 years - and the rest of the world is being demolished because it cannot change on this scale. Maybe this means we will live on this planet for 200,000 years, whereas other species have had runs of 200 million years - they never took the path of asymmetric short-term goals.

This is why we burn coal today at the cost of tomorrow; why we implement technology for immediate gratification regardless of how many lives it destroys down the line; why con men can throw out all manner of confident statements long before anyone else can refute them with actual study and factual rebuttal. This asymmetric balance is what has made us so successful on the way up, but it makes the path down quick and disastrous. When we see technology that is neat in itself knowingly promoted for the most awful uses for immediate capital gain, it is a path of pain most cannot yet comprehend.

But it does not need to be like this. That there is an inherent hole in people is not the sole issue - that hole, of sorts, is normal - it is that it goes ignored. Unacknowledged. Original sin, ignored. This goes for individual lives as well as countries.

When one is not aware of this fault, one is likely to fill it in weird ways or be blindsided by it. So people have this hole, and they fill it with money, fame, TV, food, sex, religions of various manners, online battles of various shapes, and so on. This is a common theme in the occult space: the question of whether one's religion is necessarily true. Some know it isn't empirically true, but it is better to believe something than nothing, as that void will otherwise be stuffed full of odd stuff. Look at the more highly-strung parts of the modern atheist movement: it is starting to take on almost religious values and points of view, but those deep in it do not see it.

When we see these billionaires flying to space, or nowadays being crushed under the ocean, they are trying to fill this gap in a most baroque manner - pushing for another ego boost via an AI development and the like. They are some of the most fragile and neurotic people I have ever met. They do not admit to the hole in them; they cannot, for they are playing a far-out ego game that is on a collision course with the limits of the planet itself. The truly happy do not seek this.

At best, they get a little exuberance that fills the hole for a while, and then it fades away, drains out, and the hunger resumes. Like a giant plutocratic monster that can never be fed to contentment. To some degree, most people are like this on a much smaller scale. If you scratch the itch of desire, in the long term it only gets worse. A big part of this is others exploiting people to go down this path: convincing people that, yes, they should be like Sisyphus and push the stone endlessly up the hill. When really you should let go and let it roll down the hill - that can be cool.

If you are aware of a flaw in advance, it can be handled with intent. Look to Japan as an excellent example of this in action. They are astoundingly good at long-term planning - look at their infrastructure and how it handles earthquakes - but when something like Covid came along, they were not prepared to handle it well and thus lagged on a lot of prevention measures. This could be our future with AI systems: we are reasonably good at things when they act within our goals, but if even small things change, we are unprepared for what could happen. To be prepared is inefficient, and our current economic system despises inefficient things.

William T. Vollmann has a two-book series called Carbon Ideologies. The first book, 'No Immediate Danger', is all about the failings of nuclear power, in the context of the Fukushima Daiichi power plant and the lead-up to its failure. The base summary is that, to save money, they didn't do the safer thing and move the backup power up onto the local hills. The books are a broader warning that this is the nature of all corporations: to take shortcuts wherever possible for short-term gains. Vollmann's analysis seems spot on.

Friendship, love, and family: those are things that can fill the gap in a fashion that relies not on exploiting the external world but on cooperating with it. To realise the 'you' in others and the others in you. It is an anaesthetic to the wanting of the hole.

I say this because there is nothing wrong with the hole in people. It is the monkey in the man, the element of chaos that makes us, well... us. But if you know it is there, you can live with it, rest with it, and simply figure out: I am flawed just like everyone else, and that is where the magic is. This is the chaotic state that adds flavour to life. You can work on getting rid of these faults - chances are you will not get rid of a single one - but what you gain is that you will be at ease with them, and see them as just that little thing over there that needs to be addressed. Like a bird in the corner of the room: 'Hey, that over there is my anxiety. It comes and it goes.' 'That other bird is my jealousy of others. But it is just because I want to be involved with others.'

There is no issue with the hole itself; it is at the core of the Japanese term 'mono no aware' - the feeling that everything is fleeting, that life is but transitory. The issue arises when we give in to the most selfish of these desires as a means of oppressing others for short-term gains.

I talk about the issues with people and the nature of people because this is the background against which we implement technology. AI technology is potentially benign; it can be a cool toy or even a reasonable tool to use. While it is just the nature of many people to ignore any kind of introspection on their faults, it is that ignorance which allows said faults to become worldly issues. In not realising the asymmetric nature of our technology, the society in which the technology resides, and the nature of the people that make up the society that uses it, we become most likely to make the mistakes that take us down a dark path. The real way to win at Aikido is to never throw the punch.


Michael Verrenkamp is based in Melbourne, Australia. A long-time follower of computer technology. Former tech utopian, now disappointed idealist. Free/libre software enthusiast. With luck, has avoided working directly in computer technology for the last decade. Spends their spare time living life in the real world, enjoying the slack that the simpler life provides.