

The Snake Eats Itself
May 2023
By Michael Verrenkamp
[Image: a golden clock with "Time for a Guest Post" written on its face.]

Guest posts are not substantially edited by The Luddite, nor do they necessarily represent its views, though we do exercise discretion in what we publish. See here for how to submit one.


Don't get high on your own supply

- Unknown

When someone uses the term prophecy, it is usually taken to mean a clear vision of exactly how the future will play out. One reason for this is that when someone makes a prediction and, many years later, it turns out exactly as stated, it can appear to be a form of magic in the traditional sense. In truth, it is survivorship bias at work: we remember the predictions that were right and discard the incorrect ones.

The much more logical and rational concept of prophecy is the art of seeing the paths that have led to where we are, and then tracing them into the future. In that sense, the prophet is a lens through which the present is just the point where past transitions into future. The purpose is to provide a warning, a foreboding of what could happen and how to avoid it. It is like comedic satire. Satire points out the absurdity of someone's actions through humour; if one is really good at it, even the target of the joke will acknowledge that they have done something silly and attempt to change their ways. That is how the jesters of the old courts were the only ones who could speak truth to power. Truth wrapped in a joke is like a medicine made sweet and palatable. What I provide here is a little sprinkle of prophecy about the future of LLM AIs.

In my previous essay, Rise of the Banal, the theme was how GPT/LLM chat bots have the potential to corrupt our sense of authority and of self-worth. The flip side is the bizarre issue that these bots, through their users' own hubris, could corrupt the entire system they are built on. It is more likely than many assume, and it will catch a lot of people off guard as the output of these systems becomes more technically competent yet less useful as time goes on. We are doing something really amazingly stupid. These programs and machines will produce a satire of themselves and single-handedly show everyone what a silly situation we are all in.

The big question is how much damage this will cause in the meantime. As much as something can eventually be seen as silly in retrospect, the damage along the way can be disastrous. Many made the Nazis look like absolute idiots in the finest comedic fashion well before their end - but that was far, far too late to save millions from perishing.

It is said that what allows capitalism to be so productive is its complete refusal to acknowledge externalities: the nasty stuff businesses produce but can ignore, because the costs are socialised out to everyone else. The most obvious example is atmospheric CO2; almost all manufacturing businesses treat the atmosphere as an open-air sewer with little regard for the long-term side effects. This applies to almost all industrial output. I say almost, as there could be an outlier, but I have yet to find it. I'm an optimist like that! Estimates vary wildly, but it is thought that everything would cost between two and five times as much if externalities were taken into account. And when that kind of costing is introduced into the current race-to-the-bottom mentality of free-market capitalism, it is dead before it even gets a chance. The system as it stands today has no room for real ethics or morals.

To that, we have now added a bold new text generation system. It is trained on a large amount of data scraped from various realms of the internet, and it merely remixes other people's works into something technically competent but creatively bland. I have mentioned before that these systems are content creation and mixing on an industrial scale; they now have the same externality problem to deal with, and very few are taking it seriously. It is the same issue as global warming in a different context. People are getting boatloads of text (and images, for that matter) generated, and it is all being dumped onto the open internet with little to no indication of its origin. What a wondrous time we live in: we have made a digital floating garbage patch!

The issue is very clear: these systems, in the quest to increase their capability, are going to start sucking in ever larger volumes of text with no context about what is being fed in. These models turn out bland stuff already, but now they will start to learn from that blandness. Could these things turn into an exponentially bland wasteland? Dull feeding into dull. Monoculture of the worst kind!
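To make the loop concrete, here is a toy simulation in Python. It is a minimal sketch under a big simplifying assumption: the "model" is just a Gaussian fit to one number per document, and, like a chat bot sampling its most probable text, it favours its most typical outputs. Nothing here resembles a real training pipeline; it only shows how variety drains out of the data when each generation trains on the last one's output.

```python
# Toy model-collapse loop: train on data, generate new "content",
# scrape that content back as the next training set, repeat.
# All numbers are illustrative assumptions, not real LLM behaviour.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=10_000)  # generation 0: human-made data

for generation in range(1, 11):
    mu, sigma = data.mean(), data.std()      # "train" on the current data
    samples = rng.normal(mu, sigma, 10_000)  # "generate" new content
    # The web fills up with the model's most typical output; the tails
    # (the unusual, interesting stuff) never make it into the next scrape.
    data = samples[np.abs(samples - mu) < 2 * sigma]
    print(f"generation {generation:2d}: variety (std) = {data.std():.3f}")
```

Run it and the spread shrinks steadily; after ten rounds, roughly 70 percent of the original variety is gone. Dull feeding into dull, in a dozen lines.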

The comparison to low-background steel, steel made before the atom bomb, is not an original one, but it is a valid analogy. For those who are not aware, the medical sector has a real need for steel produced before 1945. After the first atomic bombs were detonated, radionuclides spread around the world and have tainted all steel produced since. In sensitive equipment this is a big issue: the very material the equipment is made of throws off the measurements it was designed to take. It is a strange side effect that few could have anticipated at the time. By the same token, how can we trust any text model trained on data from after mid-2022? The reality is that we probably cannot, or should at least trust it less - if we can trust them at all. And it will only get worse as we "progress". Usually externalities take decades or centuries to catch up; the initially slow but now accelerating warming of the planet is a testament to that. I suspect we have produced a system that will face these issues in years. Computers have accelerated not only information but also the side effects. Go us!

These systems need new data fed in consistently, partly in the quest for better outputs and partly because language changes quicker than we cognitively recognise. Fields such as machine translation have shown that data sets even a year or two old can be a detriment to the overall output. Day to day, our language is changing all the time, but so subtly and gradually that it is difficult to notice; it is only when we look back a few years that it becomes apparent. Language is the ultimate democracy. Made by the people, for the people.

This could be a key component in the undoing of these kinds of technologies. They look impressive at first, but having been let out into the wild with little consideration of potential feedback, innovating on them has become far more difficult than many would like to admit. The struggle to ensure a pure data input will become an anchor on future progress on these models. But those who draw a weekly paycheck from these things will not say the quiet part out loud.

It is said that anxiety is the self feeding back on itself: one goes around in a loop, never reaching a solid resolution, like a microphone held too close to a speaker. If one can break that pattern, one can be relieved of the hangup. Easier said than done - much, much easier said than done, even for simple things. And this state is the future of machine learning: becoming ever more digitally anxious and, if (a big if) we are aware of it, trying to break the loop. It is a possibility, and one that needs to be considered before it gets out of hand. These things are running loose in the world today, not in some laboratory experiment or internal research project. It is machine-based anxiety wandering the world without a handler.

Feedback can be useful when you are trying to understand something you are interested in, but as with everything, moderation is a good idea. Taken to the extreme, feedback is the fuel that creates echo chambers and the weird social and political flux we have seen over the last few decades. No matter what you believe, it is wise to at least look at alternative points of view and see the world from another perspective. That doesn't mean you should adopt them, but challenging your beliefs is a smart path to take: maybe you will learn something, or further confirm an idea you already hold. It is the same way that leaving a familiar place, like a long-time home, makes it feel more distinct and affecting when you return.

Feedback can also be interesting in a philosophical sense: it is the fundamental mechanism of the Mandelbrot set and of all fractals. A fractal is generated by a mathematical function that takes its output and feeds it back in as input, over and over, without end. Plotted on a graph, it can create shapes that consist of iterations of themselves stretching into infinity - a recursive structure that somehow consists of nothing but itself. It is like pressing equals twice on a calculator, which repeats the last operation on the last result; keep hammering it and you can get some interesting runaway sequences.
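For the curious, the entire mechanism fits in a few lines of Python. This is a minimal sketch (the plotting bounds and resolution are my own choices): a point c belongs to the Mandelbrot set if feeding z = z² + c back into itself never runs away.

```python
# The feedback loop behind the Mandelbrot set: the output z is fed
# straight back in as the input, forever. Membership depends on
# whether that feedback stays bounded or blows up.
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    z = 0
    for _ in range(max_iter):
        z = z * z + c      # output fed back in as input
        if abs(z) > 2:     # the feedback has run away; c is outside
            return False
    return True            # feedback stayed bounded (up to max_iter)

# A crude text rendering: every point of the plane is just the same
# tiny feedback function iterated on itself.
for row in range(21):
    line = ""
    for col in range(60):
        c = complex(-2.0 + col * 0.045, -1.1 + row * 0.11)
        line += "#" if in_mandelbrot(c) else " "
    print(line)
```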

Fractals raise a lot of questions about infinity, and about how wonderfully complex structures, made of nothing but themselves, can arise from very simple functions. From the standpoint of mathematics and the study of the nature of the universe, this is wonderful to ponder, even if few of the answers can be thoroughly grasped. In the same way, it is fascinating to watch reverberation in the world. But this is not something useful to the everyday person in terms of functionality, and it definitely is not something that should be used as the backbone of communication and business. And yet this is what so many are focusing on: building a business on a fractal - and you thought NFTs were a bad idea! At least that scam was easy to see through if you had studied the history of economic bubbles.

There is one aspect of reverberation that can be useful to think about, and that is resonance. Resonance is the state in which something vibrates in synchronisation with the forces around it. It is something engineers try to avoid in structures, as it can be fatal to them: they shake themselves to pieces under constantly increasing forces. Once resonances are identified, it is best to avoid them at all costs - but that requires people actively looking out for them. Who is looking out for AI resonance today? It is an odd question to pose, and yet in this strange timeline it might be a job of the future. AI learning resonance analysis engineer, position now open - requires 15 years of prior experience.
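For a sense of why engineers fear it, here is a minimal sketch using the textbook steady-state response of a damped, driven oscillator (all the numbers are made-up illustrations). The amplitude spikes when the driving frequency matches the structure's natural frequency - the same kind of runaway a hypothetical AI resonance watcher would be hunting for.

```python
# Steady-state amplitude of x'' + 2*z*w0*x' + w0^2*x = F*cos(w*t).
# When the driving frequency w approaches the natural frequency w0,
# the response blows up - that spike is resonance.
import math

NATURAL = 1.0   # natural frequency w0 (assumed for illustration)
DAMPING = 0.05  # light damping z, so the spike is dramatic (assumed)

def amplitude(drive: float, force: float = 1.0) -> float:
    w0, z, w = NATURAL, DAMPING, drive
    return force / math.sqrt((w0**2 - w**2) ** 2 + (2 * z * w0 * w) ** 2)

for w in (0.5, 0.8, 0.95, 1.0, 1.05, 1.2, 2.0):
    bar = "#" * round(amplitude(w))
    print(f"drive {w:4.2f}: amplitude {amplitude(w):6.2f} {bar}")
```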

We are already seeing weird examples of these sample data sets showing their origins. Again, I will use visual models to demonstrate the issue.

Ever since DALL-E 2 and Midjourney gained so much attention in the text-to-image prompt space in mid-2022, there has been a race to be the first to offer text-to-video. There have been some early examples over the last few months, and there is still a long way to go before it achieves what many are expecting. The most fascinating thing is seeing video models that happened to train on Shutterstock sample videos. We know this because they place the Shutterstock watermark over the videos they output; the model simply assumes the watermark is part of how video is formed. While we humans, visual creatures that we are, pick this up in an instant, a computer model has one task: make sense of the input data, then replicate it. And while some of the better examples of visual AI generation are already difficult to spot, when the same thing happens in text, I wonder whether people will be able to tell at all at first. It may only be in the most extreme cases of cognitive degradation that it becomes apparent just how much these systems have failed. It is similar to Alzheimer's disease: only once the mental degradation is well underway do people start to notice. We think we would know better, and yet the analogy is a decent one. Ronald Reagan as US president was showing symptoms for years before folks started to question it, and he was the most scrutinised figure of his era. I would argue that his neo-liberal policies fit the diagnosis perfectly... but I digress. *cough*

There are proposals for a sort of AI constitution requiring output to be labelled "this is generated text/image", but most folk using these systems like them precisely because they lack such a feature. The sales pitch is that the user can take credit for work not done; it lets many act exactly like most business managers: take the gains of others' toil.

It would not surprise me if people heavily invested in using these models have that same mindset. All the reward with none of the work, and they will fight to the bitter end to keep it that way. It may get nasty in that sense. When it comes to power, power doesn't corrupt; the fear of losing power corrupts. That is why those in power shouldn't have it, and those who should have it don't want it. Those who shun power are well aware of the perils of the position.

And if there is a big awakening against these systems, how long will it take to arrive? How many livelihoods will be destroyed in getting to that point? In the end, it is possible the whole exercise provides no gain to society at all. Worse, we may trash some of our societal gains well before that point. That could be a blatant allegory for modern society over the last few decades, but like I said - things move faster in the world of computers.

Have I sold you on the joys of modern AI in the hands of everyone yet? My fear isn't the one Ray Kurzweil or Elon Musk peddle, of an AI smarter than us somehow controlling us via robotics or some such - previous essays here have tackled that head on already. No, it is us people, favouring our most selfish traits, who will do the damage, all without the need for anything smarter than us. That the models are far, FAR dumber than us is the crux of the problem.

As the curse says, "May you live in interesting times" - and we most certainly do. We are stepping blindly into an unknown world, one that could fundamentally upset the way we communicate with each other en masse. It is dangerous: the risk of individuals losing any solid ground to stand on regarding their information, and of that contagion spreading into the core of modern civilisation, is very real.

With any luck this will just be another doomer prediction that misses the mark. I really hope so. But I fear that, like the others who have predicted the failings of the digital era since the 1950s, this prophecy will be treated as a fixed picture of the future rather than a warning to learn from before we commit the folly.