
The Luddite

An Anticapitalist Tech Blog


Rise of the Banal
By Michael Verrenkamp
[Image: A golden clock with the words "Time for a Guest Post" written on its face.]

Guest posts are not substantially edited by nor do they necessarily represent the views of The Luddite, though we do exercise discretion in which we publish. See here for how to submit one.


Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

-Frank Herbert (Dune)

Part 1 - Don't mistake the Menu for the Meal

With all the deployment of and talk about text-based AI systems over the last few months, there are a few thoughts that need to be addressed about how folks are reacting to and interpreting these technologies, and what they mean for the future of communications.

First, some foundations of the actual technology itself. Calling these things intelligent in any fashion is a big misstep. AI is just the latest name allocated to algorithm technology to get venture capital money, in the same way blockchain, AR, VR and Web3 did. The most charitable name you could give these systems is statistical emergent systems.

There is nothing too intelligent about the latest batch of "AI" systems being paraded around. At least not in the sense that we would consider intelligence, as a being that fundamentally understands the world around it. Let me explain why. I will focus on the new chat bots like ChatGPT and Bing Chat simply because they are the ones getting the most attention.

I will heavily simplify how they work, but this is to avoid getting tangled up in the technical details. While the specifics of the technology aren't the most important thing, they do give a groundwork to show what these things really are - compared with how they are being sold.

The technical term for these AIs is very honest, all things considered. Usually they get some bombastic marketing name, but we are clearly in Engineer-Land (TM). These are called LLMs - Large Language Models.

They use a large data set of written works scraped from the internet, consisting of about 100 billion data points. This is then processed to figure out how each word relates to another word in terms of relative occurrence. If a word or set of words comes up as a question, all the system asks is: "What is most likely to be the next word that comes after this?" It then does this again and again until it arrives at what it thinks is an answer. This is a very simplistic view of it, but it is the broad gist. There is also a lot of randomization thrown in, and various conditions to make the output read much better, but this is very broadly how they work. It is just a very large scale text predictor, like on a smart phone.
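To make that loop concrete, here is a minimal sketch in Python of a toy "what comes next?" text predictor. To be clear, this is only an illustration of the idea, and the tiny corpus here is made up for the example - the real models use enormous neural networks trained over token statistics, not a lookup table - but the generation loop is the same in spirit: count which words tend to follow which, then keep picking a likely next word.

    # A toy illustration of the "what word most likely comes next?" loop.
    # Real LLMs use huge neural networks, not a simple lookup table like
    # this, but the generation loop is the same in spirit.
    import random
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept on the mat".split()

    # Count how often each word follows each other word (relative occurrence).
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Pick a likely next word, with a bit of randomization thrown in."""
        candidates = following.get(word)
        if not candidates:
            return random.choice(corpus)
        words, counts = zip(*candidates.items())
        return random.choices(words, weights=counts, k=1)[0]

    # Generate by repeatedly asking "what is most likely to come next?".
    word = "the"
    output = [word]
    for _ in range(8):
        word = predict_next(word)
        output.append(word)

    print(" ".join(output))  # e.g. "the cat slept on the mat and the cat"

Scale that table up to a hundred billion data points, dress the output up with some extra conditioning, and you have, very broadly, the "AI" being paraded around.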

When you understand it like this, it suddenly doesn't seem so impressive in terms of the aura being projected by the tech-bro fanatics who see this as the future of everything.

In this sense it would be possible to print out the entire system of how every word relates to every other word. It would be an incredibly large volume of books to print, but it would be possible. The core thing to take away from this is that this is merely a system that tells you how one set of data relates to another set of data.

These GPT systems don't know what the words mean; they have no intelligence. They are merely shuffling words into recognizable patterns.

The results that come out of this can be very convincing, but it is a good example of digitized blind confidence. Rather than create real intelligence (if we ever can), it is a massive computer model that boils down to 'fake it til you make it'. A confident idiot. They can lie to you and just make stuff up, and they are none the wiser about it. But that is not how these are being pitched to the general public.

To anthropomorphize technology is not just technically incorrect, but morally questionable. Just because it vaguely looks like something similar to our real world doesn't mean it is that thing. No matter how much text you throw at a model, a tree as we understand it in the world - a living, growing thing - doesn't exist in these models at all. There is no context or means for it to ever get to that point. It has become a Rorschach ink blot test - many see in it what they want to see.

While these are not being pitched as general intelligence systems, there is a notable silence from companies not wanting to correct this misconception as it arises. Why would they correct something that is now part of their core business, something that can dominate users and be the next big money spinner? Of course these assumptions will never be corrected until they are coerced by force or backed into a corner.

So far, for those that have been looking at these chat bots with a critical eye, this is not really anything too revealing. It is starting to become a consistent talking point in some circles, and it is at least a good conversation starter - sometimes. There is also the tendency of the larger-ego folks to critique anyone talking the way I am about how these models work as being derivative, but so be it. You cannot please everyone.

This whole model of using relational text is interesting simply because something so simple in idea, fed a monstrous amount of data, actually works fairly well. It could be seen as a neat launch pad for discussion about our ideas of how to determine intelligence, or how language evolves, or as a weird oddity of large data models. But this point is not brought up much in the mainstream, and definitely not in the context of selling a product.

There is a lot of debate going on about whether these chat bots are possibly a primitive form of intelligence. I suspect that this discussion will look somewhat silly in the future for even considering it within the realm of possibility. The desperation for this to be real is palpable.

The whole myth behind why we are willing to fall for the idea of "AI" goes deeper in two fundamental ways.

First, how the real economy has driven this fever for the golden era of Sci-fi technology and the promises of any fixes we can summon.

Ever since the global market crash of 2008 we have been in a world where the economic stance of most people in the western world has been white-knuckled holding on for dear life, trying not to slip backwards. If you are lucky. Many have seen their living standard take a tumble. We are now in a world staring down the barrel of ecological and economic blow-back for a century of reckless consumption, which has led to a technology and business environment consisting of a calcifying technology conglomeration, floated by cheap money on the back of wild speculation. Folks not wanting to face up to the reality that we have to take responsibility for our actions and what we have collectively done are desperately trying to find any alternative, no matter how desperate it may be. When you are drowning, you will grasp for anything.

From the 1900s through the early 1970s there was a fever pitch of 'everything will get better', and the pace of technological advance was extreme. We went from candles to computers and space flight in the space of a single lifetime. During this period we were promised all the now-cliche tropes of modern sci-fi: moon bases, flying cars, robots and AI to complement them. None of them happened. Arthur C. Clarke famously predicted that intelligent AI like HAL 9000 in 2001: A Space Odyssey would be achieved by the far-off year of 2000. It didn't happen, and we are still a very long way from anything like that. But that didn't stop these ideas from getting ingrained in the cultural zeitgeist. There is still the notion that 'The Jetsons' is an inevitable outcome of technology and that anything we see that vaguely looks like it must be accepted wholeheartedly or we will lose out. If we can think it, it can happen. As the author John Michael Greer has said, 'Progress is the one true religion of our time'. But the great god of progress is dead, we just don't know it yet.

In a way a lot of people have been thrown into a state of unconscious shock, and the memories of this have come back as a driving engine for beacons of hope. Let's take the first two stages of 'the five stages of grief'. Yes, it isn't real science, but it is a useful thought experiment.

We went into the denial stage - this is the era of cheap money. But quickly fading in was the constant stream of hyper-optimists, pushing all manner of bright-sided futures as society moved thoroughly into the bargaining stage. For a rumor to fly, one must want to believe it in the first place.

One of the biggest currencies of recent times has been hope. Hope for a better future, no matter how unlikely it may be. So long as there is a vision of a better future, it is enough to keep people moving forward and not questioning the increasingly dire situations they find themselves in. It is not the most cheery prospect, but it is something I have heard from many people over the last few years.

That is - for folks to believe in Mars colonies, everlasting self-driving futures, infinite energy on the back of decentralized cryptocurrencies, and yes, totally conscious AI... we are desperately trying to believe that the party isn't over. This is why things like the broad umbrella of Ray Kurzweil's 'Singularity' have had such long legs over the last decade, regardless of just how much evidence there is against it and just how blatant some of the scams are. We are desperate for it to be 'true' even though there is little to back it up. Hope, when unwarranted, can be a very dangerous thing.

So when a computer can string some text together in an interesting fashion, this should be seen as interesting progress on a potential tool, or in a strange way a piece of art - a reflection of how languages work that is yet to be understood. Understanding language is something we have been striving to do for a very long time; it is the field that Noam Chomsky first gained his fame in.

But many are desperate to think it is true intelligence, or at least on the precipice of it happening. Not because they want it to be true, but because they need it to be. If we aren't progressing, then the illusion that makes the excess work and toil of the world bearable can shatter.

The earlier idea of this being art is an analogy that can be stretched onto my second point: how one interprets things.

When a painter paints the Moon, we the viewers understand the hidden contract that says this is not the actual Moon. It is but a representation of the Moon, one that can be interpreted in an artistic fashion. We all understand this, and those that do not could be considered insane. Yet those that are pushing these big "AI" text systems are more than happy to let us convince ourselves that they have the real thing! Or at least they don't want to correct the misconceptions.

We have been down this path before, just not on the same scale or with the general population: the idea of a chat bot that could convince people it was more intelligent than it really is. ELIZA was a chat bot invented at MIT in 1964. Its system was fairly trivial, in that it was a chat bot that could parrot back specific responses to various prompts. And yet, considering how simplistic the model was, some people became convinced that it was a thinking entity. Maybe we are just made to see intelligence in places where there is none. To be eager to see ourselves reflected back in the world around us. Not only did we build a computerized mirror test, we are completely unaware it is a mirror.
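To give a feel for just how little machinery that took, here is a minimal sketch of the ELIZA idea in Python. This is not Weizenbaum's actual script - the patterns and canned replies below are invented for illustration - but it shows the basic trick: spot a keyword pattern in the prompt and parrot part of it back.

    # A sketch of the ELIZA trick: match a keyword pattern in the prompt
    # and parrot back a canned response. The rules below are made up for
    # illustration; the original used a larger, hand-written script.
    import re

    RULES = [
        (r"i am (.*)", "Why do you say you are {0}?"),
        (r"i feel (.*)", "How long have you felt {0}?"),
        (r"my (.*)", "Tell me more about your {0}."),
        (r"because (.*)", "Is that the real reason?"),
    ]

    def eliza_reply(prompt: str) -> str:
        text = prompt.lower().strip(".!?")
        for pattern, response in RULES:
            match = re.search(pattern, text)
            if match:
                return response.format(*match.groups())
        return "Please, go on."  # default when nothing matches

    print(eliza_reply("I am worried about these chat bots."))
    # -> "Why do you say you are worried about these chat bots?"

That is the entire trick, and yet it was enough to convince some people they were talking to a thinking entity.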

Seeing people fall head over heels for this stuff and declaring the end of all manner of work is somewhat concerning. It is this belief that technology can do it all, even when it is just a series of words on a screen. To see the Moon in a painting as the actual Moon. The opposite of Zen.

There is a Hindu saying - you walk across a bridge, you do not build a home on it. In the same way, we should use technology but not become dependent on it. And so when I see folks tripping head over heels for the latest craze of "AI", almost salivating at the idea of folks' jobs and well-being getting taken away - all I see is houses on bridges.

These chat text systems can be kind of neat, and I'm sure they could be a useful tool in the right context, but it is not wise to become obsessed with the tool as the answer to everything. When that happens, it is astounding how weird the thinking can become.

There is a concept held by a portion of society that if a technological creation comes along, we must accept it as a good that cannot be denied. This is done unthinkingly. And yet, looking back over the last 30 years of the internet age, it becomes more difficult to see the progress through all the noise and mess we have created. Maybe in going fully in on the insanity we will eventually wise up to just how silly an idea it is, but how much damage will be done before we get to that point? None of us know when, or if, that will happen, but in retrospect it will seem obvious despite the feelings about it today.

We have to be aware that a lot of people see potential in things even if they can never deliver. This is nothing new; a lot of amazing, incredible technologies have been oversold before.

For example, back in the 19th century both oil and electricity were at one point seen as the solution to just about everything, from transport (that was true) to aging (not so much). As in, we could use the massive amounts of energy in these inventions to literally reverse aging. Being the latest big innovation in technology, we projected our wants and desires onto it rather than seeing it for what it actually was. It sounds stupid now, but it wasn't a completely crazy idea at the time. How much of what we do today will be seen in the same vein?

It is no surprise that we are dazzled by mere symbols. The big hangup of western society is that we treat symbols as though they are the real thing. Money is seen as the same as the food we eat. Being concerned or outraged is seen as good enough rather than real world action. That thinking makes people say "We must take what AI outputs as something to be respected". But as Alan Watts put it, "We mistake the menu for the meal". Hopefully the more people lean into these kinds of systems, the more they will realize that this is but a fancy piece of text on a screen and that we do not have to trash the real world.

It is going to be interesting to see how this plays out over the coming years. Because while we should not mistake the menu for the meal, many are - and for a good reason.

Part 2 - It is a somewhat convincing Menu

There is one concerning flip side to the issue.

That is, for as relatively simplistic a model as is being used to emulate the appearance of intelligence - at least compared with neurology-based intelligence - it is very convincing in terms of how competent it seems to be. That is potentially a problem in itself. When you apply this level of generation to what we are seeing in both the audio and image generation space, things can get weird fast. The initial knee-jerk reaction is to bring up the question of trust for anything you read, hear and soon see on the internet or in print. This is an issue we have been facing for at least the last decade. The issue now is that this can be hyper-accelerated, targeted to specific users and done without the usual skill set that would have kept it out of the hands of many. At least click bait farms have been limited by the economics of people producing the 'content'. But in making it so quick and cheap, it is going to create so much content online that we won't be able to see the signal through the noise.

The idea that entertainment, outrage and drama can all be generated en masse, far quicker and cheaper than before, essentially means the end of authenticity in the realm of information. Apply this to the world of politics and, if you are like me, you get a sinking feeling like falling into a hole and pulling the hole in behind you. Things are not only going to get weird, they are going to get very dangerous.

It is like plastic. It is a neat invention in the right context, but making it en masse for use in almost everything has become an ecological disaster. GPT models are a neat thing, but they could become the next industrial revolution: industrial-scale click bait.

Then there is the bias these systems can provide. It may not seem like a big issue initially, but having a single set of systems pumping out massive amounts of information hands the world of information dissemination over to a handful of entities. A similar thing happened with the rise of Wikipedia. Rather than becoming a tool of information liberation, it became a monopoly of knowledge that ends up creating echo chambers among those that contribute. In that, it can deny our own humanity and our differences.

There is more that unites us than divides us and we should embrace both sides of this.

The bias that comes out of these models comes from the unconscious biases of those that build these machines. Even things like digital cameras will reflect the bias of the programmers that decide what the picture should look like. The act of trying to directly rein these systems in, after they have been launched onto an unsuspecting public with little oversight, is going to be interesting to see. This is the 'move fast and break things' mentality. "Who cares about the risks! Think about the gains" - this is the same thinking that causes market crashes.

Then there is the direct manipulation to ensure it doesn't become a major issue when the model starts spitting out things that are considered taboo.

The people that made these models probably had nothing but good intentions. But they were so hell-bent on seeing if they could make computers do these things, one wonders if they ever stopped to think about the long term consequences of their actions. It is the same thinking that gave us the atom bomb; it was only after the deaths of some 129,000+ people that they started to even think about the side effects of unleashing Shiva onto the world. And yet here we have it, out in the wild and being used without a second thought. Unleashed onto an unsuspecting public like a child in a candy store. Convincing others of positions that were never really considered.

Chat systems will become a creation bottleneck via a false sense of authority that they did not earn and probably can never earn. There is a real risk of a monoculture due to the restrictions the AI owners can impart. I would be very confident that the folks that make these things do believe they are trying to make the world a better place.

The issue with virtue is that true virtue does not know itself. A lot of the worst things that have happened in the world were done by people who thought they were making the world a better place. Hell is full of good deeds. And yet these systems can quickly get out of control via emergent behaviors. Things that look sane and rational up close can be chaotic en masse. Or vice versa. This is a state of yin and yang that is generally ignored. There is good in evil and there is evil in good. And when one creates something without this in mind, they tend to produce things that can do great harm.

To be thoroughly convinced and amused by the artificial. But are we so entranced by this phantasmagoria that we would rather indulge in the artificial written word and art ONLY as a means of entertainment? Or is the idea of conveying feelings and intents from others to you now but an old idea? An analog relic that should be shunned in favor of the latest gadget. Are we now in a hyper version of Neil Postman's 'Amusing Ourselves to Death'? To live in an entertainment tailored just to you - forever. The ultimate bubble, never making contact with the real world.

We may have blurred the line between artificial and real intent too much.

Ever since computers came along, it has been very apparent what is a creation of the machines compared with the results of nature. In simplifying the world to fit various mathematical models, it also made it somewhat obvious to the users, viewers and readers what was the result of a machine algorithm compared with that of the actual world around us.

Look at computer graphics. In the early days everything was very blocky, consisting of various primitives that were easy to calculate but rarely appear in nature. When straight lines appear in nature it is considered something very strange. In the same way you can tell when mankind has had an impact on the physical world because of the straight lines and geometric shapes - this is the same tell you would get from those computer graphics.

Even though we have spent more than the last half century trying to improve this technology to the point where we can replicate the world around us on screen, there was still a gap that meant we could distinguish between the artificial and the real.

You don't watch the latest superhero movie and believe everything you see. Even some of the most realistic depictions, trying to replicate the world as closely as possible, still have that artificial glow to them. The uncanny valley is a good example of this in action. But what does it mean when a computer model, particularly one that doesn't have direct artistic control or intent, starts producing works that are indistinguishable from the real thing?

I am using computer graphics because, being a visual medium, it is easy to impart to others. And yes, the progress of prompted visuals is coming, but it is still in the early stages as it stands. However, the lessons of the blurred dichotomy between the artificial and the human-created do apply to text.

There are certain parts of some cultures, particularly in Asia, where some aspects of authenticity are not considered so important. If it looks like the real deal, or it is close enough, that is good enough. One must confront the idea of authenticity as a hang-up of western culture, in the same way that the idea of a solid written history is a trait of the Judeo-Christian societies and their offspring, something that isn't considered anywhere near as important in other cultures. For instance, in Indian culture, particularly in the Hindu space, specific dates and places are not considered that important; this ties into their idea of Samsara and the wheel of time, in which dates and time frames are not that relevant.

That would be all good if we were just talking about the specifics of history, as that can be seen as just being pedantic. But in our context, this is information being produced en masse to try and sway people in the real world. Authentic or not, the results could be disastrous.

It veers into the realm of the difference between creator and observer. While there has always been this gap to some degree, it is well understood. Someone viewing art in a gallery understands that it is a creation from a person that has emotions and intent just like they do. There is a commonality to be experienced. When someone has gone out of their way to hone their skills and produce a work for others, we can get into a similar mindset as the creator. This is a part of mimesis, that part of us which means we mirror others around us - it is how we learn to talk, for instance. But we are now trying to break this link by having no inherent creator. Now you are left wondering about the origin of the emotions one experiences in art works. On a broader scale, when the experiences don't match the inputs, this can be considered a state of insanity. When almost all the works a society is consuming are automated, could that lead to a society-wide insanity?

And this will be sold to us as "choice". One scenario I can come up with - if it hasn't happened already by the time this is published - is authors and voice actors selling their stylistic likeness and audio likeness rights to book and audio book companies. Couple this with something like Amazon's hardware capabilities and their stranglehold on audio books and you can see where this is all heading. The latest political memoir generated in a day and read by any voice actor you can think of. Would many of us be ok with this? Only time will tell.

We shouldn't fear this because we don't understand it, but because we do understand it. It is the absolute death of authenticity in media.

Part 3 - Be Human

While this applies to media in general, I will simplify this to the lens of the internet. There will be three eras of information online. The first wave was the internet in the early years, from its creation up to around the mid 2000s. An era of free creation and distribution, where you could let your freak flag fly and others could stumble on it. In the mid 2000s, with the rise of hyper-controlling search engine companies, social media and mass advertising, corporate control started to take a grip and the worst traits of the media became heavily entrenched and solidified. Now we stand on the cusp of the next wave, where even the awful situation we are currently in will look pleasant by comparison with what is to come. The firehose of mistrust and the worst traits of people, amplified to the speed of light.

We will have to face up to this because even if you won't use these generative tools, you will be exposed to them by those that have fewer scruples. In the same way, we wouldn't need security updates on computers if we didn't have folks trying to exploit them.

There is the saying, "It isn't the destination. It is the journey". That is, if we focus on the end results we miss the journey. The real issue is there is no destination, there is only the journey. When we become focused on the idea of a goal that doesn't actually exist, we miss everything that is right in front of us. With the blinders on, focused on the latest "intelligence" technology, we miss the very people we are trying to help.

If everything becomes too easy, you lose all drive to achieve anything. How can anyone find self-worth if there is no chance to try and fail? Just to be in the world.

A tiger in the zoo might have it easy for food, but it just wants to bound out on the plains or make its way through the woods - regardless of how much peril there may be. Some of us just want to get out there and create, regardless of how much risk there may be. That risk might be what makes it all worth it in the end.

We cannot be compared to a computer or a program, despite what so many wish for in the idea of making a computer that can confirm "the glory of Mankind (TM)". Neurons shaped by the inputs of the universe around us are wildly different from how any computer program works and from how we build and use these things. The idea that we can understand entirely how we are made is somewhat laughable. One has to be cognizant that we are but social mammals with thinking jelly about six inches across. It is unlikely we can know everything about how it is all made.

When we follow this path of technology as 'The only way' it becomes stifling and denies the very humanity of who we are.

I mentioned the issues with trying to be virtuous leading us down an unwanted path. And yet there is another state of virtue that needs to be acknowledged. That is the virtue of our own skills. To use your mind to create and to be challenged. To be virtuous to yourself. All these AI generative systems are essentially trying to undermine this and remove any sense of being a human. The virtue of patience. To create. To connect with others, like I am here.

We must resist the Siren's call of the algorithms. As tempting as they may be, they are here to mislead us. To forgo our own humanity and abilities in the quest for the easy and the sedentary.

That we see people bragging about using these systems for their university submissions, program code and journalistic articles is astounding. It is a flat out admission that they do not want to use their own faculties any more. It is to move into the ultimate state of hedonism: all pleasure without the pain. Do nothing and yet try to receive everything. But without the salt of life to complement the sweet, it can all become so meaningless.

The older I get, the more I start to see why Socrates believed that writing was an act of folly. That face-to-face communication was the only way one person could truly transmit knowledge to another. While I wouldn't go as far as Socrates, I do understand why this position exists.

There is a leap of faith we all make when we read something: that it was produced by a creator with personal knowledge behind it. You have faith right now, and are trusting, that this was written by a person and not a computer. I'm not keen on being a hypocrite, and I do wonder if a chat bot could truly critique itself. Attempts to do so have been limited, at best.

Communication between each other is one of the most important things that we have. The herbalist Stephen Harrod Buhner said that person-to-person contact carries a largely unacknowledged form of communication between people. It is that feeling you get when you enter a restaurant with a friend and both of you get a sense of unease, and so you both agree to leave. That kind of communication is unique to us as far as we know, or at least to biological life. It should be cherished in all its weird fashions.

So, through the last 4,500-some words, we come to the crux of the issue.

Maybe one of the reasons people are both addicted to and taken aback by the online world is because it has been a perfect mirror of ourselves. Warts and all. While the rise of social media and various filter bubbles has sanded off a bit of the edges, these new systems try to sanitize this and simultaneously dumb us down into either devils or angels.

This has never worked in the past because it is a caricature of what we are. You cannot remove the monkey from the man. The quest for Utopia is pointless; it is like the horizon. Every step you take forward, it gets one step further away. But you can enjoy the journey even if there is no goal.

These computer systems turn out technically competent banality. They don't create, they remix. And not in the style of a good musician. They merely make readable derivatives without any soul.

That the output of these systems comes from the input of millions of people means it will be, by definition, the average - nothing outstanding or amazing. It doesn't have the flair for creativity that many either crave or soon will crave. You cannot have something truly creative that also turns out decent legal copy.

Some of the boldest and most exciting works were produced by a single person or a very small group. This is why this world needs more dissensus, not consensus. To allow a million different points of view to flourish and be discussed in the open rather than squeezed into a few bland and controlled angles.

Appealing to the mainstream creates technically competent but dull works. It is not a fault of the mainstream; it comes from trying to appease the most people. You have to go for the safest path rather than try to excel via bold creativity. This is the same issue that dulls down most mainstream media available nowadays.

Part of why these systems have been pushed so hard is because they favor big business, both legitimate and illegitimate. In producing bland content en masse, they can drown out those that push back. And if you push back, expect to be labeled a Luddite in the worst way rather than the critical and nuanced person you may actually be. As I am sure most people reading this would agree, we aren't against technology - clothing is a technology. What we want is technology with purpose.

I feel for the people that have started to use these chat systems in their work place. That they can take a large amount of boring, repetitious, mundane work and simply automate it away is a damning review of the kind of things that pass for work nowadays. How much of what passes for text based work is actually used in any fashion to better the world? That these people are desperately trying to do anything that gets them out of doing this work is admirable. This is a use case that I am not too worried about. It is when it is used to substitute something that is genuinely stimulating. 99% of office work does not fit this criterion. For chat systems to be considered a good for humanity they need to prove it; it cannot just be taken for granted because it is 'state-of-the-art' technology.

The solution to these AI based systems is so incredibly simple.

Be Human. Be hyper human.

Do the things that computer technology cannot do.

If you are going to write, write things that are completely out there. Embrace the amateur in all of us. To create works that step outside of what is considered safe. To share with others all manner of works even if they aren't the absolute height of technical ability.

In a broad sense, if you do art, do the things AI cannot do. Make things in the physical world.

Embrace integrity and self respect. Just typing in a prompt and then claiming you made the output is a path to nowhere but the undermining of one's self worth. On the off chance you do that and are reading this: you can do better. It will be uncomfortable at first, but the payoff is wonderful.

It is like playing a video game with all the cheat codes on - you are taking all the joy out of the experience.

It is to make ourselves so enslaved to the machines that we become idiots that cannot function for ourselves. We have already slid so far, let's not keep sliding.

The touch of others is just so good.

These current systems glorify the works of the masses but undermine the individual. And yet, we are made up of individuals. This is a means of devaluing people themselves.

The optimist in me wants to shout that we can save the internet and that we need to make our own enclaves of unique weirdness - but that feels like fighting an uphill battle. Many people more motivated than me have tried and failed. Maybe it will happen, but it is not something I am holding my breath on. There is hope that in seeing these chat bots in the wild, people end up loathing them, and that they become the engine driving people away from such high technology. Similar to what has happened with voice-activated assistants: they were a fun novelty, but eventually many found them next to useless. If this happens, the tide could turn very quickly on the tech-bros pushing so hard on this technology. Similar to what happened with cryptocurrency speculation.

Be human. Get offline and out into the real world. Create something that you groove to, try to find others that feel that vibe. Resonate with others without becoming identical. It may be a cliche but - be the change you want to see in the world. Others may come. That could be kind of cool.

As George Carlin said "I love people but I hate groups". Be an individual but hang out with others.

Don't hand your brain over to the algorithms and those that control them. At the point when you don't have to think at all thanks to the machines, you might as well be dead already. Another state of no action.

Ditch the smart phone, delete your social media accounts - turn computers into a dumb terminal you occasionally use to do a job and then move on. Talk to your friends; I mean actually go and physically see them and talk. This is where the real counter-culture is going to spring up from, and it has a small chance of being something really cool.

Life is out there in the real world.

It challenges you but makes you into someone better. Take some previous advice from this site. Go plant a tomato, watch it grow and learn from it. See how it seeds and propagates. To see life, be life. That's pretty cool dude. ;)


Michael Verrenkamp is based in Melbourne, Australia. A long time follower of computer technology. Former Tech Utopian and now disappointed idealist. Free/libre software enthusiast. With luck has avoided working directly in computer technology for the last decade. Uses their spare time living life in the real world enjoying the slack that the simpler life provides.