Imagine spending your life chained to a cave wall with your face to the wall.
First, to set the scene and make this easy to follow, I’ll make just two short points:
AI without a lot of data is really not AI; it’s just a rulebook. Self-driving cars are an example: no different from any app you use, just a bit more complicated. Just because something used to be done by humans doesn’t make it intelligent, any more than word processing or CRM is.
a. Big data is a huge weak point in AI because it is not, and most likely never will be, accurate or sufficiently complete.
b. The inexplicability of the rules AI extracts from bodies of data makes it too dangerous for most situations until that problem is solved.
Plato’s allegory of the cave was a ‘form’ in which he described Big Data and other similar concepts. Plato’s ‘forms’ are capable of adapting themselves to concepts not yet thought of because that is, in essence, how they work.
Plato’s cave allegory, as narrated through Socrates, vividly describes a tribe of people who spend their lives chained to a cave wall with their faces to the wall.
Behind them, life goes on as normal around a campfire.
These people see the world through the shadows cast on the cave wall. People, things, movements, even emotions all play out before them on the cave wall as shadows.
These people have a strong sense of life, but their sense is a shadow of the real thing at best. Key details are completely missing, details that might have completely changed their interpretation of the drama playing out on the wall before them. Their sense of scale, of speed and of the relationships between one form and another are all misinterpreted. Vital information that we all take for granted, and with which we still frequently make terrible mistakes, is entirely missing for these unfortunate people chained to the cave wall.
If only we could let them view the scene from behind via a TV camera, let them see the expressions on the faces of speakers and recipients, know what lives in the hearts of the believers and of the non-believers. Maybe their view of that hidden world would be more accurate. Maybe then they would be better informed and make better decisions.
Plato was describing AI
The man facing the wall hears the sounds and sees the shadows of movement. His mind has to make decisions about what it is that he is hearing and seeing. The canvas on which he must paint this picture is full of his fears and desires, based on past experiences and the shared knowledge of others along the wall. Both the current situation and the canvas are utterly distorted, and the outcome is usually barely sensible. “A large and growing shadow approaching the wall outside of mealtimes means an imminent beating.” That explains a past experience, but has no concept of the effect of distance from the fire, of targets in between, or indeed of all the other reasons why someone big or small might approach the wall.
Even when he is eventually free to turn around and his eyes can stand the glare, he will have little frame of reference with which to begin absorbing the truth about life, even assuming he has such a desire. Rules, once created, rarely have any mechanism for removal, and he is stuck forever with most of his past mistakes. Soon his behaviour will begin to earn him beatings and starvation, and his ill-conceived rules will begin to prove to him that he was right all along.
How shadows work in AI.
The bulk of AI, or at least of AI that does anything useful, is based on machine learning. Almost everything else is not AI but a rulebook, i.e. a human told it the rules to follow. That’s how MS Word works, and it’s not AI.
Another approach is human classification, or human/machine classification, whereby a machine suggests classifications and a human checks they are safe before agreeing or disagreeing. That is safer but less powerful, because of the cost and because shadows are not allowed: humans don’t understand them.
Machine learning simply teaches the machine to look at what, to you and me, is a great fog or pea soup and spot groups of things that are repeated. One thing it discovers may be called an “AQ138_:wer” for argument’s sake. The machine doesn’t care. It’s a shadow catcher.
Then we give the machine a “training set”, say a collection of dog pictures, and we ask it to compare the data sets while labelling all the groupings/shadows that match a dog picture as “dog”. If we get results, we have achieved something: now it is able to find dog shadows. They could, however, be wolves or possibly even cats; there’s a lot of work to be done yet with more refined training sets.
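To make the training step above concrete, here is a minimal sketch of the idea in Python: a toy “shadow catcher” that averages the features it sees for each label and then matches new pictures to the nearest average. The feature names and all numbers are invented for illustration; real systems work on millions of pixels, not three hand-picked numbers.

```python
# Toy nearest-centroid "dog detector": a deliberately tiny stand-in for
# machine learning. Every feature name and value here is hypothetical.

def train(examples):
    """Average the feature vectors seen for each label (the learned 'shadows')."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Pick the label whose averaged pattern is closest to this picture."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

# Features per picture: [ear_pointiness, snout_length, snow_in_background]
training_set = [
    ([0.2, 0.5, 0.0], "dog"),
    ([0.3, 0.6, 0.0], "dog"),
    ([0.9, 0.9, 1.0], "wolf"),  # note: every wolf picture happens to contain snow
    ([0.8, 0.8, 1.0], "wolf"),
]
centroids = train(training_set)
print(predict(centroids, [0.25, 0.55, 0.0]))  # → dog
```

Notice that the machine never explains *why* it says “dog”; it has only found a grouping that repeats, which is exactly the shadow problem discussed here.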
One example I came across was a model that successfully found wolves intentionally scattered among dog pictures, until it was discovered that the recognition factor was a patch of snow in the picture. Our friend had not noticed that most of the wolf training set contained snow. Shadows are dangerous things, even if at times they are exciting.
It took a great deal of trial and error in this example, masking different parts of the training pictures for subsequent tests, before it was possible to discover that there was an error and why it was happening. This illustrates perfectly the problem of inexplicability with machine learning and AI. Not knowing there is an error is far more dangerous than the challenge of finding the cause and eliminating it. The truth is, we don’t know that the errors exist, or at least where they are. Imagine such a machine flying an aircraft full of people.
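The masking experiment described above can be sketched in a few lines: cover up one part of the picture at a time and watch which masking flips the answer. The classifier below is a deliberately broken, hypothetical one that, like the wolf detector in the story, has latched onto snow; the feature names and weights are all invented.

```python
# Occlusion sketch: mask one feature at a time to find what the model
# is really looking at. All names and numbers here are hypothetical.

FEATURES = ["ear_pointiness", "snout_length", "snow_in_background"]

def classify(features):
    # Learned weights (invented): the model leans almost entirely on snow.
    weights = [0.1, 0.1, 5.0]
    score = sum(w * f for w, f in zip(weights, features))
    return "wolf" if score > 1.0 else "dog"

def occlusion_report(features):
    """Mask each feature in turn and report which maskings flip the prediction."""
    baseline = classify(features)
    flips = []
    for i, name in enumerate(FEATURES):
        masked = list(features)
        masked[i] = 0.0  # "cover up" this part of the picture
        if classify(masked) != baseline:
            flips.append(name)
    return baseline, flips

# A dog photographed in snow: the model calls it a wolf,
# and only masking the snow changes its mind.
label, culprits = occlusion_report([0.2, 0.5, 1.0])
print(label, culprits)  # → wolf ['snow_in_background']
```

The point of the exercise is the trial-and-error cost: nothing in the model announces the snow shortcut; it only shows up after systematically covering things and re-testing.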
Or perhaps Plato was talking about Social Media
It may have occurred to you, while we were poking fun at machines, that we humans make very similar mistakes every day as we stare at our phones and PCs to see who is saying what on Social Media.
Now the whole gamut of this subject is for another day, but the bit that fits right into our discussion is the shadow on the wall that represents our friend, or the person we think is our friend, who posted a picture and a profile online and told us we were clever.
The people, from friends to fakes to bots run by PR firms or even governments, are all shadows of the real world, no more or less than the shadows round the campfire.
The difference here is that these shadows are deliberately manipulated by their owners to show you a particular shadow, not just any random shadow, let alone a truthful or representative one.
What is to be learned from these shadows, in the case of individuals, is more likely to be the thing each owner wants to project, and hence that which they are not, or are not at the level they aspire to.
In the case of organisations and governments, the shadows represent a charade, or a mask of some sort: that which they want you to believe, and almost certainly not what it really is.
How these shadows are presented to you will tell you the opinion they have of you. Mostly this is an opinion on your type rather than on you as an individual, and indeed it reveals their view of the world more than yours.
Then of course there’s the platform, the guy who is earning from the campfire and needs you to remain chained to the wall. He makes sure that you get the feedback you need to keep recognizing and responding to the shadows and wanting to see more of them.
Unlike Plato’s prisoner, these prisoners are self-imprisoned. They are able to walk out of the cave and observe the campfire, yet they choose to relate to the shadow rather than the object. This is precisely as Plato hypothesised.
Plato’s prophecy for you and me is terrifying, if only we are able to see it.
Today we live in a world where all of our news, and our interactions with the world we live in, are experienced via Social Media channels, often second or third hand, or further removed, edited and presented with the attached attitudes of our peers. In choosing what we read or view, we are choosing our peers, and vice versa. This is powerful and highly restricting, even more so than Plato’s cave allegory.
Indeed, traditional sources of news and information, what is left of them, are rapidly withering, disappearing, or succumbing to the same fate. Soon there won’t be a campfire to see, because that too has been put in place for our benefit, and reality is hidden even deeper, should we have the desire, or the constitution, to look directly upon such a blinding light, or any urge at all to leave the wall.
The people who govern us and police us will long ago have ceased to be people, replaced by AI-driven machines that make decisions based on the average of what the shadows add up to, but much more on the basis of which way will cost less and yield more for whoever runs the show.
Right now, it looks increasingly as though the owners of tech platforms from Social Media to Search and cloud AI algorithms are the people holding the aces. Theirs is the campfire and theirs is the lens that sees the shadows and tells us what to think about them.
The people who empower them are convinced that they will always be able to exert power over the men who control the machines.
What do you think?