Artificial Intelligence is getting smarter.
Meanwhile, the human brain stays the same.
Not a fair fight.
AI and human weaknesses
To influence someone, you have to find their weaknesses: this has always been the unfortunate law of human psychology.
Only now it’s not some devious, creepy character who has found out what our pressure points are, but an impartial algorithm sifting through tens of thousands of variables collected about us. Finding unique relationships within our data — invisible to the human eye, but obvious to the processing power of the machine.
Of these variables the machine builds an Algorithm of You that attacks the brain of the user with weaponized content, optimized for any behavioral outcome to be extracted from them. This behavior modification is paid for by the real customers of social media platforms, who are advertising their products, political candidates, religions, charitable causes, revolutions — anything to influence their “target audience”.
Algorithmic behavior modification is the name of the game on social media. Machine learning algorithms work so well because, unfortunately, we humans are rather predictable.
Artificial Intelligence figured out how our brains work. After all, the human brain was built by millions of years of evolution and is not about to change overnight.
AI can work with that — it’s a stable set of variables.
The algorithms have been trained on incredible amounts of data that every human generates. They adjust dynamically with the ever-expanding digital footprint to manipulate the user more efficiently with every cycle of machine learning.
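The learning cycle described above can be illustrated with a toy sketch. The snippet below uses a simple epsilon-greedy bandit (a deliberately simplified stand-in, not any real platform's recommender) that repeatedly serves whichever content category it currently estimates will engage a simulated user most, and refines that estimate after every interaction:

```python
import random

random.seed(0)

class EngagementOptimizer:
    """Toy epsilon-greedy bandit: picks the content category with the
    highest estimated engagement and refines the estimate after every
    interaction. Illustrative only; not any platform's actual system."""

    def __init__(self, categories, epsilon=0.1):
        self.epsilon = epsilon
        self.estimates = {c: 0.0 for c in categories}  # estimated engagement per category
        self.counts = {c: 0 for c in categories}       # interactions observed per category

    def choose(self):
        if random.random() < self.epsilon:                  # occasionally explore
            return random.choice(list(self.estimates))
        return max(self.estimates, key=self.estimates.get)  # otherwise exploit the best guess

    def update(self, category, engagement):
        self.counts[category] += 1
        n = self.counts[category]
        # Incremental mean: every cycle of feedback sharpens the model of this user.
        self.estimates[category] += (engagement - self.estimates[category]) / n

# Simulate a user who engages far more with outrage-flavored content.
true_engagement = {"neutral": 0.2, "cute": 0.4, "outrage": 0.8}
opt = EngagementOptimizer(true_engagement)
for _ in range(2000):
    c = opt.choose()
    opt.update(c, true_engagement[c] + random.gauss(0, 0.05))

best = max(opt.estimates, key=opt.estimates.get)
print(best)
```

After a couple of thousand simulated interactions the optimizer settles on the outrage category, not because anyone told it to, but because that is what the engagement signal rewards.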
The behaviors we are nudged toward can be anything:
- products to buy,
- candidates to vote for,
- conspiracy theories to believe in,
- social causes to support,
- protests to participate in,
- revolutions to start,
- acts of terror to commit.
Few things on this list are about personal wellbeing or positive social change. Many are driven by negative emotions — anger and outrage. Everything that is negative and deplorable in human nature is more easily manipulated and amplified: evolution has left us with a negativity bias hardwired into our subconscious mind — and exploited by digital media to drive user engagement for the platform. Negativity is just one of many pressure points to be digitized for profit — there are thousands of psychological variables AI can use.
Compared to the machine learning algorithm, the human brain as a processor is pitifully outmatched: it cannot adjust to digital inputs in real time.
It would require thousands of years of further evolution to undo automatic cognitive processes that nature hard-wired for our survival and flourishing in the REAL world: social validation, loss aversion, reward seeking, sense of progress, reciprocity, etc.
AI has learned these unchangeable inner workings of the human mind. They make us predictable — and therefore, vulnerable to influence.
For the algorithms, a human brain is like the control panel on a piece of industrial equipment — press the right buttons at the right time in the right order, and the person will do exactly what the algorithm programs them to do.
A human robot will obediently buy the product. Will grow skeptical of its own political views and not bother to vote. Will stay addictively glued to the screen for hours on end, dishing out personal data for sale. Will obey an oppressive government — or overthrow a democratic one. Anything the paying manipulators ordered.
The machine learning function optimizes content to achieve the desired behavioral outcome. As a Facebook study showed, highly sensitive personal traits can be predicted with 85–90% accuracy — and utilized for algorithmic influence.
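To make that claim concrete, here is a minimal sketch of the general technique behind such trait prediction: a logistic regression trained on synthetic "like" vectors in which a hidden binary trait shifts the odds of a handful of likes. Everything below (feature counts, probabilities, the training loop) is invented for illustration; it is not the cited study's data or code.

```python
import math
import random

random.seed(1)

# Hypothetical setup: each "user" is a binary vector of page likes, and a
# hidden binary trait raises the probability of the first few likes.
N_LIKES, N_USERS = 20, 400

def make_user():
    trait = random.random() < 0.5
    likes = [1 if random.random() < (0.8 if trait and i < 5 else 0.2) else 0
             for i in range(N_LIKES)]
    return likes, trait

data = [make_user() for _ in range(N_USERS)]
train, test = data[:300], data[300:]

def predict(w, b, likes):
    z = sum(wi * xi for wi, xi in zip(w, likes)) + b
    return 1 / (1 + math.exp(-z))        # probability the trait is present

# Plain stochastic gradient descent on the logistic loss, stdlib only.
w, b, lr = [0.0] * N_LIKES, 0.0, 0.1
for _ in range(200):
    for likes, trait in train:
        err = predict(w, b, likes) - (1.0 if trait else 0.0)
        for i in range(N_LIKES):
            w[i] -= lr * err * likes[i]
        b -= lr * err

# Held-out accuracy: how often the model guesses the hidden trait correctly.
accuracy = sum((predict(w, b, likes) > 0.5) == trait
               for likes, trait in test) / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

Even on this toy data, the classifier recovers the hidden trait from likes alone with high held-out accuracy — the same basic mechanism the study demonstrated at scale on real profiles.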
We look at our personalized feeds and wonder why other people don’t share our version of reality.
Not realizing they are being fed their own “reality” by the algorithm of behavior modification. We all see different things — and we are all gullible enough to believe them.
We think that we humans are a superior species, but according to the Center for Humane Technology, technology overwhelmed human weaknesses long ago. It becomes child’s play for the intelligent machine to manipulate humans by exploiting the weaknesses in the brain it has learned so well — without any conscious awareness on the user’s part.
Charmed by technological benefits, we have sold our soul to the devil, but failed to notice the transaction.
Virtual inputs break the brain
When weaponized digital content, built for targeted behavior modification, is fed to the overwhelmed human brain, there is an obvious mismatch between the input and the device — the brain is designed for the sensory input of sights, sounds, touch, smell, and taste of the real world.
Not for tweets.
Our brain is a computer too. Neuroscience is the discipline that tells us how this computer is programmed. Our brain as a processor is not a blank slate — it comes pre-wired by nature to process sensory inputs for life in the physical world.
Just as an AI algorithm is trained on data from the digital footprint, a human brain is trained on our experience of the world around us.
The optimal outcome of this brain development is a functional human being.
But when we exchange real experiences for virtual ones, we mess up the sensory inputs.
Instead of roaming the savanna where our prehistoric brains were formed, we are roaming Facebook and Fortnite for 12 hours a day.
The data coming in to program our brain computer is all wrong!
Digital inputs impair the functioning of the brain. Despite neuroplasticity, when inputs from the real world are in short supply, the brain malfunctions. Entire areas of it start operating suboptimally or atrophy altogether from lack of use.
Here is what our brain “computers” are designed by nature to experience:
Sensory input from the real world → Functional human behavior
Here is what actually happens:
Digital input designed to target brain weaknesses → Dysfunctional human behavior
Psychologist Nicholas Kardaras writes in his book Glow Kids: How Screen Addiction Is Hijacking Our Kids — and How to Break the Trance:
“People need to first fully develop their brains — their cognitive, attentional, linguistic, emotional, spatial and reality-testing mental faculties — before their brains can go beyond those areas and handle hyper arousing and reality immersing screens.”
No wonder the brain — especially a young developing brain — breaks down. There is an epidemic of mental health disorders that coincides precisely with the start of the digital age.
The mental health crisis has a lot to do with this change of inputs: in the fast-paced, hyper-stimulating digital environment, our prehistoric brain burns out.
It’s not designed for it.
Well-informed parents all over the world are trying to mitigate this disaster for their children by returning kids’ brains to their natural habitat — the real world. In simple terms, that means limiting screen time. It’s the only way children get enough of the sensory input their brains need to develop normally.
Biology changes slowly, but technology changes fast. Unlike the human brain, unchanged for thousands of years, machine learning is adaptive and dynamic. It learns from the exponentially expanding digital footprint data and gets better at what it does — every day. Better at manipulating us, the slow humans.
Machine intelligence is getting better, while human intelligence breaks down.
Again — not a fair fight.
Distraction as emotional regulation
Our self-control is no match for persuasive technology that has spent billions to manipulate the user for maximum engagement. And once we are in the grip of almost non-stop engagement with the user interfaces of our apps, we surrender our emotional regulation to the machines.
Instead of processing our feelings, we externalize them on social media.
To avoid the discomfort of dealing with the emotion in the privacy of our own brain when we are bored, sad, or mad, we throw it out into the digital space and wait for feedback.
Distraction has become our only means of emotional regulation.
As the user’s own brain becomes less efficient at knowing itself due to perpetual distraction from the screens, machine intelligence becomes better at knowing the same user:
- As the user gives more of her emotional turmoil to the data mining machine instead of dealing with feelings herself, the algorithms now have better weapons to predict and control her behavior;
- Under the influence of algorithmic behavior modification the user becomes more helpless at analyzing and controlling her own behavior.
And the vicious cycle continues. Until we lose our very identity and replace it with the digital façade of ourselves — fed to us by the machines.
Mind you, this is not the work of the scary Artificial Superintelligence of the future that many scientists are warning us about. You know, the stuff of science fiction that might actually destroy humanity. This is still a narrow, functional AI that is given a specific task — like getting you to stay on the screen, a.k.a. “engaged”. It is incredibly efficient at achieving that goal — but in theory we can still resist and maintain our free will.
What a Superintelligence of the future would be able to do to us, we cannot even fathom.
The complete transfer of power to the machine might happen without any conscious awareness of the user.
In the meantime, we are already witnessing the suboptimal outcome of brain programming gone wrong, when the brain is fed a diet of manipulative digital inputs. Generations of “digital natives” contain a disproportionate number of unhappy, unproductive, isolated individuals with mental health issues: depression, anxiety, and addictions.
Do not let this happen to you or your children.