We scroll, read, react... but what happens inside us?
We open a social network, and the first post is perfectly in line with our interests. We start a streaming platform, and it suggests exactly the film we had in mind. Even the news we read seems to reflect our ideas. Everything feels comfortable, fluid, customized. But behind this experience lies a complex system: algorithmic information that selects and filters what we see. And our brain adapts to this flow.
The digital era has brought us huge advantages in terms of access to information. But it has also transformed the way we think, remember, and make decisions. Because the brain is not a fixed entity: it is shaped by what it consumes. And if what it consumes is driven by algorithms, even our mental processes begin to follow a predictive logic.
What does “algorithmic information” mean?
Algorithmic information is the set of contents shown to us not on the basis of their general relevance, but of their chances of catching our attention. It is a form of customization, made possible by artificial intelligence that learns from our behavior: what we read, what we skip, where we pause.
This type of information is not neutral. It is built to optimize engagement, that is, our time, our reactions, our interactions. The algorithms do not “choose” the best content, but the content that performs best according to specific objectives.
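To make this concrete, here is a minimal sketch of what "ranking by predicted engagement rather than relevance" could look like. Everything in it is an illustrative assumption: the item names, the topic sets, the watch-time numbers, and the toy scoring heuristic (a real platform uses a trained model, not this):

```python
# Hypothetical sketch: ranking content by predicted engagement, not relevance.
# Item names, topics, and numbers are invented for illustration.

def engagement_score(item, user_history):
    """Toy prediction of how likely the user is to interact with an item.

    Heuristic assumption: items overlapping with topics the user already
    clicked score higher, weighted by how long people tend to watch them.
    """
    overlap = len(item["topics"] & user_history["clicked_topics"])
    return overlap * item["avg_watch_time"]

def rank_feed(items, user_history):
    # Highest predicted engagement first -- not the "best" or most relevant item.
    return sorted(items, key=lambda it: engagement_score(it, user_history),
                  reverse=True)

user = {"clicked_topics": {"football", "cooking"}}
feed = [
    {"id": "science-doc", "topics": {"science"}, "avg_watch_time": 9.0},
    {"id": "match-clips", "topics": {"football"}, "avg_watch_time": 4.0},
    {"id": "recipe-vid", "topics": {"cooking", "football"}, "avg_watch_time": 3.0},
]
ranked = rank_feed(feed, user)
print([it["id"] for it in ranked])  # -> ['recipe-vid', 'match-clips', 'science-doc']
```

Note what happens: the science documentary, which people actually watch the longest, is ranked last, simply because it does not match the user's click history. That is the gap between "best content" and "best-performing content".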
As we explain in “The AI and Social Media: The Invisible Power of Algorithms”, our digital feeds are not windows, but mirrors. They reflect us, but they also reinforce us, close us in, polarize us.
How does the human brain react to this flow?
The human brain adapts. It is plastic, that is, it changes according to the stimuli it receives. If it is exposed to content that is short, immediate and repetitive, it develops a preference for that format. If it finds information already in line with its beliefs, it tends to reinforce them and to reject differing opinions. This effect is known as confirmation bias, and it is amplified by the way algorithms select what we see.
According to a study published in Nature Reviews Neuroscience, continuous exposure to predictable content reduces the brain's ability to tolerate ambiguity and waiting. In other words, we get used to quick responses, strong emotions, and clear-cut opinions, losing the ability to explore, to listen, to remain in doubt.
We discuss this in “Focus on the crisis: how AI affects our daily attention”, where it becomes clear that the mind needs variety, depth, and time. The algorithm, however, offers us what works in the short term.
Artificial intelligence as a cognitive filter
Artificial intelligence is not only an assistant. It is an active cognitive filter. It decides what information we receive, in what order, in what format. Every recommendation, every notification, every proposal is the result of a prediction: “you'll love this”. But the more the algorithm learns to know us, the more it risks narrowing our field of vision.
It's like living in a room with invisible walls. We are not locked in, but we cannot get out. This has consequences for our ability to think critically, to change our minds, to discover the unexpected.
According to a Mozilla Foundation report (source), platforms that use predictive algorithms end up proposing increasingly similar content, reducing the diversity of information and increasing polarization.
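The narrowing effect the report describes can be illustrated with a toy feedback loop. Everything here is an invented assumption (the one-dimensional "topic axis", the starting profile, the drift rule); it only shows the shape of the dynamic, not any real recommender:

```python
# Toy simulation of the feedback loop: the recommender shows the item
# closest to the user's profile, and the profile drifts toward what is shown.
# All numbers and the update rule are illustrative assumptions.

def closeness(a, b):
    # Simple similarity: negative absolute distance on a 1-D "topic axis".
    return -abs(a - b)

items = [0.1, 0.3, 0.5, 0.7, 0.9]   # content positioned on one topic axis
profile = 0.55                       # the user's starting interest
shown = []
for _ in range(5):
    pick = max(items, key=lambda x: closeness(x, profile))
    shown.append(pick)
    profile = 0.8 * profile + 0.2 * pick  # profile drifts toward shown content

print(shown)            # the same item wins every round
print(len(set(shown)))  # diversity has collapsed to 1
```

After a single round, the nearest item wins again and again, and the profile only moves closer to it: the walls of the room close in without anyone ever deciding to close them.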
Implications for everyday life and culture
At work, algorithmic information translates into automatic recommendations, generated summaries, decisions supported by AI. Useful, but risky: it reduces independent thought and increases dependency.
In school, digital environments that adapt content to the pupil's level may help, but may also impoverish the educational experience. We analyzed this in “Personalised learning with AI”, pointing out that real learning also comes from difficulty, from the unexpected.
In everyday life, AI suggests what to eat, what to watch, how to respond. The more we rely on these proposals, the more we train the brain not to decide anymore.
FAQ – Frequently asked questions
Is algorithmic information dangerous?
Not always. But it becomes problematic if it is not balanced by open, critical, diverse content. The problem is excess, and the lack of awareness.
Can I avoid it?
Not entirely. But you can slow down, actively seek out different sources, disable personalization, and get used to content that does not simply confirm you.
Can AI help the brain develop?
Yes, if it is used as a stimulus, and not only as a convenience. AI can offer variety, challenges, questions, not only confirmations and shortcuts.
Taking back control of our attention
Our brain is a wonderful machine, but a fragile one. If we feed it variety, slowness, depth, it grows. If we nourish it with repetition, immediacy, confirmation, it atrophies. Technology can accompany us in either direction.
Recognizing the mechanisms of algorithmic information is the first step toward taking back our thinking. We need awareness, but also ethical design. We need to build digital environments that do not simplify life so much that they impoverish it.
As MIT Technology Review wrote (source), predictive information is powerful, but it needs to be managed. And the human brain, in order to remain free, needs to be surprised. Every so often, even disturbed.