Predictive surveillance: what if AI knew in advance what we will do?

We observe, we analyze... and we anticipate?

Imagine you are walking down the street. Nothing unusual, until a message arrives on your phone: “Attention, avoid this area: you may be involved in suspicious behavior.” No one is physically following you, and yet someone – or something – has observed you, analyzed you, and predicted what you were about to do. This is not a science fiction movie: it is a scenario already being tested in many parts of the world. This is predictive surveillance: the idea that artificial intelligence can anticipate our actions before we take them.

It sounds like a TV-series plot device, but the technology is real. And while, on the one hand, it promises greater security and prevention, on the other it raises profound questions about individual freedom, privacy, and trust in the systems that decide for us. But what does predictive surveillance really mean? How does it work? And how widespread is it?

What is predictive surveillance?

Predictive surveillance is the set of technologies that use behavioral data and statistical models to predict human behavior, with the aim of preventing risks, crimes, or events considered “deviant”. It relies on machine learning algorithms that analyze huge amounts of data: geolocation, browsing history, purchases, contacts, daily habits. Every action leaves a trace, and every trace becomes a variable from which to infer what we might do next.

Unlike traditional surveillance, which looks at what has already happened, predictive surveillance seeks to anticipate. It follows a logic of probability: if a person has done “A” and “B”, then there is a high chance they will do “C”. The problem? “C” has not happened yet, but it may already influence how we are treated.
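To make that logic concrete, here is a minimal, purely illustrative sketch in Python: it estimates the probability of “C” given “A” and “B” by simple counting over hypothetical records. The events and data are invented for the example.

```python
# Toy illustration of the probabilistic logic behind predictive systems.
# Events and records are entirely hypothetical.
history = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "B", "C"},
    {"A"},
    {"B", "C"},
]

# Estimate P(C | A and B) by counting matching historical records.
given_ab = [person for person in history if {"A", "B"} <= person]
p_c_given_ab = sum("C" in person for person in given_ab) / len(given_ab)

print(f"P(C | A, B) ≈ {p_c_given_ab:.2f}")  # 2 of 3 matching records -> 0.67
```

A real system replaces this counting with far more complex models, but the principle is the same: a score derived from what similar people did before, not from anything the individual has actually done.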

Where does artificial intelligence come in?

Artificial intelligence is at the heart of this process. Predictive models do not merely collect data: they process it, compare it with thousands of other similar profiles, and generate risk scores. Some systems already in use by courts or police forces rank citizens on a scale of dangerousness, even in the absence of any crime committed.
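As a rough sketch of what “generating a risk score” can look like in practice, the snippet below trains a toy logistic regression on invented behavioral features and outputs a probability for a new profile. The features, labels, and data are all hypothetical; real systems are proprietary and far more complex.

```python
# Minimal sketch of turning behavioral features into a "risk score".
# All features, labels, and data here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [late_payments, night_travel_freq, prior_flags]
X_train = np.array([
    [0, 1, 0],
    [3, 4, 1],
    [1, 0, 0],
    [5, 6, 2],
    [0, 2, 0],
    [4, 5, 1],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # invented "risky" labels

model = LogisticRegression().fit(X_train, y_train)

# Scoring a new profile yields a probability that is read as a risk score.
new_profile = np.array([[2, 3, 1]])
risk_score = model.predict_proba(new_profile)[0, 1]
print(f"Risk score: {risk_score:.2f}")
```

Note that the score says nothing about what this person will actually do; it only reflects how the training data labeled people with similar features, which is exactly where bias creeps in.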

One of the best-known cases is PredPol, a system used in several American cities to predict which neighborhoods may experience crimes, based on historical crime data. However, a study conducted by the University of Chicago revealed that these systems can replicate the social biases already present in the data and end up discriminating against entire segments of the population. The researchers developed an algorithm that can predict crimes a week in advance, but they noted that the police response was more intense in wealthy neighborhoods, at the expense of low-income areas. This highlights how the use of historical data may perpetuate existing disparities in the criminal justice system. Source: University of Chicago

We discussed this in our article “The AI and Surveillance: Who is Controlling Who?”, where we noted the risk that AI used for control becomes a stealth weapon of power: harder to recognize, but very effective in limiting personal freedom.

Practical implications in real life

In China, the social scoring system monitors and evaluates citizens' behavior, from late bill payments to the content they share online. Those with a low score may be penalized in their access to public services, transport, or bank loans.
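Purely as an illustration of how such score-based gating could work mechanically, here is a toy sketch; the behaviors, weights, and threshold are invented and do not describe the actual system.

```python
# Toy sketch of score-based gating. Behaviors, weights, and the
# threshold are invented; this does not describe any real system.
PENALTIES = {"late_bill_payment": -50, "flagged_online_content": -30}
REWARDS = {"volunteering": +20}

def social_score(events: list[str], base: int = 1000) -> int:
    """Aggregate hypothetical behavioral events into a single score."""
    score = base
    for event in events:
        score += PENALTIES.get(event, 0) + REWARDS.get(event, 0)
    return score

citizen_events = ["late_bill_payment", "late_bill_payment", "flagged_online_content"]
score = social_score(citizen_events)
print(score, "-> restricted access" if score < 900 else "-> full access")
```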

In the United States, some software used in the judicial system – such as COMPAS – evaluates the likelihood that a defendant will commit new crimes. These scores can affect the length of the sentence or the possibility of parole.

In Europe, too, predictive tools exist, especially in the context of cybersecurity, where AI analyzes network traffic to prevent attacks before they happen. In this case, however, the application is often more widely accepted, because it does not directly target human behaviour.
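A minimal sketch of this kind of tool, assuming invented traffic features: an anomaly detector learns a baseline from normal connections and flags a burst that deviates sharply from it.

```python
# Minimal sketch of anomaly detection on network traffic, the kind of
# predictive tool used in cybersecurity. Features and data are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per connection: [packets_per_sec, bytes_per_packet]
normal_traffic = np.random.default_rng(0).normal(
    loc=[50, 500], scale=[5, 50], size=(200, 2)
)

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst far outside the learned baseline is flagged as anomalous (-1).
suspicious = np.array([[900, 40]])
print(detector.predict(suspicious))  # [-1] -> flagged
```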

But the line is thin. When an algorithm predicts a behavior, and that prediction is used to take action before the behavior occurs, what happens to the presumption of innocence? And to our ability to change, to surprise, to break away from our own patterns?

Frequently asked questions (FAQ)

Is predictive surveillance already a reality?
Yes, in many forms. Some are limited to information security, others extend to predictive policing, and still others to social monitoring.

Is it legal?
It depends on the context. In Europe, the GDPR imposes clear limits on automated data processing, but the technology is evolving faster than the regulations.

Is it always negative?
No. It can be useful, for example, to prevent suicides, domestic violence, or terrorist acts. But it must always be balanced against fundamental rights and freedoms.

Can I avoid it?
Hardly. But you can limit it by choosing digital tools that are more respectful of privacy, and by supporting clear, transparent rules.

Towards a culture of algorithmic limits

Predictive surveillance raises questions that no technology can solve on its own. Questions of justice, of freedom, of responsibility. If we allow artificial intelligence to anticipate our every gesture, we risk giving up our unpredictability, one of the most human traits there is.

We need an ethics of prediction. We need transparency about how these algorithms work, who controls them, and what data they use. But we also need a cultural shift: accepting that risk cannot be eliminated without eliminating freedom.

AI offers us powerful tools, but we should not accept every use of them without thinking. We can build a future in which prediction serves to oppress, or one in which AI empowers the human being without limiting them. It is up to us to decide which direction to take.

📚 Do you want to learn Artificial Intelligence?

Discover our fundamental articles, ideal for getting started or finding your way in the world of AI:

📬 Get the best every Friday

Visit the Subscribe to our newsletter page and choose the version you prefer (English or Italian).
