
AI on a Leash? Rethinking Control Over Intelligent Machines

📅 1 May 2025 👤 Manuel 📂 Technology and the future ⏱️ 13 min read
[Image: AI and machine control — symbols of power and technological autonomy]

Artificial intelligence has become a constant presence in our lives—often silent, yet increasingly pervasive. From the suggestions on our smartphones to medical diagnoses powered by sophisticated algorithms, from self-driving cars to systems managing critical infrastructure, intelligent machines are weaving a dense web around our daily routines. This ever-growing presence raises a fundamental question—one that goes far beyond mere technological curiosity: who is truly in control of this unstoppable force? Who holds the reins of these artificial minds that are shaping our present and, most likely, defining our future?

Who Really Controls AI?

The answer to this question is neither simple nor unambiguous. At first, we might be tempted to point to the developers, the engineers who design and program these complex systems. Their technical expertise is certainly essential to the creation of AI. As discussed in our article on what artificial intelligence actually is, they write the lines of code, feed the algorithms with huge amounts of data, and define the neural architectures that enable machines to learn and evolve.

However, once an artificial intelligence system is released into the world, the dynamics of control become far more nuanced and intricate.

The Problem of Algorithmic Bias: When AI Inherits Our Prejudices

Think, for example, of the large language models that power chatbots and virtual assistants. They are trained on giant amounts of text and code from the internet, a veritable mine of heterogeneous and often unfiltered information. In this machine learning process, the algorithm identifies patterns, makes connections, and develops its own "understanding" of language. But this understanding is inevitably shaped by the data on which it was trained.

If that data contains distortions or biases, implicit or explicit, the AI model may reproduce and even amplify them. As shown in this article on algorithmic bias, this phenomenon represents a crucial challenge for the control of AI. This is not malice on the part of the programmers, but a pitfall inherent in learning from data that is imperfect.

The problem of biased data is particularly delicate in sensitive areas such as face recognition or the risk-assessment systems used in judicial and credit contexts. If the training data are not representative of all segments of the population, the algorithm may perform significantly differently depending on ethnicity, gender, or other protected characteristics, leading to forms of algorithmic discrimination that are entirely unintentional but no less harmful (Fondazione Patrizio Paoletti; Digital Agenda).

An emblematic example is the COMPAS algorithm, used in some US justice systems to predict a defendant's likelihood of reoffending. The problem with COMPAS is that the software showed a strong bias: its false-positive rate for reoffending was roughly double for Black defendants (about 45%) compared with white defendants (about 23%) (Algorithmic bias: artificial intelligence stumbles over prejudice).
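The disparity at issue in the COMPAS case is a difference in false-positive rates between groups. A minimal sketch of how that metric is computed, using invented toy data (not the actual COMPAS figures):

```python
# Illustrative sketch, not the actual COMPAS data: measuring the
# false-positive rate per group, the disparity behind the COMPAS critique.

def false_positive_rate(predictions, outcomes):
    """Share of non-reoffenders who were nonetheless flagged as high risk."""
    false_pos = sum(1 for p, o in zip(predictions, outcomes) if p and not o)
    actual_neg = sum(1 for o in outcomes if not o)
    return false_pos / actual_neg if actual_neg else 0.0

# Each tuple: (predicted high risk, actually reoffended) -- invented numbers
group_a = [(True, False), (True, False), (False, False), (True, True), (False, False)]
group_b = [(True, False), (False, False), (False, False), (True, True), (False, False)]

for name, group in (("A", group_a), ("B", group_b)):
    preds, outs = zip(*group)
    print(f"Group {name} false-positive rate: {false_positive_rate(preds, outs):.0%}")
```

With these toy numbers, group A's false-positive rate comes out at twice group B's, the same shape of disparity ProPublica reported for COMPAS.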

The Role of Large Technology Companies

Another fundamental aspect to consider is the role of the companies and organizations that develop and deploy artificial intelligence. They define the goals, choose the training data, and decide how and where these technologies will be used. Market logic, economic interests, and business strategies play a crucial role in shaping the development and diffusion of AI.

As discussed in our study on AI and surveillance, this concentration of power raises important questions about the transparency and accountability of AI systems. How can we ensure that the decisions made by ever more sophisticated algorithms are fair, ethical, and in line with democratic values? Who is responsible when an artificial intelligence system makes a mistake or causes harm?

The Challenge of the “Black Box”


The inherent complexity of many AI models, the so-called "black boxes" whose inner workings are difficult to interpret even for experts, makes it even harder to assign responsibility and exercise effective control. This algorithmic opacity is one of the central questions in the ethics of artificial intelligence, where transparency and comprehensibility become fundamental requirements for responsible technology.

The Regulatory Framework: Towards Regulatory Control

The issue of control over AI does not concern only developers or big tech companies. It is a challenge facing society as a whole, and it has led lawmakers around the world to take action with increasingly structured regulatory frameworks.

The European AI Act: A Regulatory Model

The AI Act (Regulation (EU) 2024/1689) represents the first comprehensive legal framework on AI at the global level. The goal of the rules is to promote trustworthy AI in Europe (The AI Act | Shaping Europe's digital future). This legislative instrument establishes a clear set of risk-based rules for AI developers and deployers with respect to specific uses of AI.

Since 2 February 2025, the provisions of the AI Act concerning systems that pose an unacceptable risk, and those on AI literacy, have been in force (AI Act: in force from 2 February for at-risk systems and training). The European regulation is a concrete attempt to "put a leash on" AI through a risk-based approach, ranging from systems that are completely prohibited to those subject to strict controls.

Sanctions and Liability

The effectiveness of this regulatory control rests on a system of significant sanctions. Penalties range from €7.5 million, or 1.5% of global annual turnover, up to €35 million, or 7% of global annual turnover, depending on the type of compliance violation (What is the EU AI Act? | IBM).
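For the most serious violations, the Act's top tier applies whichever is higher: the fixed amount or the turnover percentage. A simplified sketch of that arithmetic (the Act defines several tiers and special rules that this ignores):

```python
# Hedged sketch of the AI Act's top penalty tier as described above:
# the higher of €35 million or 7% of global annual turnover.
# Simplified; not legal advice and not the full tier structure.

def max_penalty(annual_turnover_eur: float) -> float:
    FIXED_CAP = 35_000_000   # €35 million
    TURNOVER_SHARE = 0.07    # 7% of global annual turnover
    return max(FIXED_CAP, TURNOVER_SHARE * annual_turnover_eur)

print(f"€{max_penalty(2_000_000_000):,.0f}")  # 7% of €2bn → €140,000,000
```

For a company with €2 billion in turnover, the percentage dominates; for a small firm below €500 million, the €35 million floor would apply under this top tier.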

Self-Regulatory Initiatives and Transparency

In parallel with regulatory efforts, self-regulation initiatives have emerged from industry and the scientific community. The Partnership on AI (PAI) is an independent non-profit 501(c)(3) organization originally established by a coalition of representatives from technology companies, civil society organisations, and academic institutions, supported by multi-year grants from Apple, Amazon, Meta, Google/DeepMind, IBM, and Microsoft (Partnership on AI, Wikipedia).

PAI develops tools, recommendations, and other resources, inviting input from across the AI community, and shares insights that can be distilled into actionable guidelines. It then works to promote their adoption in practice, inform public policy, and advance public understanding (About – Partnership on AI).

The Current Challenges of Control

Development Speed vs. Control Capacity

One of the main problems in controlling AI is the time gap between technological development and our capacity to understand and regulate it. As highlighted in our article on ChatGPT and the future of communication, the evolution of AI systems progresses at a dizzying pace, often outstripping institutions' ability to adapt their regulatory and control frameworks.

The Globalization of AI

The global character of AI development poses further challenges to control. While Europe developed the AI Act, other countries and regions are adopting different approaches, creating potential conflicts of law and opportunities for "regulatory arbitrage" by technology companies.

Towards a More Effective Control

Investment in Interpretable AI

It is crucial to invest in the research and development of "interpretable" and "transparent" AI: systems whose decision-making is not an unfathomable mystery but can be understood and verified. Only through greater understanding will we be able to exercise more effective control and build solid trust in these technologies.

Education and Public Awareness

As pointed out in our article on 5 AI tools for beginners, it is essential to promote a culture of algorithmic accountability, in which those who design and use AI are aware of its potential ethical and social implications.

Multi-Stakeholder Collaboration

Without intentional coordination, we risk creating a fragmented landscape in which AI developers and deployers are unclear about best practices for safe and responsible AI (New Report from the Partnership on AI Aims to Advance Global Policy Alignment on AI Transparency). The challenge of controlling AI requires collaboration among governments, businesses, researchers, and civil society.

Frequently Asked Questions

Who really controls artificial intelligence today?

Control of AI is distributed among different actors: the developers and technology companies that create the systems, the governments that regulate them, and the users who employ them. No single entity fully controls AI, which makes its governance difficult and complex.

Is the European AI Act sufficient to control AI?

The AI Act represents an important step, but not the final one. It is the first complete regulatory framework of its kind in the world, but its effectiveness will depend on its implementation and on future technological evolution. It could serve as a model for other global regulations.

How can we prevent bias in artificial intelligence?

Preventing bias requires a multidimensional approach: more representative training data, diverse development teams, rigorous testing, and tools for continuous monitoring of systems in production. Transparency in algorithms is crucial.

What happens if an AI system causes harm?

Responsibilities vary depending on the jurisdiction and the type of system. The European AI Act establishes specific responsibilities for AI providers and deployers, with penalties that can reach 7% of global annual turnover.

Is total control of AI possible?

Total control is probably impossible, and it may not even be desirable, as it could stifle innovation. The goal should be effective control that balances safety, ethics, and technological progress.

Conclusions: A Delicate Balance

The power of artificial intelligence is undeniable, and its potential to improve our lives is immense. However, with this power comes great responsibility. The question of who controls intelligent machines is not only a technical issue, but an ethical, social, and political challenge that defines the kind of future we want to build.

The answer lies neither in harnessing AI completely, nor in letting it develop without controls. As shown by regulatory initiatives such as the AI Act and by partnerships such as the Partnership on AI, the best way forward seems to be distributed, transparent, adaptive governance that involves all stakeholders in society.

Ensuring that this power is exercised responsibly, transparently, and in the service of the common good is an urgent and essential task for the whole of humanity. The point is not to stop progress, but to guide it with wisdom and foresight, keeping the compass of basic human values firmly in our hands.

The challenge of controlling AI will continue to evolve with the technology itself. What remains constant is the need for vigilance, dialogue, and collective commitment to ensure that artificial intelligence remains a tool in the service of humanity, and not the other way around.
