A European Ambition to Regulate AI

On May 21, 2024, the European Union officially adopted the AI Act, a law aimed at regulating artificial intelligence within its territory.

Originally proposed by the European Commission on April 21, 2021, and adopted by the European Parliament in March 2024, the text was substantially revised to account for the rapid rise of generative AI systems such as ChatGPT, launched at the end of 2022.

This legislation harmonizes existing AI regulations and reflects the EU’s determination to establish a robust legal framework for the development of reliable, ethical, and secure AI systems.

It is the first law in the world specifically dedicated to artificial intelligence, aiming to ensure the strict protection of European citizens’ fundamental rights while promoting innovation. The European Union thus aspires to become a leader in ethical and regulated AI.

Companies and Organizations Affected

The new legislation applies to all legal entities—whether companies, associations, or public administrations—that provide, distribute, or deploy AI systems or models.

This also includes organizations located outside the EU if their services are used or distributed within the Union.

Entities planning to introduce or deploy high-risk AI systems in the EU are particularly concerned.

Likewise, providers from third countries whose high-risk AI systems are used in Europe must comply with this law.

An AI system is defined as a machine-based system "designed to operate with varying levels of autonomy," that "may exhibit adaptiveness after deployment" and that, "for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

Obligations for Companies and Startups

Companies using artificial intelligence will have to comply with new obligations, which vary depending on the level of risk associated with their AI systems. For high-risk systems, providers must notably:

  • Obtain CE marking.
  • Register the AI system in the EU database.
  • Implement a risk management system.
  • Ensure data governance to avoid bias in datasets.
  • Provide detailed technical documentation.
  • Establish a quality management system to ensure compliance throughout the product lifecycle.
  • Draft a declaration of conformity.
  • Ensure traceability and transparency of the AI system.
  • Guarantee human oversight of the system.
  • Ensure the model’s robustness, accuracy, and cybersecurity.

Categories of AI Systems by Risk Level

The AI Act introduces a classification of AI systems based on the level of risk they pose:

Unacceptable Risk:

Systems considered a threat to individuals, such as manipulation of vulnerable people (e.g., interactive toys encouraging children to engage in dangerous behavior), social scoring, or real-time remote biometric identification. These systems are banned in the EU.

High Risk:

Systems that pose risks to health, safety, or fundamental rights.

This includes systems used in critical infrastructure (transportation, energy), public and private services (healthcare, finance), education, employment, or justice.

Exceptions are provided for specific situations, such as locating missing persons.

Limited Risk:

Systems such as chatbots, personalized recommendations, or deepfakes.

They are subject to transparency obligations: users must be informed that they are interacting with an AI system, and AI-generated content such as deepfakes must be labeled as such.

Minimal Risk:

Systems such as spam filters or AI in video games. No specific obligations, but providers are encouraged to adopt codes of conduct.
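The four risk tiers above can be sketched as a simple lookup. This is purely an illustrative model: the `RiskLevel` enum, the `USE_CASE_RISK` mapping, and the example use cases are hypothetical and paraphrase the article's examples, not the legal text of the Act.

```python
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four risk tiers, as summarized in this article."""
    UNACCEPTABLE = "banned in the EU"
    HIGH = "strict obligations (CE marking, EU database, audits)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations; codes of conduct encouraged"

# Illustrative mapping from use case to tier, following the examples above.
# A real classification requires legal analysis against the Act's annexes.
USE_CASE_RISK = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "real-time remote biometric identification": RiskLevel.UNACCEPTABLE,
    "credit scoring (finance)": RiskLevel.HIGH,
    "recruitment screening (employment)": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def classify(use_case: str) -> RiskLevel:
    """Return the risk tier for a known use case, defaulting to MINIMAL."""
    return USE_CASE_RISK.get(use_case, RiskLevel.MINIMAL)

print(classify("spam filter").name)     # MINIMAL
print(classify("social scoring").name)  # UNACCEPTABLE
```

The point of the sketch is the structure, not the mappings: obligations attach to the tier, so mapping a system to the wrong tier changes the entire compliance burden.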

Preparing for the AI Act: Steps for Businesses

To anticipate the entry into force of the AI Act, companies can take several measures:

Strengthen AI Governance:

Appoint an AI officer, who could be the data protection officer, responsible for ensuring compliance with the new legislation. Create internal and external working groups, establish regulatory monitoring, and participate in public consultations.

Map AI Systems:

Assess existing systems to determine their risk level and compliance, and regularly conduct tests and audits.

Train and Raise Awareness Among Teams:

Inform employees about the implications of the AI Act, the risks associated with using AI (bias, discrimination), and links with other regulations such as the GDPR.

Question Providers:

Before CE marking becomes mandatory, request information from providers on the composition of their products, the data used to train algorithms, and personal data protection measures.

Turn Regulation Into a Competitive Advantage:

By integrating the AI Act’s guidelines early on, companies can position themselves as trusted players in the global market for ethical and responsible AI.

Impact on Startup Innovation and Competitiveness

The AI Act also aims to boost innovation among SMEs and startups.

By offering a clear regulatory framework, it encourages the development of new AI systems.

“Regulatory sandboxes” will allow companies to test and train their models before commercialization, without immediately having to meet all regulatory constraints.

These controlled environments will be supervised by competent authorities appointed by the Member States.

European and French Investments in AI

Alongside the implementation of the AI Act, the European Union is strengthening its investments in artificial intelligence to increase its global competitiveness. The European Commission proposes allocating one billion euros per year to AI projects under dedicated programs. The goal is to reach €20 billion in annual public and private investments.

In France, the government has announced similar ambitions. On May 21, the French President announced an additional investment of €400 million for the training of AI specialists, aiming to train 100,000 people per year, as well as the creation of a new investment fund by the end of 2024.

AI is a strategic priority, with €2.5 billion allocated under a national program. Significant fundraising efforts, backed by renowned investors and public bodies, reflect this growing momentum.

Conclusion

The adoption of the AI Act marks a decisive step in the global regulation of artificial intelligence. By establishing a clear legal framework, the European Union aims to ensure the development of ethical AI that respects fundamental rights, while boosting innovation and the competitiveness of European businesses. Organizations would do well to prepare now for this new regulation to take full advantage of it and strengthen their market position.

About the Author

Assouan Bougherara

Senior Legal and R&D Manager at Smart Global Governance
