
August 1, 2024 marks an important date for businesses and digital professionals in France, with the entry into force of the Artificial Intelligence Regulation (RIA, from the French Règlement sur l'Intelligence Artificielle).

This regulation, adopted by the European Union, aims to regulate the use of artificial intelligence (AI) in commercial, industrial and social activities, in order to guarantee safety, ethics and transparency in the use of these technologies.

In this article, we decipher these new regulations for you, share everything you need to know, and suggest the best practices to adopt to ensure your compliance.

What is RIA?

The Artificial Intelligence Regulation (RIA) is a legislative framework drawn up by the European Union to regulate the development and application of AI. This regulation is part of a broader drive to make Europe a world leader in artificial intelligence, while protecting citizens' fundamental rights.

The RIA is based on a risk-based approach, categorizing AI systems into four levels of risk: unacceptable, high, limited, and minimal. Each category is subject to specific rules, designed to ensure a balance between technological innovation and the protection of human rights.


What are the RIA's objectives?

  • Harmonize the rules for the marketing, commissioning and use of AI systems in the EU
  • Ensure that AI systems are safe, transparent, traceable, non-discriminatory and environmentally friendly

Why is the RIA important?

Artificial intelligence is at the heart of digital transformation. However, without adequate regulation, it can present significant risks, particularly in terms of discrimination, privacy violations and security. The RIA therefore responds to the need to establish a clear framework for the use of AI, guaranteeing that its development is carried out ethically and transparently.

This regulation is crucial for:

  • Ensuring the safety of AI systems: by imposing rigorous standards on applications deemed high-risk.
  • Protecting fundamental rights: by preventing discrimination and ensuring the transparency of AI systems, particularly those used in sensitive areas such as justice, employment and public services.
  • Encouraging innovation: by providing companies with a clear framework that facilitates the integration of AI into their processes while ensuring compliance.

Key points to remember

  1. Prohibition of practices deemed unacceptable because of their potential to violate fundamental rights (social scoring, real-time biometric identification, etc.).
  2. Regulation of high-risk AI systems: AI systems used in critical sectors, such as healthcare, education, and justice, are subject to strict requirements in terms of transparency, risk management, and human control.
  3. Obligations for AI suppliers: Companies developing AI systems must ensure that their products comply with European standards, including data traceability, technical documentation and conformity assessments.
  4. Transparency and information: End-users need to be informed when they interact with an AI, particularly in the case of chatbots or content-generating AIs.
  5. Encouraging responsible innovation: The regulation introduces regulatory sandboxes to offer companies a secure, supervised environment in which to test innovative AI solutions. These sandboxes allow, among other things, the re-use of personal data, including sensitive data, when this serves a major public interest (e.g. improving the healthcare system).

Who checks its application?

There are two levels of governance:

At European level:

  • AI Office (European Commission institution)
  • European AI Committee (representatives from each member state)

At national level: France has 12 months to designate one or more control authorities.

What are the penalties?

In the event of non-compliance, the RIA provides for market withdrawal or product recall, as well as administrative fines of up to 35 million euros or 7% of worldwide annual sales.
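
To make the order of magnitude concrete, here is a minimal sketch of that ceiling in Python, assuming the applicable cap is the higher of the fixed amount and the percentage of turnover; the function name and the turnover figure in the example are purely illustrative.

```python
def ria_fine_cap(worldwide_annual_sales_eur: float) -> float:
    """Maximum administrative fine under the RIA for the most serious breaches:
    35 million euros or 7% of worldwide annual sales, assumed here to mean
    whichever amount is higher."""
    fixed_cap = 35_000_000
    turnover_cap = 0.07 * worldwide_annual_sales_eur
    return max(fixed_cap, turnover_cap)


# Hypothetical example: a company with 1 billion euros in worldwide annual
# sales faces a ceiling of 70 million euros, since 7% of its turnover
# exceeds the 35 million euro floor.
print(ria_fine_cap(1_000_000_000))  # 70000000.0
```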

Does the RIA replace the GDPR?

NO: The GDPR concerns any processing of personal data, whereas the RIA concerns the development, marketing and deployment of AI. 

Differences between RIA and GDPR

Although the RIA and the General Data Protection Regulation (GDPR) are both European legal frameworks, they regulate different aspects of the digital sphere.

  • Effective date: RIA on August 1, 2024; GDPR on May 25, 2018.
  • Scope of application: the RIA covers the development, marketing and deployment of AI; the GDPR covers any processing of personal data (including for the purpose of developing or training an AI).
  • Approach: the RIA takes a risk-based approach (unacceptable, high, limited and minimal risk); the GDPR is based on the application of key principles, risk assessment and accountability.
  • Target players: the RIA mainly targets providers and deployers of AI systems; the GDPR targets data controllers and processors.
  • Conformity assessment: under the RIA, internal or third-party assessment through a risk management system; under the GDPR, the accountability principle (in-house legal documentation) and compliance tools.
  • Control authority: not yet defined for the RIA; the CNIL (Commission nationale de l'informatique et des libertés) for the GDPR.
  • Sanctions: under the RIA, market withdrawal or product recall, plus administrative fines of up to 35 million euros or 7% of worldwide annual sales; under the GDPR, formal notice, plus administrative fines of up to 20 million euros or 4% of worldwide annual sales.

How does the RIA fit in with the GDPR?

🔍 Here are four possible scenarios:

1️⃣ RIA only: The RIA alone applies when, for example, a high-risk AI system is used without involving the processing of personal data.

2️⃣ GDPR only: The GDPR applies in situations where personal data is processed, but the AI system used is not subject to the requirements of the RIA. 

3️⃣ RIA and GDPR combined: The two regulations apply in conjunction when, for example, a high-risk AI system requires personal data for its development or implementation. 

4️⃣ Neither applies: Neither the RIA nor the GDPR applies in cases where an AI system presents minimal risk and does not process personal data.
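
As a rough illustration only, the four scenarios above boil down to two questions. The sketch below encodes them as a simple decision rule; the function and parameter names are hypothetical and this is not a legal test.

```python
def applicable_regulations(in_ria_scope: bool, processes_personal_data: bool) -> str:
    """Map the two questions above to the four scenarios described in this article.

    in_ria_scope: the AI system falls under the RIA's requirements
        (for example, it is classified as high-risk).
    processes_personal_data: the system involves processing of personal data,
        which brings the GDPR into play.
    """
    if in_ria_scope and processes_personal_data:
        return "RIA and GDPR combined"
    if in_ria_scope:
        return "RIA only"
    if processes_personal_data:
        return "GDPR only"
    return "Neither applies"


# Hypothetical example: a high-risk AI system developed with personal data.
print(applicable_regulations(True, True))  # RIA and GDPR combined
```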

What does this mean for your company?

The coming into force of the RIA requires companies to review their AI processes and systems to ensure compliance with the new regulations. 

For you as a company, it's an opportunity to strengthen your credibility and competitiveness in the marketplace by adopting practices that comply with European expectations. 

Here are 4 best practices to implement right now: 

  • Audit and evaluate your AI systems: Companies need to identify the AI systems in use, assess their level of risk, and implement the necessary corrective measures.
  • Update your internal policies: It's essential to develop clear internal policies regarding the use of AI, ensuring that all practices comply with RIA requirements.
  • Train your teams: Employees must be trained in the new AI obligations and best practices to avoid any unintentional breaches of the regulations.
  • Collaborate with experts: Companies can benefit from the expertise of external DPOs or specialist firms to navigate the complexities of RIA and ensure ongoing compliance.

Dipeeo support

As an external DPO, Dipeeo offers its clients:

  • An AI charter, which provides a framework for the use of AI and raises awareness among your employees.
  • Tailor-made advice 

Conclusion

The Artificial Intelligence Regulation (RIA) represents a turning point for the AI industry in Europe, imposing high standards in terms of safety and respect for human rights. Its entry into force on August 1, 2024 is a key step towards a more responsible and ethical use of artificial intelligence.

For companies, this means not only a need to adapt quickly, but also an opportunity to strengthen their credibility and competitiveness in the marketplace by adopting practices that comply with European expectations. The future of AI is promising, but it's up to economic players to embrace it safely and responsibly.

Anaïs GUILLOTON