What is the EU AI Act regulation, and when will it come into force?


Artificial Intelligence (AI) has been sparking discussions worldwide for a few years now. It has positioned itself at the crossroads of innovation and ethical dilemmas.

The end of 2022 and the whole of 2023 have been full of AI topics due to the rise of generative AI systems. OpenAI released ChatGPT in November 2022; in 2023, Google launched Bard and Meta introduced Toolformer.

All these generative AI launches caused massive interest in and use of Artificial Intelligence, even among users who, until then, had never been interested in related topics. Many of them finally noticed that the future is now and that AI systems are powerful enough to take over certain tasks from humans.

In fact, AI is intertwining with our daily lives and with critical infrastructure across organizations at an accelerating pace. Many individuals welcome AI development with open arms. Nonetheless, others are increasingly concerned about where AI growth is heading.

As a result, the need for trustworthy AI, including explainable AI, and, consequently, for comprehensive and balanced regulation of AI systems is growing. Such new rules should ensure not only technological advancement but also societal well-being and trust.

In fact, the European Union (EU) started working on AI regulation in 2021, when the European Commission proposed a set of core principles known as the Artificial Intelligence Act (AI Act). However, it was not until June 2023, when the European Parliament approved its negotiating position on the proposed Act, that everyone started to talk about this regulation again.

Let’s check:

  • what the EU AI Act is,
  • what the AI Act contains,
  • when the AI Act will come into force,
  • what impact the AI Act will have on the research and development of new technologies.

 

What is the EU AI Act?

Starting with the basics, the EU AI Act is part of the EU’s digital strategy to regulate AI and ensure better conditions for the development and use of this innovative technology.

The European Union is aware that AI provides stakeholders with multiple benefits, among others:

  • improved healthcare,
  • safer and cleaner transport,
  • more effective manufacturing,
  • cheaper and more sustainable energy.

 

The first European legal framework for Artificial Intelligence was proposed by the European Commission as early as April 2021. It stated that AI systems can be used in diverse applications but should be analyzed and classified according to the risk they pose to users, with different risk levels translating into different degrees of regulation.

In December 2022, the European Council adopted its general approach to the AI Act. Its aim is to make sure that AI systems placed on the EU internal market and used in the Union are safe and respect existing law on fundamental rights and Union values.

On June 14, 2023, the European Parliament voted in favor of the draft AI Act, opening the way for negotiations with the Council and the Commission on the final text. This makes it the world’s first comprehensive AI legislation. As a result, Europe has outpaced the USA in this respect, even though the US Congress has also clearly raised the need to regulate AI.

With this law, the European Union wants to make sure that AI systems used in the EU are transparent, safe, non-discriminatory, and environmentally friendly. What is more, it intends to provide AI developers, deployers, and users with clear requirements and obligations regarding specific uses of AI systems.

What does the AI Act contain?

The draft rules of the Artificial Intelligence Act establish obligations for providers and users of AI systems depending on the level of risk their use may create. As a result, all Artificial Intelligence systems need to be properly assessed and classified, even those posing only minimal risk.

The European approach, as proposed in the AI Act, classifies AI systems into four risk categories:

  • unacceptable risk,
  • high risk,
  • limited risk,
  • minimal risk.

 

Figure: classification of risks under the EU AI Act. Source: Regulatory framework proposal on artificial intelligence | Shaping Europe’s digital future (europa.eu)
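
To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how these four tiers could be modeled. The tier names come from the Act, but the example use cases and their mapping are hypothetical and do not constitute a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined in the draft EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity assessment and registration
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional legal obligations

# Hypothetical example use cases mapped to tiers -- for illustration only;
# a real classification follows the Act's definitions and annexes.
EXAMPLES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.value} risk")
```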

Unacceptable risk

AI systems classified as posing an unacceptable risk are considered a clear threat to the safety, livelihoods, and rights of people and will be banned. Such systems entail:

  • cognitive behavioral manipulation of people or specific vulnerable groups (for example, people with mental or physical disabilities), like voice-activated toys encouraging dangerous behavior in children,
  • social scoring, which is the classification of people based on behavior, socio-economic status, or personal sensitive characteristics,
  • real-time and remote biometric identification systems in publicly accessible spaces for law enforcement purposes.

 

It is worth mentioning, though, that there are some exceptions. These are situations where such systems search for potential victims of crime, including missing children; prevent a specific, substantial, and imminent threat to the life or physical safety of persons, or a terrorist attack; or detect, identify, or prosecute a perpetrator or individual suspected of a criminal offense.

High risk

The high-risk category refers to AI systems that may negatively affect the safety or fundamental rights of people. The draft AI Act distinguishes two categories of high-risk AI applications:

  • AI systems used as a safety element of a product or falling under EU health and safety harmonization legislation (for example, toys, medical devices, lifts),
  • AI systems falling into eight specific areas, which will have to be registered in an EU database managed by the Commission:
    • Biometric identification and categorization of natural persons (for example, facial recognition and systems that create facial recognition databases),
    • Management and operation of critical infrastructure,
    • Education and vocational training,
    • Employment, worker management, and access to self-employment,
    • Access to and enjoyment of essential private services and public services and benefits,
    • Law enforcement,
    • Migration, asylum, and border management,
    • Administration of justice and democratic processes.

 

All AI systems classified as high-risk will be assessed before being placed on the market or put into service. What is more, they will have to comply with a range of requirements in areas such as:

  • risk management,
  • testing,
  • technical robustness,
  • training data and data governance,
  • transparency,
  • human oversight,
  • cybersecurity.

 

As a result, providers and users of such high-risk AI systems will have to fulfill a certain range of obligations. Providers established outside the EU will have to appoint an authorized representative in the EU to ensure the conformity assessment is carried out, establish a post-market monitoring system, and take corrective actions when needed.
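
As a rough illustration of what tracking these obligations might look like internally, here is a hypothetical sketch of a pre-market compliance checklist. The requirement areas mirror the list above, but the data structure and method names are invented for this example and do not reflect any official tooling.

```python
from dataclasses import dataclass, field

# Requirement areas for high-risk systems, mirroring the list above.
HIGH_RISK_REQUIREMENTS = {
    "risk management",
    "testing",
    "technical robustness",
    "training data and data governance",
    "transparency",
    "human oversight",
    "cybersecurity",
}

@dataclass
class ConformityChecklist:
    """Hypothetical tracker for a provider's pre-market assessment."""
    system_name: str
    completed: set = field(default_factory=set)

    def mark_done(self, requirement: str) -> None:
        if requirement not in HIGH_RISK_REQUIREMENTS:
            raise ValueError(f"Unknown requirement area: {requirement}")
        self.completed.add(requirement)

    def ready_for_market(self) -> bool:
        # In this sketch, "ready" means every requirement area is covered.
        return self.completed == HIGH_RISK_REQUIREMENTS

checklist = ConformityChecklist("cv-screening-tool")
checklist.mark_done("risk management")
print(checklist.ready_for_market())  # False -- most areas are still open
```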


Limited risk

The limited-risk category covers AI systems that are subject primarily to transparency obligations, namely systems that involve:

  • interaction with people (for example, chatbots),
  • emotion recognition systems,
  • biometric categorization,
  • image, audio, or video content generation (for example, deepfakes).
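
In practice, the main obligation for these systems is transparency: users should know when they are interacting with an AI system or viewing AI-generated content. Here is a minimal, hypothetical sketch of such a disclosure in a chatbot; the wording and mechanism are illustrative assumptions, as the Act leaves the implementation to the provider.

```python
def wrap_chatbot_reply(reply: str) -> str:
    """Prefix a chatbot reply with an AI disclosure notice.

    A hypothetical illustration of the transparency obligation:
    the exact wording and mechanism are up to the provider.
    """
    disclosure = "You are chatting with an AI system."
    return f"[{disclosure}]\n{reply}"

print(wrap_chatbot_reply("Our store opens at 9 a.m."))
```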

 

Minimal or low risk

Finally, all other AI systems that present only low or minimal risk can be developed and used in the European Union without additional legal obligations. Nevertheless, the AI Act foresees the creation of codes of conduct to encourage providers of non-high-risk AI systems to voluntarily apply the requirements and transparency obligations that are mandatory for high-risk AI systems.

In fact, many AI systems, or even the majority, currently fall into the low-risk category. These are, for example, spam filters or inventory-management systems.

Figure: EU AI Act process steps. Source: Regulatory framework proposal on artificial intelligence | Shaping Europe’s digital future (europa.eu)

When will the AI Act come into force?

The EU decision-making process involves three EU institutions: the European Commission, the Council, and the Parliament. To introduce the AI Act, they need to produce an agreed-upon version of the text. As of now, the AI Act is a draft, not a final version.

The initiative was started by the European Commission in April 2021 and followed by the European Council in December 2022. Finally, the European Parliament carved out its institutional position, voting in favor of it on June 14, 2023.

As a result, the AI Act draft is ready. Talks with the EU countries in the Council on the final form of the law will now follow. Political agreement is expected in late 2023, and the AI Act is expected to be finalized in early 2024. Accounting for the 18–24-month transition period, the AI Act would then come into effect in late 2025 or early 2026.

Summing up the timeline:

  • April 2021: the European Commission presents its proposal for the EU AI Act,
  • December 2022: the European Council adopts a general approach to the EU AI Act,
  • June 2023: European Parliament members adopt their negotiating position on the EU AI Act,
  • Late 2023: reaching political agreement on the EU AI Act,
  • Early 2024: finalization of the EU AI Act,
  • Late 2025 or early 2026: the EU AI Act coming into force.

 

What impact will the AI Act have on the research and development of new technologies?

Although there is still a lot of time before the AI Act comes into force, there are already lively discussions on how it will impact the research and development of new technologies.

Numerous individuals fear that it will hinder the progress of AI systems. Nevertheless, it is rather likely to attract investment in Artificial Intelligence research and development. Such regulation can contribute to:

  • high-quality job creation,
  • technological innovation,
  • further economic growth.

 

The Act represents a major step forward in the regulation of AI technology and will have far-reaching economic and financial implications.

Nevertheless, the member states’ positions differ, reflecting their diverse economic interests and concerns about the implementation of the AI Act.

Different perspectives of countries on the EU AI Act

Germany considers the Act essential to prevent fragmentation of the single market. It also believes the Act ensures the ethical use of AI systems and has the potential to boost economic competitiveness.

France believes the Act can help protect European values and maintain a competitive edge in Artificial Intelligence tools, generating economic growth.

Eastern European countries, including Poland, express concerns about excessive AI regulation that might hinder AI innovation. As a result, they call for a more flexible approach, especially for startups and small businesses.

The Netherlands highlights the need for clear rules and enforcement mechanisms, especially in areas like healthcare and transportation, where AI systems have huge societal and financial implications.

Finally, Sweden hopes that the EU Artificial Intelligence Act won’t slow down the development of AI technologies, especially in the context of continuous digital transformation and economic growth.

Such different angles emphasize the complexity of reaching a universal consensus among the European Union member states. Each of them has its own set of economic priorities and goals, as well as concerns related to AI systems, and it will not be easy to find common ground in this matter.

Responsible AI is an important topic

The topic of responsible AI was at the core of the ALLAI conference in Amsterdam in October 2023, which we attended. The conference inaugurated WorldAIWeek 2023 and covered topics such as the state of play of responsible AI, the legal interpretation of the AI Act, and psychological patterns in AI.

All in all, along with the development of Artificial Intelligence, the need for its regulation is growing at an accelerating pace. The AI Act is the world’s first official attempt to impose a regulatory framework on AI systems, one that aims to ensure not only technological growth but also societal well-being and trust, while preventing intrusive and discriminatory uses of AI. As of now, the AI Act is in draft form and will not come into force sooner than 2025, but it is already sparking a lot of emotions and discussions worldwide.
