Europe is moving toward comprehensive legislation regulating artificial intelligence (AI) across industry sectors. This is an essential step: harmonising AI regulations allows an AI company to operate across all 27 member states as it would in any single one.

In February 2020, the European Commission published a White Paper on AI, proposing a European regulatory framework for trustworthy AI. In October 2020, the European Parliament adopted three resolutions on AI covering ethics, civil liability, and intellectual property (IP). It also asked the Commission to establish a legal framework of ethical principles for developing, deploying, and using AI, robotics, and related technologies. A draft of this regulatory framework was ready last April; it was updated and adopted on 22 March 2022.


The original draft proposal, the Artificial Intelligence Act (AIA), takes a tiered, risk-based approach toward AI uses, defining four risk levels and providing the following recommendations:

  • Unacceptable risk AIs are harmful AIs that violate EU values (such as social scoring by governments). These should be banned because of the unacceptable risks they create.
  • High-risk AIs are AI systems that adversely impact people’s safety or their fundamental rights. To ensure trust and protect safety and fundamental rights, a range of mandatory requirements (including a conformity assessment) should apply to all high-risk systems.
  • Limited risk AIs are AI systems subject to a limited set of obligations (e.g. transparency).
  • Minimal risk AIs are all other AI systems. These can be developed and used in the EU without legal obligations beyond existing legislation.


Overall, the recommendations apply to products, including those in financial services, medical devices, machinery, and toys, as well as to the following areas:

  1. Biometric identification and categorization of natural persons
  2. Management and operation of critical infrastructure
  3. Education and vocational training
  4. Employment, workers management, and access to self-employment
  5. Access to and enjoyment of essential private services, public services, and benefits
  6. Law enforcement
  7. Migration, asylum, and border control management
  8. Administration of justice and democratic processes

The report also supports establishing regulatory sandboxes for developing innovative AI systems and business models under regulatory oversight.

The proposal’s next step is discussion by the co-legislators: the European Parliament and the Council (the EU Member States).


Sources: CSIS, EU Legislative Train Schedule, European Commission

