Artificial Intelligence (“AI”) software and self-learning systems have been predicted to be at the heart of future threats against humanity, whether it’s the existential enemy in Terminator movies or undesirable behavioural controls seen in Netflix’s Black Mirror. Encouraged by the success of GDPR, the European Commission (EC) has now released its proposed Regulation on AI use in the European Union (EU). It seeks to encourage more AI use – within a safer regulated environment.
The Regulation would have extra-territorial effect and wide-ranging implications for any organisations developing, supplying or using AI applications (or the outputs of those applications) in the EU (irrespective of whether that organisation is established in the EU).
The EC considers that AI can bring a "wide array of economic and societal benefits across the entire spectrum of industries and social activities". By addressing the risks associated with some forms of AI and taking a "human centric" approach to regulation, the EC hopes to promote public trust in AI applications and, in turn, boost the uptake of such technology.
Who does the Regulation apply to?
The Regulation will generally apply to:
- providers of AI systems which are sold or made available in the EU, regardless of whether those providers are established within the EU;
- users of AI systems in the EU; and
- providers and users of AI systems located outside the EU, where the output produced by the system is used in the EU.
It is likely to have a flow-down impact on the use, development and supply of AI solutions internationally, in a similar way to GDPR’s global impact on privacy practices.
Some of the key rules contained in the Regulation include:
Ban on certain AI systems: The Regulation bans a small number of AI applications that are categorised as unsafe or which are considered to violate fundamental human rights. Notably, these include:
- the use of AI to deploy subliminal techniques to materially distort behaviour in a manner that can cause physical or psychological harm;
- any AI system that exploits any vulnerable group; and
- the use of real-time biometric ID systems in publicly accessible spaces for law enforcement purposes, except where broad exemptions apply (e.g. the targeted search for specific victims of crime).
"High-risk" AI systems: Certain applications of AI are categorised to be "high-risk", where both suppliers and users will have to comply with certain rules. These include systems for:
- biometric identification and categorisation of humans;
- management and operation of critical infrastructure (e.g. operation of road traffic and supply of utilities);
- education and vocational training (e.g. assessing participants in tests required for admission);
- employment recruitment practices and evaluating employee performance;
- assessing eligibility for benefits;
- law enforcement (e.g. to predict offending or reoffending);
- migration, asylum and border control management (e.g. conducting "lie detection" tests and making decisions regarding applications for asylum or residency); and
- administration of justice and democratic processes (e.g. assisting judicial decisions).
Providers of high-risk AI systems would need to comply with various requirements including:
- maintaining a risk management system for the AI system;
- implementing appropriate data governance and management practices;
- maintaining technical documentation to demonstrate Regulation compliance;
- ensuring transparency about the use of the AI system;
- maintaining human oversight (including "kill switch" functionality enabling human intervention via a "stop" button or similar procedure);
- registering the high-risk AI system on a newly established publicly accessible high-risk AI register managed by the EC.
The requirements above can also apply to importers, distributors and users of high-risk AI systems in certain circumstances.
Users of high-risk AI systems also need to comply with user-based rules and restrictions regarding AI system monitoring, the use of input data and the storing of logs automatically generated by the AI system.
Transparency requirements for general AI: AI systems intended to interact with humans – such as chatbots and deepfakes – must be designed and developed in a way that makes users aware that they are interacting with an AI system.
Penalties, administration and codes of conduct
Penalties: The proposed penalties for infringement of the rules regarding banned AI systems include fines of up to the greater of 6% of global annual turnover or €30 million. For non-compliance with the rules relating to high-risk AI systems, fines run up to the greater of 4% of global annual turnover or €20 million.
Governance and administration: The Regulation will also establish the European AI Board and processes for local oversight and management through EU Member State local authorities. The Regulation also provides for the establishment of a regulatory sandbox to facilitate the development and testing of compliant AI systems.
Codes of conduct: In addition to the rules set out above, the Regulation also requires the EC and EU Member States to encourage and facilitate the preparation of individual codes of conduct for AI systems that are not considered high-risk (e.g. in areas such as environmental sustainability and disability access). Codes of conduct may be drafted by providers of AI and other AI ecosystem participants.
The Regulation is likely to undergo some changes as feedback is received from the European Parliament and Member States.
The full text of the Regulation can be accessed here.
These wide-ranging rules will have far-reaching impacts. In Icon's sphere of operation, many might be surprised by how much value appropriate AI can add, and how widespread it already is in associated technical forms (e.g. machine learning). Banning some AI applications – including those used for mass surveillance and social credit scoring – and regulating others should help reassure consumers that these technologies may only be used for societal and individual good.
But is a one-size-fits-all approach over-simplistic? It may stifle certain forms of desirable innovation. For example, there is a huge difference between using biometric technologies to assure a person's identity through analysis of handwritten signatures, or as human-agent support using voice recognition for faster client servicing, versus the 'big brother' threats of mass surveillance using facial recognition or gait analysis (how one walks). The questions may be more about whether such AI can always be safe, whether it can be used without unacceptable bias, and who gets to decide.
However this Regulation is finally enacted into statute, it will certainly mean that many companies become even more regulated. Paradoxically, we are likely to see situations where compliance is itself supported by many different forms of AI systems! The AI Regulation will address the safety risks of AI systems performing safety functions in machinery, while the Machinery Regulation will, as applicable, ensure the safe integration of AI into the overall machinery, so as not to compromise the safety of the machinery as a whole.
'Trustworthy AI' – along with trustworthy robotics and parallel technologies – has to be applauded as a worthy goal. But what's good for companies should also be good for governments (many of which are exempt from the rules). The ensuing public debate will show whether citizens agree with the manner in which this regulatory text seeks to shape that trust and ultimately protect society's citizens.
This will inevitably be a fluid landscape in the midst of fast-moving technological change. Ask Icon about any aspect of the Regulation and how your organisation can both assist users with advanced tools and ensure compliance with the rules. Contact Icon.
Watch EU Commission President Ursula von der Leyen introduce the Regulation in a video here.