
EU Artificial Intelligence Act: World’s First All-Encompassing Legislation On AI 

The EU Artificial Intelligence Act (“AI Act”) was approved by the European Parliament on March 13, 2024, by a large majority. The AI Act will establish the world’s first comprehensive regulatory framework for artificial intelligence.

The key points of this article are:

  • The European Parliament approved the EU Artificial Intelligence Act on March 13, 2024. It will be the world’s first comprehensive regulation of artificial intelligence (AI).
  • The AI Act is expected to have extraterritorial reach, meaning that foreign corporations may be impacted by it even if they are not headquartered in the European Union.
  • The AI Act governs several stages of the AI lifecycle and is expected to impose significant legal responsibilities on companies that use AI in relation to their workforce.
  • Penalties for failing to comply with the AI Act can reach the greater of EUR 35 million (USD 38 million) or 7% of the company’s worldwide annual turnover in the prior fiscal year, indicating the magnitude of the risks involved.

What Will the EU Artificial Intelligence Act Do?

Throughout the legislative process, the definition of artificial intelligence expanded to encompass both predictive AI and generative AI, i.e., AI that creates new outputs based on patterns in the data it has been trained on, such as ChatGPT.

The AI Act provides a comprehensive definition of “AI systems” as machine-based systems that can operate with different levels of autonomy. These systems may demonstrate adaptiveness after being deployed and have the ability to generate outputs, such as predictions, decisions, recommendations, or content that can influence physical or virtual environments. These outputs are generated based on inferences drawn from the input the AI system receives.

The legislators encountered a hurdle in formulating a precise and enduring definition of AI: the initial version of the AI Act, drafted in April 2021 (roughly 18 months before the launch of ChatGPT), failed to anticipate the rise of generative AI.

The AI Act governs many functions involved in the lifecycle of artificial intelligence. However, in the context of this article, the primary responsibilities lie with providers (those who develop AI or make it available for use) and deployers (organizations that have authority over the AI system, such as employers). The level of risk associated with the AI determines the extent of their responsibilities. 


A “Risk-Based” Approach 

The AI Act adopts a risk-based strategy for regulating AI systems, meaning that the level of compliance duties increases in proportion to the potential harm that an AI system poses to individuals. Under the AI Act, “risk” encompasses both the likelihood that harm will occur and the potential severity of that harm.

  • Unacceptable Risk 

The AI Act outlines a number of AI practices that are unlawful because of the “unacceptable risk” they create. The prohibition primarily targets AI systems that pose an unacceptable level of danger to the safety of individuals or that exhibit intrusive or discriminatory behaviour. Employers should note that the latest addition to this list of forbidden practices is the use of AI systems to infer the emotions of people in the workplace, unless the system is used for medical or safety purposes. Under the AI Act, employers will be prohibited from using AI in this way.

  • High Risk 

The AI Act establishes a classification for AI systems that are considered “high risk” and consequently will be subject to extensive regulatory supervision. This includes a comprehensive set of legal obligations for both providers and deployers, with the providers bearing the majority of the responsibilities. 

 

According to the AI Act, the default stance is that the following applications of AI systems in work environments will be classified as high risk: 

  1. For hiring or selecting (including posting targeted job ads, examining and screening applications, and assessing applicants);
  2. For making decisions that impact the conditions of employment contracts, such as contract promotions or termination, task distribution based on personal attributes or characteristics, or performance and behavior monitoring and evaluation in the workplace.

This will likely encompass the majority of AI applications that employers deploy with regard to their employees. Consequently, employers that use AI in their workplaces will be obligated to undertake supplementary safety measures.
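To illustrate how an employer might triage its AI inventory against these categories, here is a minimal Python sketch that maps workplace use cases onto the Act’s risk tiers. The use-case names and the mapping are illustrative assumptions based on the examples discussed in this article, not an official taxonomy.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited"           # e.g. workplace emotion inference
        HIGH = "high risk"                    # e.g. recruitment, promotion, termination
        LIMITED = "transparency duties only"  # e.g. customer-service chatbots

    # Illustrative mapping based on the examples in this article;
    # a real audit would apply the Act's full criteria.
    WORKPLACE_USE_CASES = {
        "emotion_recognition_of_employees":   RiskTier.UNACCEPTABLE,
        "cv_screening_and_targeted_job_ads":  RiskTier.HIGH,
        "promotion_and_termination_decisions": RiskTier.HIGH,
        "task_allocation_by_personal_traits": RiskTier.HIGH,
        "performance_and_behaviour_monitoring": RiskTier.HIGH,
        "customer_service_chatbot":           RiskTier.LIMITED,
    }

    def classify(use_case: str) -> RiskTier:
        """Return the assumed risk tier for a named workplace use case."""
        # Defaulting unknown cases to HIGH errs on the side of caution.
        return WORKPLACE_USE_CASES.get(use_case, RiskTier.HIGH)

    for case, tier in WORKPLACE_USE_CASES.items():
        print(f"{case}: {tier.value}")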

Compliance Duties On “Providers” Of AI  

The primary responsibility of providers of high-risk AI is to ensure that their systems are designed to comply with the AI Act from the outset and throughout their use.

A “risk management system” must be established, implemented, documented, and maintained as a continuous, iterative process that is planned and run throughout the entire lifecycle of a high-risk AI system. Conducting a risk assessment only once will not meet this requirement.

  • Where high-risk AI systems involve training AI models, the training, validation, and testing datasets must meet the quality criteria specified in the AI Act: they must be relevant, sufficiently representative, and as free of errors as feasible.
  • The systems must be designed and developed so that their operation is transparent enough for deployers to interpret the system’s output and use it appropriately.
  • The AI Act mandates a documented quality management system, including written policies, procedures, and instructions, to ensure compliance.
  • Technical documentation demonstrating compliance with the Act must be created and kept up to date.
  • The systems must be designed and developed so that deployers can implement human oversight proportionate to the risks, degree of autonomy, and context of use of a high-risk AI system.
  • High-risk AI systems must be capable of automatically recording logs of events, and providers must keep copies of these logs (a minimal sketch of such logging follows this list).
  • The systems must be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity.
  • The high-risk AI system must undergo the relevant conformity assessment procedure before it is placed on the market or put into service.
  • An EU declaration of conformity must be drawn up.
  • The registration obligations stipulated in the AI Act must be complied with.
  • Records demonstrating compliance with the Act must be retained.
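To make the logging duty above concrete, here is a minimal sketch of how a provider might automatically record inference events. The event fields, the predict stub, and the log location are illustrative assumptions; the AI Act specifies what logging must achieve, not how to implement it.

    import json
    import time
    from pathlib import Path

    LOG_FILE = Path("ai_system_events.jsonl")  # hypothetical log location

    def predict(features: dict) -> str:
        """Stand-in for a real high-risk AI system's inference call."""
        return "shortlist" if features.get("score", 0) > 0.5 else "reject"

    def logged_predict(features: dict) -> str:
        """Run inference and automatically append an audit record."""
        output = predict(features)
        event = {
            "timestamp": time.time(),   # when the decision was made
            "input": features,          # what the system was given
            "output": output,           # what it produced
            "system_version": "1.0.0",  # illustrative version tag
        }
        with LOG_FILE.open("a") as f:
            f.write(json.dumps(event) + "\n")
        return output

    print(logged_predict({"score": 0.72}))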

Compliance Duties For The “Deployers” Of AI  

The majority of the compliance burden will fall on providers. Deployers will nevertheless be subject to a number of substantial requirements, many of which mirror the duties placed on providers (described above):

  • Ensuring the AI system is used in accordance with its instructions for use.
  • Assigning human oversight to individuals who have the necessary competence, training, authority, and support.
  • Ensuring, to the extent they have control over it, that input data is appropriate and relevant for the intended purpose of the high-risk AI system.
  • Monitoring the AI system in accordance with its instructions for use and, where necessary, informing the provider of concerns.
  • Retaining copies of automatically generated logs for a period appropriate to the specific high-risk AI system, and for at least six months (see the sketch after this list).
  • Informing workers’ representatives, affected workers, and other affected individuals (where applicable) that they will be subject to a high-risk AI system before it is put into use.
  • Where applicable, using the information provided by the provider to carry out data protection impact assessments.
  • Where relevant, carrying out a fundamental rights impact assessment.
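The log-retention duty above is essentially date arithmetic: a log may only be discarded once both the six-month floor and any longer system-appropriate period have elapsed. A minimal sketch, assuming a hypothetical twelve-month system-specific retention period:

    from datetime import date, timedelta

    # The Act's six-month floor, approximated in days.
    MINIMUM_RETENTION = timedelta(days=183)

    def earliest_deletion_date(log_created: date,
                               system_specific: timedelta) -> date:
        """Logs must be kept for the longer of six months or the period
        appropriate to the specific high-risk AI system."""
        return log_created + max(MINIMUM_RETENTION, system_specific)

    # Example: a hypothetical recruitment system whose documented
    # retention period is twelve months.
    print(earliest_deletion_date(date(2025, 1, 1), timedelta(days=365)))
    # -> 2026-01-01 (the longer, system-specific period governs)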

The AI Act also establishes a new right for individuals who have been subject to a decision based on a high-risk AI system that has legal effects or materially affects their fundamental rights, such as decisions about performance management or termination: they may request from the deployer a “clear and meaningful” explanation of the AI system’s role in the decision-making process. How this right will work in practice remains to be seen, but it is conceivable that disgruntled workers could use it to compel their employers to explain how complex algorithms function.

AI systems posing lower risk, such as chatbots used for customer service, are subject to fewer and less demanding transparency requirements. These lighter obligations are unlikely to be relevant to most employers.


Scope Of The EU AI Act

The AI Act will have extraterritorial reach, similar to the General Data Protection Regulation (“GDPR”), and foreign businesses may still be impacted by it even if they are not headquartered in the EU.

In addition to EU-based companies, the AI Act also applies to the following:  

  1. Providers who make AI systems or generative AI models available in the EU, regardless of their location.
  2. Providers and users of AI systems based outside the EU, if the output generated by the AI system is used within the EU.
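Expressed as a decision rule, the Act’s territorial scope might be sketched as follows. The boolean inputs are deliberate simplifications of legal tests that turn on detailed definitions in the Act, so treat this as a rough mental model rather than legal analysis.

    def ai_act_applies(established_in_eu: bool,
                       places_system_on_eu_market: bool,
                       output_used_in_eu: bool) -> bool:
        """Simplified sketch of the Act's territorial scope: it reaches
        EU companies, anyone supplying AI into the EU, and non-EU actors
        whose system output is used in the EU."""
        return (established_in_eu
                or places_system_on_eu_market
                or output_used_in_eu)

    # A non-EU provider selling nothing in the EU, but whose system's
    # output is used by an EU employer, is still in scope:
    print(ai_act_applies(False, False, True))  # True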

Consequences For Not Complying  

Fines for failing to comply with the AI Act can reach the greater of EUR 35 million (USD 38 million) or 7% of the company’s worldwide annual turnover in the prior fiscal year. This is nearly double the maximum fine under the General Data Protection Regulation (GDPR), which was itself considered remarkably high when it was introduced six years ago.
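Because the headline fine is the greater of a fixed amount and a revenue share, exposure scales with company size. A minimal worked example (the turnover figures are invented):

    def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
        """Greater of EUR 35 million or 7% of worldwide annual turnover."""
        return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

    # For a company with EUR 2 billion turnover, the 7% branch dominates:
    print(f"{max_ai_act_fine(2_000_000_000):,.0f}")  # 140,000,000

    # For a EUR 100 million company, the fixed floor applies:
    print(f"{max_ai_act_fine(100_000_000):,.0f}")    # 35,000,000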

What Is Next? 

The final steps of the legislative process for the AI Act are a final proofread and formal endorsement by the Council of the European Union, which should occur before the EU elections at the beginning of June. The AI Act will then be published in the Official Journal of the EU and enter into force twenty days later.

Most provisions will apply two years after entry into force, while the ban on AI systems posing an unacceptable risk will apply after six months, and the obligations for certain high-risk systems will apply after 36 months.
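Since all of these deadlines count forward from entry into force, they are straightforward to compute once the Official Journal publication date is known. A sketch using a placeholder publication date (the real date will only be fixed on publication):

    from datetime import date, timedelta

    # Placeholder: the actual date is fixed by Official Journal publication.
    publication = date(2024, 7, 12)
    entry_into_force = publication + timedelta(days=20)

    milestones = {
        "prohibitions on unacceptable-risk AI": 6,
        "most provisions": 24,
        "certain high-risk obligations": 36,
    }

    for name, months in milestones.items():
        # Rough month arithmetic: shift the month/year, keep the day.
        total = entry_into_force.month - 1 + months
        due = entry_into_force.replace(year=entry_into_force.year + total // 12,
                                       month=total % 12 + 1)
        print(f"{name}: {due}")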

As for what to do next, companies should thoroughly audit how AI is used across their business and consider how each use might be classified under the AI Act in order to determine which compliance requirements apply to them. The first priority will be to ensure that any AI system presenting an “unacceptable” level of risk is removed before the ban takes effect. Employers should also anticipate the future compliance obligations imposed by the AI Act when implementing new AI applications; in particular, the new legislation emphasizes the importance of designing AI systems that adhere to the AI Act and facilitate ongoing compliance.

Click this link to view the Act’s full text as approved by the European Union.  

 

 
