

Understanding the European Union’s New Artificial Intelligence (AI) Regulations and the Key Takeaways for U.S. Businesses

By Alliant Specialty


In the world of privacy rights, the actions of the European Union (EU) are often instructive for determining where U.S. policy is headed. We have seen this with the General Data Protection Regulation (GDPR), which has served as something of a model for California and other state regulators. As the development of artificial intelligence advances, the European Parliament recently reached an agreement with the European Council presidency on new rules for the use of AI. The primary new elements of the provisional agreement include:

  • Rules on high-impact AI models that are intended for general use but may cause systemic risk;

  • Revisions to the system of governance for AI, with some enforcement powers at the EU level;

  • Prohibitions on certain uses of AI with some exceptions for law enforcement; and perhaps most notably,

  • A requirement that deployers of certain AI systems deemed to be high risk conduct a “fundamental rights impact assessment” prior to rolling out the system. This assessment can be compared to an environmental impact statement for AI, but with a focus on human rights.

In the EU, an elaborate governance system will be set up with three units: an AI Office to establish standards and testing practices, a board composed of member states responsible for implementing the regulations and an advisory forum for various stakeholders. In addition, certain uses of AI are considered too risky and will be banned in the EU altogether. These include:

  • Cognitive behavioral manipulation;

  • Untargeted scraping of facial images from the internet or CCTV;

  • Emotion recognition in the workplace and educational institutions;

  • Social scoring;

  • Biometric categorization to infer sensitive data, such as sexual orientation or religious beliefs; and

  • Some cases of predictive policing for individuals.

Penalties for Noncompliance


Penalties for noncompliance will be calculated as a percentage of a company’s global annual revenue or a predetermined amount, whichever is higher. Under the AI Act, fines could run as high as 35 million euros or 7% of annual turnover for use of banned AI applications; 15 million euros or 3% for violations of the AI Act’s obligations; and 7.5 million euros or 1.5% for the supply of incorrect information. However, there are plans to provide more proportionate caps on administrative fines for small and midsized businesses and start-ups that infringe the AI Act’s provisions.
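
As an illustration of the “whichever is higher” calculation described above, the sketch below computes the upper bound of a fine from the tier amounts cited in this section. It is a hypothetical example for clarity only, not legal or compliance guidance, and the function and tier names are our own labels rather than terms from the AI Act.

```python
# Hypothetical sketch of the "whichever is higher" fine cap described above.
# Each tier pairs a fixed cap in euros with a percentage of global annual turnover.
FINE_TIERS = {
    "banned_ai_application": (35_000_000, 0.07),   # banned AI applications
    "ai_act_obligation": (15_000_000, 0.03),       # violations of the AI Act's obligations
    "incorrect_information": (7_500_000, 0.015),   # supply of incorrect information
}

def maximum_fine(violation_type: str, global_annual_turnover_eur: float) -> float:
    """Return the higher of the fixed cap and the turnover-based cap for a violation tier."""
    fixed_cap, pct = FINE_TIERS[violation_type]
    return max(fixed_cap, pct * global_annual_turnover_eur)

# Example: a company with EUR 1 billion in global annual turnover violating the ban
# on prohibited AI applications faces a cap of 70 million euros (7% exceeds the 35M floor).
print(maximum_fine("banned_ai_application", 1_000_000_000))
```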

Private-party litigation is not as prevalent in Europe as in the United States, yet the compromise agreement makes clear that individuals and other parties will be able to file a complaint with their member state’s authorities and can expect the matter to be handled by those authorities. This is similar to an employee in the U.S. filing a charge of discrimination with the Equal Employment Opportunity Commission (EEOC), which then investigates the charge; the key difference is that the private party in the U.S. could still proceed in court.

Takeaways for U.S. Businesses


Assuming the European Parliament ratifies this agreement, organizations that conduct business in Europe will have two years from the Act’s “entry into force” before the new regulations take effect. Organizations that do not conduct business in Europe should still pay close attention: California has already proposed regulations around AI, and the framework drafted by the National Institute of Standards and Technology (NIST) provides helpful guidance on the governance, mapping, measurement and management of AI. While this is only a draft, it can help companies understand the risks associated with the use of AI and what they should be doing to develop internal policies that reduce those risks.

How Can Alliant Help?


Alliant Cyber stays at the forefront of regulatory developments. Our risk consulting team is prepared to work with businesses to assess these risks and develop appropriate policies, while our brokerage team can help businesses anticipate the insurance coverages needed should losses occur. Beyond cyber, the implications of AI use can extend to media liability, employment practices liability, professional liability and even property and casualty; businesses should therefore review all relevant policies in light of their planned use of AI models.

 

Alliant note and disclaimer: This document is designed to provide general information and guidance. Please note that prior to implementation your legal counsel should review all details or policy information. Alliant Insurance Services does not provide legal advice or legal opinions. If a legal opinion is needed, please seek the services of your own legal advisor or ask Alliant Insurance Services for a referral. This document is provided on an “as is” basis without any warranty of any kind. Alliant Insurance Services disclaims any liability for any loss or damage from reliance on this document.