On March 13, the European Parliament voted the Artificial Intelligence Act into law, setting strict rules on the use of AI for facial recognition, creating safeguards for general-purpose AI systems and protecting consumers’ rights to submit complaints and request meaningful explanations about decisions made by high-risk AI systems that affect their rights. The legislation outlines EU-wide measures designed to ensure that AI is used safely and ethically, including new transparency requirements for developers of foundation AI models such as ChatGPT.

Members of Parliament voted 523 in favour of adopting the AI Act, with 46 against and 49 abstentions. The vote comes after member states agreed on the regulations in negotiations in December 2023.

Next, the act will pass through a final “lawyer-linguist” check and be formally endorsed, after which it will be published and enter into force (meaning it takes effect). The AI Act will go into effect 24 months after publication – expected in May or June – with some exceptions for high-priority cases:

  • Bans on prohibited practices will apply six months after the entry into force date (approximately December 2024).
  • Codes of practice will go into effect nine months after entry into force (approximately March 2025).
  • General-purpose AI rules, including governance, will go into effect 12 months after entry into force (approximately June 2025).
  • Obligations for high-risk systems will go into effect 36 months after entry into force (approximately June 2027).

What is the AI Act?

The AI Act is EU-wide legislation that seeks to place safeguards on the use of artificial intelligence in Europe, while ensuring that European businesses can benefit from the rapidly evolving technology.

The legislation establishes a risk-based approach to regulation that categorizes artificial intelligence systems based on their perceived level of risk to and impact on citizens.

The following use cases are banned under the AI Act:

  • Biometric categorisation systems that use sensitive characteristics (e.g., political, religious, philosophical beliefs, sexual orientation, race).
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
  • Emotion recognition in the workplace and educational institutions.
  • Social scoring based on social behaviour or personal characteristics.
  • AI systems that manipulate human behaviour to circumvent people’s free will.
  • AI used to exploit the vulnerabilities of people due to their age, disability, social or economic situation.

The AI Act’s rules won’t begin to apply until late 2024 at the earliest, leaving a regulatory vacuum in which companies can develop and deploy AI unfettered and without any risk of penalties. Until then, companies are expected to abide by the legislation voluntarily, essentially leaving them free to self-govern.

What do AI developers need to know?

Developers of AI systems deemed to be high risk will have to meet certain obligations set by European lawmakers, including mandatory assessment of how their AI systems might impact the fundamental rights of citizens. This applies to the insurance and banking sectors, as well as any AI systems with “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.”

AI models that are considered high-impact and pose a systemic risk – meaning they could cause widespread problems if things go wrong – must follow more stringent rules. Developers of these systems will be required to perform evaluations of their models, as well as “assess and mitigate systemic risks, conduct adversarial testing, report to the (European) Commission on serious incidents, ensure cybersecurity and report on their energy efficiency.” Additionally, European citizens will have a right to launch complaints and receive explanations about decisions made by high-risk AI systems that impact their rights.

To support European startups in creating their own AI models, the AI Act also promotes regulatory sandboxes and real-world testing. These will be set up by national authorities to allow companies to develop and train their AI technologies before they’re introduced to the market “without undue pressure from industry giants controlling the value chain.”

“There is a lot to do and little time to do it,” said Forrester Principal Analyst Enza Iannopollo in an emailed statement. “Organizations must assemble their ‘AI compliance team’ to get started. Meeting the requirements effectively will require strong collaboration among teams, from IT and data science to legal and risk management, and close support from the C-suite.”

What about ChatGPT and generative AI models?

Providers of general-purpose AI systems must meet certain transparency requirements under the AI Act; this includes creating technical documentation, complying with European copyright laws and providing detailed information about the data used to train AI foundation models. The rule applies to models used for generative AI systems like OpenAI’s ChatGPT.

SEE: Microsoft is investing £2.5 billion in artificial intelligence technology and training in the UK. (TechRepublic)

What are the penalties for breaching the AI Act?

Companies that fail to comply with the legislation face fines ranging from €7.5 million ($8.1 million) or 1.5% of global turnover up to €35 million ($38 million) or 7% of global turnover, depending on the infringement and the size of the company.

How significant is the AI Act?

Symbolically, the AI Act represents a pivotal moment for the AI industry. Despite its explosive growth in recent years, AI technology remains largely unregulated, leaving policymakers struggling to keep up with the pace of innovation.

The EU hopes that its AI rulebook will set a precedent for other countries to follow. Posting on X (formerly Twitter), European Commissioner Thierry Breton labelled the AI Act “a launchpad for EU startups and researchers to lead the global AI race,” while Dragos Tudorache, MEP and member of the Renew Europe Group, said the legislation would strengthen Europe’s ability to “innovate and lead in the field of AI” while protecting citizens.

What have been some challenges associated with the AI Act?

The AI Act has been beset by delays that have eroded the EU’s position as a frontrunner in establishing comprehensive AI regulations. Most notable has been the arrival and subsequent meteoric rise of ChatGPT in late 2022, which had not been factored into plans when the EU first set out its intention to regulate AI in Europe in April 2021.

As reported by Euractiv, this threw negotiations into disarray, with some countries expressing reluctance to include rules for foundation models on the basis that doing so could stymie innovation in Europe’s startup scene. In the meantime, the U.S., U.K. and G7 countries have all taken strides towards publishing AI guidelines.

SEE: UK AI Safety Summit: Global Powers Make ‘Landmark’ Pledge to AI Safety (TechRepublic)

Responses from tech organizations

“I commend the EU for its leadership in passing comprehensive, smart AI legislation,” said Christina Montgomery, IBM vice president and chief privacy and trust officer, in a statement made by email. “The risk-based approach aligns with IBM’s commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems.”

Organizations like IBM have been preparing products that could help organizations comply with the AI Act, such as IBM’s watsonx.governance.

At a press briefing on Wednesday, March 6, Montgomery said companies need to “get serious” about AI governance.

“There will be an implementation period, but making sure you’re regulation-ready and being able to shift in a changing climate is key,” she said.

IBM has been the first client for its own AI governance tools, Montgomery said, preparing for regulations by fine-tuning those tools, creating a clear set of principles around AI trust and transparency and creating an AI ethics board.

Jean-Marc Leclerc, director and head of EU policy at IBM, said the AI Act will have influence across the globe, similar to GDPR. Leclerc framed the AI Act as positive for openness and competition between companies in the EU.

Salesforce EVP of government affairs Eric Loeb wrote, “We believe that by creating risk-based frameworks such as the EU AI Act, pushing for commitments to ethical and trustworthy AI, and convening multi-stakeholder groups, regulators can make a substantial positive impact. Salesforce applauds EU institutions for taking leadership in this domain.”

What are critics saying about the AI Act?

Some privacy and human rights groups have argued that these AI regulations don’t go far enough, accusing the EU lawmakers of delivering a watered-down version of what they originally promised.

Privacy rights group European Digital Rights labelled the AI Act a “high-level compromise” on “one of the most controversial digital legislations in EU history,” and suggested that gaps in the legislation threatened to undermine the rights of citizens.

The group was particularly critical of the Act’s limited ban on facial recognition and predictive policing, arguing that broad loopholes, unclear definitions and exemptions for certain authorities left AI systems open to potential misuse in surveillance and law enforcement.

In March, European Digital Rights highlighted that the AI Act has “a parallel legal framework for the use of AI by law enforcement, migration and national security authorities,” suggesting this could be used to deploy disproportionate surveillance technology against migrants.

Ella Jakubowska, senior policy advisor at European Digital Rights, said in a statement in December 2023:
“It’s hard to be excited about a law which has, for the first time in the EU, taken steps to legalise live public facial recognition across the bloc. Whilst the Parliament fought hard to limit the damage, the overall package on biometric surveillance and profiling is at best lukewarm. Our fight against biometric mass surveillance is set to continue.”

Amnesty International was also critical of the limited ban on AI facial recognition, saying it set “a devastating global precedent.”

Mher Hakobyan, advocacy advisor on artificial intelligence at Amnesty International, said in a statement in December 2023: “The three European institutions – Commission, Council and the Parliament – in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally concerning artificial intelligence (AI) regulation.

“Not ensuring a full ban on facial recognition is therefore a hugely missed opportunity to stop and prevent colossal damage to human rights, civic space and rule of law that are already under threat throughout the EU.”

A draft of the act leaked in January 2024, giving businesses an early look at the requirements they will need to meet. Some political leaders worry the act will hamper innovation and economic growth; French President Emmanuel Macron, for example, voiced such concerns to the Financial Times in December 2023.

What’s next with the AI Act?

Following the Parliament’s vote, the AI Act is pending a final lawyer-linguist check and formal endorsement by the Council before it can be published and enter into force as European Union legislation.

Subscribe to the TechRepublic UK Newsletter

Catch up on the week’s essential technology news, must-read posts, and discussions that would be of interest to IT pros working in the UK and Europe. Delivered Wednesdays