The rollout of the EU AI Act will not be delayed, a European Commission spokesperson has confirmed. This comes after a group representing Apple, Google, and Meta, as well as several European companies, urged regulators to postpone its implementation by at least two years because of uncertainty about how to comply with the complex legislation.

Rushing the rollout could also jeopardise the EU’s projected €3.4 trillion economic boost from AI by 2030, according to a June 26 letter published by the Computer and Communications Industry Association (CCIA), which represents major technology firms in the US.

“Europe cannot lead on AI with one foot on the brake,” CCIA Europe’s Senior Vice President & Head of Office, Daniel Friedlaender, said in a statement. “With critical parts of the AI Act still missing just weeks before rules take effect, we need a pause to get the Act right, or risk stalling innovation altogether.”

Meta criticised European regulation of AI in a separate letter last year, alongside companies such as Spotify, SAP, Ericsson, and Klarna. They argued that “inconsistent regulatory decision making” created uncertainty about what data they could use to train their AI models, and warned that the bloc would miss out on the latest technologies as a result. Indeed, Apple, Google, and Meta have all recently delayed or cancelled rollouts of AI products in the EU.

Spotify, SAP, and 43 other companies say AI developers need guidance before they can comply

The EU AI Champions, a group of 45 European companies including SAP, Spotify, Mistral, Deutsche Bank, and Airbus, published their own letter on July 3, stating that the Code of Practice for General Purpose AI Models has not yet been released, despite being due on May 2. The document is intended to guide AI developers in complying with the Act and avoiding potential penalties.

Companies that fail to comply with the EU AI Act face fines ranging from €7.5 million ($8.1 million) or 1.5% of global turnover to €35 million ($38 million) or 7% of turnover, depending on the infringement and the size of the company.

“To address the uncertainty this situation is creating, we urge the Commission to propose a two-year ‘clock-stop’ on the AI Act before key obligations enter into force,” the EU AI Champions wrote.

European Commission says it has “legal deadlines” to meet

“I’ve seen, indeed, a lot of reporting, a lot of letters and a lot of things being said on the AI Act. Let me be as clear as possible, there is no stopping the clock. There is no grace period. There is no pause,” Commission spokesperson Thomas Regnier told a press conference, according to Reuters.

“We have legal deadlines established in a legal text. The provisions kicked in in February, general purpose AI model obligations will begin in August, and next year, we have the obligations for high risk models that will kick in in August 2026.”

Nevertheless, the Commission plans to propose steps to simplify its AI regulations by the end of the year, according to Reuters, such as reducing reporting obligations for small companies.

New requirements start in August

The next phase of the AI Act will take effect on August 2, introducing new rules for general-purpose AI systems that require transparency, technical documentation, and disclosure of any copyrighted material used in training. General-purpose AI systems classified as high-risk will face additional obligations, including model evaluations, adversarial testing, and incident reporting.

A spokesperson for the European Commission told Reuters that the European AI Board is discussing the timing for implementing the Code of Practice, but it may not be published until the end of this year. The spokesperson added that enforcement powers for the general-purpose AI rules will not begin until August 2026.

What is the EU AI Act, and when does it come into force?

The AI Act outlines EU-wide measures designed to ensure that AI is used safely and ethically. It establishes a risk-based approach to regulation that categorises AI systems based on their perceived level of risk to and impact on citizens.

The legislation was published in the European Union’s Official Journal on July 12, 2024, and entered into force on August 1, 2024. However, its provisions apply in phases:

  • February 2, 2025: Certain AI systems that pose unacceptable risk were banned, and staff at companies that either provide or use the technology must have “a sufficient level of AI literacy.”
  • August 2, 2025: General-purpose AI models placed on the market after this date must comply with specific requirements. Models posing systemic risks are subject to additional obligations, such as risk assessments and adversarial testing.
  • August 2026: Certain high-risk AI systems, such as those used in biometrics, critical infrastructure, and law enforcement, placed on the market after this date must comply with the AI Act’s requirements.
  • August 2027: General-purpose models placed on the market before August 2, 2025, must comply with the AI Act by this date. High-risk systems that are subject to existing EU health and safety legislation and were placed on the market after August 2, 2026, must also comply by this date.
  • December 2030: AI systems that are components of certain large-scale IT systems and placed on the market before August 2, 2027, must be brought into compliance by this date.

Tech giants opposing the AI Act may have ulterior motives

Tech giants may have more reason to oppose the AI Act than the risk that EU citizens will miss out on the latest and greatest tech. They stand to suffer financially if the rules prevent them from launching products in the EU, as the region represents a huge market of 448 million people. Working to comply with the rules will also cost them time and money.

Some consumer rights groups contend that public safety must come before corporate profits and any potential boost to Europe’s economy. “If certain companies cannot guarantee that their AI products respect the law, then consumers are not missing out; these are products that are simply not safe to be released on the EU market yet,” Sébastien Pant, deputy head of communications at the European consumer organisation BEUC, told Euronews in April.

“It is not for legislation to bend to new features rolled out by tech companies. It is instead for companies to make sure that new features, products or technologies comply with existing laws before they hit the EU market.”

Indeed, EU legislation hasn’t always excluded Europeans from AI products; instead, it has often compelled tech companies to adapt and deliver better, more privacy-conscious solutions for them. For example, X agreed to permanently stop processing personal data from EU users’ public posts to train its AI model Grok after it was taken to court by Ireland’s Data Protection Commission.

The EU is walking a tightrope: striving to stay competitive in global AI innovation while keeping powerful tech firms in check to protect its citizens. Learn how it’s investing €1.3 billion to boost AI adoption, while also cracking down on tools like AI notetakers in video calls.
