The European Union marked a normative milestone on August 1, 2024, when the Artificial Intelligence Act (AI Act), widely regarded as the most comprehensive legal framework governing artificial intelligence (AI) to date, entered into force.
The AI Act establishes harmonised rules for the development, deployment, and use of AI across the EU. While formally in force, the regulation follows a phased implementation schedule, with most substantive provisions set to apply from August 2, 2026, onwards.
When adopted, the Act was celebrated across the EU as a regulatory achievement that could benefit the global AI ecosystem while keeping technological development and fundamental rights in balance. Since its approval, however, it has remained under scrutiny.
“Critics of the EU AI Act highlight not just its substantive requirements but the practical burden of compliance paperwork and execution,” Asparuh Koev, CEO of AI logistics platform Transmetrics, has noted in the context of other EU regulatory regimes.
“The problem is not the lack of rules, but the gap between intention and execution,” added the executive, referring to a dynamic that can turn well-meaning laws into operational chokepoints for organisations lacking sufficient capacity.
Balancing innovation and fundamental rights
Spanning more than 100 pages of legal text, the Act “aimed at striking a balance between enhancing innovation while securing fundamental rights,” according to Christophe Geiger and Vincenzo Iaia of Luiss University. Yet some critics argue that its complexity may hinder innovation and slow AI development in Europe.
The American-German Institute, for one, notes: “For startups, SMEs, and even larger companies, the bureaucratic burden of complying with the AI Act, along with the GDPR, EU copyright, and product liability laws, may be overwhelming.”
The AI Act forms part of a broader package of policy measures designed to promote trustworthy AI while strengthening innovation and investment across the EU. This includes initiatives such as the AI Continent Action Plan, the AI Innovation Package, and the launch of AI Factories.
Central to the Act is its risk-based approach, which categorises AI systems according to their potential impact on fundamental rights and safety. It establishes four risk levels: unacceptable, high, limited, and minimal.
As the third phase of implementation, set out in Article 113 of the Act, approaches in August 2026, it becomes crucial to assess whether the regulation is fulfilling its objectives or instead constraining AI development.
“Regulations must coincide with sufficient financial and infrastructural support,” argued Koev, suggesting that regulators enforcing change need balanced approaches, phased adoption plans, and realistic compliance timelines to avoid jeopardising industry viability.
Emerging concerns around regulatory design and impact
Regulation serves several purposes, particularly in adapting existing legal frameworks to emerging technologies. In the context of AI, this includes protecting users and their data, addressing bias and discrimination, and mitigating misuse.
Many AI-related risks are human-driven. The debate, therefore, is not whether AI should be regulated, but how far regulation should go.
Effective regulation at the EU level faces significant obstacles. According to a Jacques Delors Institute policy paper, “strong lobbying pressure exercised by large corporations, especially technology firms forming part of the so-called GAFAM (Google, Amazon, Facebook, Apple and Microsoft), has rendered it more difficult to enact sufficiently robust regulation.”
Scholars further noted that over 150 CEOs and executives from companies like Renault, Heineken, Airbus, and Siemens signed an open letter warning EU institutions about the costs of compliance. They argued that the AI Act risks making European economies uncompetitive, particularly when compared to the US and China, where AI regulations are far less stringent.
Since implementation began, concerns have also been raised that regulation may create barriers to market entry, disproportionately affecting smaller companies through higher costs and regulatory burdens. The Act also establishes significant penalties for non-compliance, including fines of up to €35 million or 7% of global annual turnover, depending on the severity of the infringement.
From August 2, 2026, high-risk AI systems (those used in areas such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and judicial and democratic processes) must fully comply with the regulation.
This means that before entering the EU market, high-risk systems must meet strict requirements, including pre-deployment risk assessments, mitigation measures, high-quality datasets to minimise discriminatory outcomes, and detailed logging for traceability. These obligations increase compliance costs, delay product launches, and impose requirements that some developers consider impractical. Article 10(3) of the Act, for example, mandates that datasets be “relevant, sufficiently representative, and to the best extent possible, free of errors and complete,” while also possessing “appropriate statistical properties.”
As noted by Bloomberg Law, this requirement is vague and may restrict access to data necessary for AI development, limiting innovation and scalability.
Moreover, the Act’s overarching objective of eliminating bias is widely considered technologically unrealistic, as even the most advanced AI developers have yet to fully resolve this challenge. Traceability requirements also raise concerns by increasing data storage costs and privacy risks, particularly for resource-constrained companies, while the opacity of large language models makes full traceability technically complex and costly.
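To give a sense of what such obligations can mean in practice, the sketch below shows, purely as an illustration, a hypothetical pre-deployment data-quality audit with logged results, written in Python. Every gate, threshold, and field name here is an assumption made for the example; the AI Act prescribes outcomes, not specific metrics or code-level checks.

    # Illustrative sketch only: the thresholds, field names, and gates below
    # are assumptions for demonstration, not requirements from the AI Act.
    import logging
    from collections import Counter

    logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s",
                        level=logging.INFO)
    log = logging.getLogger("dataset_audit")

    def audit_dataset(records, label_key, reference_shares,
                      max_missing_rate=0.01, max_share_deviation=0.10):
        """Run hypothetical data-quality gates and log results for traceability."""
        n = len(records)
        # Completeness gate: share of records with any missing (None) field.
        missing_rate = sum(any(v is None for v in r.values())
                           for r in records) / n
        log.info("missing-value rate: %.3f (limit %.3f)",
                 missing_rate, max_missing_rate)
        # Representativeness gate: observed label shares vs. a reference
        # distribution (e.g. statistics for the intended user population).
        counts = Counter(r[label_key] for r in records)
        worst = max(abs(counts.get(label, 0) / n - share)
                    for label, share in reference_shares.items())
        log.info("worst label-share deviation: %.3f (limit %.3f)",
                 worst, max_share_deviation)
        passed = (missing_rate <= max_missing_rate
                  and worst <= max_share_deviation)
        log.info("audit result: %s", "PASS" if passed else "FAIL")
        return passed

    # Toy usage: four records, one with a missing feature value.
    data = [
        {"feature": 1.0, "group": "A"},
        {"feature": 2.0, "group": "B"},
        {"feature": 3.0, "group": "A"},
        {"feature": None, "group": "B"},
    ]
    audit_dataset(data, "group", {"A": 0.5, "B": 0.5})

Even a toy gate like this hints at the operational overhead critics describe: production systems would need comparable checks across many datasets, with logs retained and auditable over the system’s lifetime.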
Taken together, these regulatory burdens risk placing EU-based AI companies at a competitive disadvantage compared to markets with fewer restrictions. This disadvantage is further amplified by the Act’s broad definition of “AI systems,” which encompasses a wide range of software, including traditional algorithms and long-established statistical models.
Another line of criticism, highlighted by Geiger, concerns the AI Act’s approach to copyright and data use. He notes that the 2019 EU Copyright Directive introduced an exception permitting commercial text and data mining, while allowing rightsholders, under certain conditions, to opt out.
These rules were subsequently extended to artificial intelligence under the 2024 AI Act. According to the law professor and Director of the Innovation Law and Ethics Observatory, this opt-out mechanism generates legal uncertainty in Europe, which may hinder AI innovation while remaining unlikely to ensure fair remuneration for creators.
As an alternative, his research advocates for the introduction of a statutory remuneration right, designed to promote a more innovation-friendly AI environment while preserving the legitimate interests of authors.
A rapidly evolving AI landscape
The AI Act’s full enforcement is unfolding within a global context of accelerating AI adoption. According to a February 2025 report by Cisco Systems, 97% of CEOs plan to integrate AI into their organisations.
And while the United States leads AI innovation with the highest number of AI models and $77.5 billion in investment, China leads in AI clusters. Even so, the U.S. controls 50% of global AI compute power, according to TRG Datacenters.
In 2024, U.S.-based institutions produced 40 notable AI models, compared to China’s 15 and Europe’s three. Although performance gaps between the American and Chinese models have narrowed significantly, Europe continues to lag behind in investment and output.
U.S. private AI investment reached $109.1 billion in 2024, nearly twelve times China’s and twenty-four times the UK’s, with the gap in generative AI investment widening further.
Ensuring that the EU remains competitive while regulating AI responsibly therefore presents a major challenge. Philip Meissner, Chair of Strategic Management and Decision Making at ESCP Berlin, argued that a unified regulatory framework between the EU and the U.S. may be necessary to prevent regulatory arbitrage.
Overly restrictive regulation and severe penalties may disincentivise companies from operating in Europe, potentially slowing innovation or driving AI development elsewhere. Startups, in particular, may choose to establish themselves in jurisdictions such as the U.S., where regulatory approaches to AI are comparatively more permissive.
Ongoing implementation
Despite these concerns, the EU has undertaken significant efforts to promote AI innovation and adoption. At the 2025 AI Summit in Paris, European Commission President Ursula von der Leyen announced InvestAI, an initiative aimed at mobilising €200 billion in AI investment, including a €20 billion European AI fund.
The EU’s approach thus seeks to balance safety and trustworthiness with competitiveness and technological sovereignty.
To strengthen its global position, the Commission adopted the AI Continent Action Plan in April 2025, followed by the Apply AI Strategy in October 2025, which supports AI adoption across strategic sectors under an “AI First” policy while addressing associated risks.
These initiatives work together to align strategic development with practical deployment, supported by investments in AI Factories, supercomputing infrastructure, and research funding.
Through Horizon Europe and Digital Europe, the EU will invest €1 billion annually in AI, alongside private and Member State contributions targeting €20 billion per year over the digital decade. In addition, the Recovery and Resilience Facility allocated €134 billion for digital transformation, positioning AI as a central pillar of Europe’s technological future.
The EU AI Act represents the world’s first comprehensive attempt to regulate artificial intelligence. Ultimately, the question is not whether it safeguards the future or slows it down, but whether it can protect fundamental rights and foster innovation at once.
Given its phased implementation, it is still too early to fully assess the Act’s concrete effects on innovation, competitiveness, and AI development in Europe. Time is needed to determine whether the regulatory burdens identified by critics materialise in practice, and whether parallel initiatives aimed at boosting investment, adoption, and technological capacity can effectively counterbalance those constraints.
The success of the EU’s approach will therefore depend not only on the strength of its regulatory framework, but on its ability to remain adaptive in a rapidly evolving technological landscape.
Featured image: The EU distinguished between four risk groups in its AI Act.
Source: Heute.at
License: Creative Commons License

Disclosure: This article mentions clients of an Espacio portfolio company.