EU AI Act: Can Europe Lead the World in AI Regulation?

January 15, 2024

The contrasting perspectives of the EU and the US on AI and its regulation will likely lead to a subtle competition between these actors in establishing AI standards and regulations.
A mobile phone screen displays the ChatGPT logo in front of a screen showing Boston Dynamics' robot "Spot" in Ankara, Türkiye, on November 2, 2023. Photo by Anadolu Images.

Artificial Intelligence (AI) has seamlessly integrated into our daily lives, from personalized recommendations on our smartphones for music, movies, and books, to interactions with virtual assistants like Siri and Alexa, and even the AI-driven personalization of our social media feeds. These AI applications have become so ingrained in our routines that we often use them without a second thought about the algorithms at work. However, the narrative around AI shifted notably after November 2022 with the debut of OpenAI's ChatGPT.

This shift in discourse can be attributed to ChatGPT's role in "democratizing AI," i.e., making it accessible to everyone, including those without technical backgrounds. This broke the mold of AI as a tool solely for experts and engineers.

In fact, ChatGPT's impact mirrors that of Google Search, which transformed internet searching in the early 2000s. By making advanced AI technology universally accessible, ChatGPT has sparked a revolutionary shift, opening the realm of AI to a far broader audience.

Before these advancements, the discourse around AI primarily centered on its potential effects on economic growth and its strategic value in military contexts. Numerous experts and tech industry leaders, including Elon Musk, repeatedly raised concerns about the possible dangers of this burgeoning technology, particularly in relation to human rights. The introduction of ChatGPT intensified these discussions, spotlighting potential risks.

The use of AI across various domains has at times demonstrated its propensity for bias against certain groups. In March 2023, another incident highlighted the threats of AI when ChatGPT experienced its first significant data breach, exposing ChatGPT Plus subscribers' conversation titles and payment details to other users. Such events underscore the need for careful examination of AI's impact on privacy and ethical standards.

In this rapidly evolving technological landscape, more than 150 states and organizations have responded to these threats by announcing ethical principles, yet few states have managed to transform those principles into enforceable law. Against this backdrop, the EU is trying to take the lead on AI regulation, reaching political agreement on its AI Act in late 2023. While not the first AI regulation, the EU AI Act is arguably the most all-encompassing, positioning it as a potential global standard for AI governance. The journey began in 2021, and the act has undergone significant revisions to adapt to evolving technology, especially after the launch of ChatGPT.

What Is the EU AI Act?

The EU AI Act was first introduced by the European Commission on April 21, 2021, with the primary objective of overseeing the deployment of AI in Europe. It was originally crafted to address particular high-risk AI applications in critical sectors like healthcare and finance, such as medical equipment, loan approvals, and hiring decisions. Its scope then expanded in response to shifting perceptions. As mentioned earlier, the introduction of ChatGPT transformed how AI was accessed and perceived, and the European Parliament consequently introduced additional provisions to cover widely used AI systems with broad, general applications beyond the initial target areas. After lengthy negotiations, the three main bodies of the EU reached a provisional agreement on December 8, 2023, and the text of the AI Act was finalized.

First, the EU AI Act contains a definition of AI. This is significant not only because many states have failed to define the term, but, more importantly, because a precise definition makes the regulation easier to apply. Second, it is crucial to recognize that the EU AI Act embodies a comprehensive, risk-focused framework that places human rights at its core. From this vantage point, the regulation prohibits various AI applications, including: (i) biometric categorization systems processing sensitive traits such as political, religious, or philosophical beliefs, sexual orientation, or racial attributes; (ii) indiscriminate harvesting of facial images from the internet or CCTV footage for facial recognition databases; (iii) emotion recognition technologies in workplaces and educational settings; (iv) social scoring systems that assess individuals based on social conduct or personal traits; (v) AI tools designed to manipulate human behavior and undermine free will; and (vi) AI systems that exploit the vulnerabilities of specific groups, including those defined by age, disability, or socio-economic status.

Furthermore, the EU AI Act categorizes AI applications into four risk levels: (1) Minimal or No Risk, like video games or spam filters, subject only to market monitoring and incident reporting; (2) Limited Risk, requiring transparency, such as chatbots disclosing their nature to users; (3) High Risk, including remote biometric identification and AI used in critical sectors, subject to stringent compliance, bias-minimization, and security measures; and (4) Unacceptable Risk, where AI threatens safety and fundamental rights, such as social scoring by governments, and is banned outright.

At the same time, the regulation addresses General Purpose AI (GPAI), a focus shaped by the emergence of technologies like ChatGPT. Generative AI, the class of models behind ChatGPT, falls within this category. The EU AI Act introduces two levels of obligations for GPAI, overseen by a newly created AI Office: "Level One," which applies to all GPAI providers and includes maintaining technical documentation, adhering to copyright rules, and disclosing training data; and "Level Two," which applies to high-risk GPAI and entails additional measures such as model evaluation and incident reporting. Under this regulation, companies developing GPAI models, like OpenAI and Google, are partly responsible for how their AI systems are used, irrespective of their control over specific applications. Additionally, tech companies must disclose summaries of the copyrighted data used to train their AI models, potentially allowing content creators to seek compensation for the use of their material.

Effectiveness of EU AI Act

The effectiveness of the EU AI Act relies heavily on two critical factors. First, it must strike a delicate equilibrium between safeguarding the rights of citizens and tech users, on the one hand, and nurturing innovation and investment, on the other. Second, the act must contend with the rapidly evolving nature of AI and its attendant risks.

In terms of the first factor, the EU AI Act has drawn criticism from major U.S. tech firms, which allege that it lacks this balance and could harm innovation. For instance, OpenAI's Sam Altman stated the company's intention to comply but suggested it might withdraw from the EU if compliance proved impractical. Likewise, Google's initially delayed rollout of its Bard chatbot in the EU underscores the challenges tech companies face in aligning global tech progress with regional regulations. If more companies opt for such a cautious approach, the EU could be left at a disadvantage in the ongoing AI race, with repercussions for economic development and for the region's competitive position in the global AI landscape.

In terms of the second factor, the EU AI Act faces a significant challenge from the evolving nature of AI, whose potential applications and capabilities are still emerging, as the impact of ChatGPT during the regulation's drafting demonstrated. While the act is commended as the first extensive AI law, it adopts a horizontal regulatory approach, covering a wide range of AI applications under a single framework. By contrast, China employs a vertical regulatory strategy tailored to specific AI applications, a flexibility that allows it to adapt more swiftly to new technological developments. In short, rapid technological change may soon render the current EU AI Act outdated, requiring updates or new frameworks to keep pace with emerging technologies.

Will the EU Become the Global Leader in AI Regulation?

The European Union has garnered widespread acclaim for its leadership in data regulation, exemplified by the General Data Protection Regulation (GDPR), which came into force in 2018. The GDPR not only set a precedent for data regulation within the EU but also influenced global data governance, a phenomenon known as the "Brussels effect." However, whether a similar phenomenon will occur with AI regulation remains uncertain. The dynamics are different, and the EU's key ally, the United States, may pose challenges to the EU's pursuit of global leadership in AI regulation.

While the U.S. accommodated EU policy in the case of the GDPR, a similar alignment may not occur with the EU AI Act. Fundamentally, the EU and the U.S. take divergent approaches to AI regulation: the EU places a stronger emphasis on protecting the rights of technology users, while the U.S. prioritizes fostering innovation. A significant factor behind this divergence is the intense AI competition between the U.S. and China, the outcome of which will shape their status as global superpowers. Given AI's broad civil and military applications, the U.S. is reluctant to stifle innovation at this critical juncture. Consequently, the U.S. is likely to craft regulations that align more closely with its own approach, leaving major tech companies greater flexibility than the EU's stance allows.

The contrasting perspectives and expectations of the EU and the U.S. regarding AI and its regulation will likely result in a subtle competition between these two actors in establishing AI standards and regulations. However, the U.S. cannot afford to rely on the Brussels effect to shape AI development within its borders, as many actors are seeking to define the future of AI governance and thereby secure a first-mover advantage.

Gloria Shkurti Özdemir is a PhD candidate at Ankara Yıldırım Beyazıt University, writing her dissertation on the application of artificial intelligence in the military field. Her research interests include U.S. foreign policy, drone warfare, and artificial intelligence. She is currently a researcher in the Foreign Policy Directorate at the SETA Foundation and Assistant Editor of Insight Turkey, a journal published by the SETA Foundation.