Who Will Win the War over Artificial Intelligence: Digital Tech or Global Powers?

August 30, 2023

A technological cold war is brewing, and there is no pause button; only time will tell where technology is heading.
The European Union (EU) will begin enforcing strict new rules on the world’s largest technology companies with vast user bases, such as Google, Facebook, and YouTube, requiring them to counter disinformation and remove illegal content quickly. Photo by Anadolu Images.

The United States is undoubtedly one of the world’s leading states in terms of its armed forces, economic prosperity, and cultural production. But experts predict that this will no longer be the case by 2050, as the global power structure undergoes major shifts.

The rise of China continues, with the country steadily becoming a new superpower; some even suggest that it will surpass the United States in global influence in the foreseeable future.

Another candidate for superpower status is India, which has just surpassed China to become the world’s most populous nation. India has combined its growing population with significant economic progress and has one of the world’s largest and youngest workforces.

The list of potential superpower candidates could go on, but there is another, less expected candidate that could change the balance as we know it: new technology companies.

Can big tech companies take over the global system?

Big tech companies have not yet “taken over” governments in the traditional sense. Nevertheless, they can wield significant influence over governments and societies thanks to their vast resources, data-collection capabilities, and technological reach.

Rewind to the early days of COVID-19, when everything suddenly became virtual and every sector had no alternative but to become dependent on technology. On the political front, technology took a leading role in implementing governance systems, and people went, in effect, from being citizens of nation-states to citizens of large technology companies.

This change is the result of technology companies exercising their authority over digital spaces. In terms of power, it is clear that tech giants dominate certain sectors, such as telecommunications, which has raised concerns within governments and led them to impose limits on the companies’ capabilities. However, regardless of what governments do, tech giants can have a significant impact on countries’ political and economic spheres.

Tech companies are not what we think

Companies sending humanitarian aid to Ukraine

Company      Assistance (in U.S. dollars)
Google       $25 million
Amazon       $75 million
Microsoft    $35 million
Snap Inc.    $15 million

During the Russian invasion, NATO countries sent military equipment to Ukraine, and technology companies helped Ukraine defend itself against Russian cyberattacks. Leading U.S. tech companies leveraged their resources, capabilities, and expertise to serve Ukraine and defend its sovereignty. Google, for example, blocked YouTube channels linked to Russian state media, including Russia Today (RT) and Sputnik, across Europe.

As seen in the table, U.S.-based tech companies have donated humanitarian aid to Ukraine. Apple promoted donations and fundraised on several platforms; Airbnb users booked stays in Ukraine with no intention of traveling, as a way to send money directly to hosts; Amazon added a donation button; and Etsy users bought digital stickers from Ukrainian shops.

Prominent technology companies, such as SpaceX, facilitated direct communication between Ukrainian military leaders, generals, and soldiers on the front lines. If it were not for the Starlink satellite internet service provided by Elon Musk’s SpaceX, Ukraine could have lost its internet connection within weeks of the conflict’s outbreak.

Ukraine has been using Chinese-made DJI commercial drones, modified to drop weapons, for reconnaissance operations. Following this, China restricted its drone exports to Ukraine.

European Union Artificial Intelligence restrictions: AI Act (AIA)

The use of artificial intelligence (AI) in the European Union will be regulated by the AI Act (AIA), the world’s first comprehensive AI law. The main objective of the AIA, legislation proposed by the European Commission, is to ensure the trustworthy and ethical use of AI while promoting innovation and competitiveness.

Members of the European Parliament want to make sure AI is safe, transparent, traceable, non-discriminatory, and environmentally friendly. The idea is to govern AI according to four levels of risk. Unacceptable risk covers uses considered unethical, such as biometric surveillance; high risk covers systems that could harm people’s health, the environment, or their fundamental rights; limited risk covers AI systems subject to specific transparency obligations; and minimal or no risk covers applications such as AI-enabled video games and spam filters.

Brussels aims to adopt the AIA by the end of 2023. The United Kingdom, meanwhile, is striving to position itself as a regulatory leader, showcasing its tech credentials by hosting an international AI summit in the fall of 2023.

If everything goes as planned, the regulation could become applicable in the second half of 2024. The rules are meant to ensure that AI developed and used in Europe is fully in line with EU rights and values, including human oversight, safety, privacy, transparency, non-discrimination, and social and environmental well-being.

The Chinese government and its regulation of generative AI

The Chinese government has also issued a set of 24 regulatory guidelines that seek to strike a balance between state control of technology and fostering an environment conducive to innovation in the sector. The regulation, published on July 10 and referred to as the “Generative AI Measures,” assigns oversight responsibility to seven agencies, including the Cyberspace Administration of China (CAC) and the National Development and Reform Commission.

Beijing will enforce visible labels on AI-generated content, such as images and videos, to prevent misleading manipulation. China also requires AI models to be trained on “legitimate data” and providers to disclose that data to regulators when required. The regulation is scheduled to come into effect in August 2023.

China has been actively developing its AI sector, with local tech giant Alibaba working on a rival to the popular chatbot ChatGPT.

U.S. position on artificial intelligence

The United States lags behind Europe and China when it comes to AI regulation. There remains a lot of disagreement in the United States on the best way to handle a technology that many American lawmakers are still trying to understand.

On May 16, 2023, OpenAI CEO Sam Altman, IBM’s Christina Montgomery, and NYU professor Gary Marcus testified before Congress and recommended that the government regulate AI.

Altman suggested that a company should be required to obtain government permission before releasing an AI model to the public. He also stated that such models should meet safety standards, for example, tests of whether a model can self-replicate “in the wild.” Altman stated, “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that … We want to work with the government to prevent this from happening.” He also pointed to the 2024 U.S. presidential election and the crucial role AI could play in it, concluding that regulations should be introduced as soon as possible.

As yet, the U.S. has no AI regulation, but the Biden administration has moved with urgency to address the risks posed by artificial intelligence and to protect the rights and safety of its citizens. Seven leading AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) met with President Biden at the White House and discussed how regulation is necessary not just for the U.S. but globally.

Future for governments and big tech companies

Global powers seem to have been caught off guard by fast-growing AI and by tech companies competing with them fiercely, and they are not ready to relinquish their thrones. However, governments and state institutions still have a say. AI is a recent development, and many governments do not yet know how to deal with it or whether to regulate it at all. Meanwhile, experts argue that it should be regulated and that failing to do so entails great risks.

In his TED Talk “The Next Global Superpower Isn’t Who You Think,” Ian Bremmer, an American political scientist who focuses on global political risk, explains how technology companies play a pivotal role in shaping our identities. He highlights the interplay between human characteristics, environmental influences, and the algorithms used by technology companies.

Bremmer uses the example of China and the U.S. to illustrate a scenario in which both nations increase their digital dominance, with technology companies closely aligned with their respective governments. Such a situation, he warns, could lead to a technological cold war. In the realm of explosive and disruptive technologies, there is no pause button—only time will tell where the technology is headed.

Şeymanur Melayim is currently a bachelor’s student at Sabahattin Zaim University, majoring in Political Science and International Relations. Her areas of interest are Middle East politics, Turkish foreign policy, and Southeast Asian politics.