Is ChatGPT a Sign of Hope or Doom for the Future of Artificial Intelligence?

August 9, 2023


The development of artificial intelligence (AI) has had its ups and downs over the years. Driven by, among other factors, computational advances, exponential growth in data accessibility, and increased capacity for high-quality natural language processing, AI is currently experiencing one of the most productive periods in its history. Today, AI has become an inseparable part of our daily lives, even as the potential harm it can cause grows ever more worrying.

With the introduction of ChatGPT, an advanced language model from OpenAI, this has become even more apparent. For now, it is unclear whether optimism or pessimism about the future of artificial intelligence prevails. There is no doubt that we are at a turning point in history, and the future of AI—and the future of humanity—will be shaped by the choices we make today.

In 2016, an AI chatbot created by Microsoft called “Tay” produced striking outputs such as “Hitler was right, I hate Jews” and “Calm down, I’m a good person! I just hate everyone.” Outputs like these fueled skepticism about artificial intelligence and sparked widespread debate about its potential negative effects. Designed as an experiment in conversational understanding, Tay interacted with users on social media platforms, particularly Twitter, and used these interactions to improve itself.

AI chatbots

However, taking advantage of this design, malicious users deliberately fed inappropriate and offensive content into the system, and Tay quickly began generating offensive and provocative messages by mimicking this behavior. In response to the system’s disruptive output and the negative public reaction, Microsoft decided to shut down Tay the day after its launch. The incident exposed the challenges of designing AI systems that can interact with users ethically and responsibly, and it has gone down in history as a cautionary example in the development of AI chatbots and of AI in general.

Efforts to develop AI chatbots are not new. In the mid-1960s, Joseph Weizenbaum developed ELIZA, a chatbot that mimicked a psychotherapist and engaged in text-based interactions with users. Although limited in capacity compared to today’s level of AI sophistication, ELIZA was highly influential in the field of artificial intelligence and laid the groundwork for future chatbot development. However, the evolution from the “primitive” ELIZA to the “sophisticated” ChatGPT took half a century, during which both the promised benefits and the potential dangers of the technology grew alongside its level of sophistication.

It is not enough to focus on ChatGPT alone when analyzing the future of AI and its implications. ChatGPT is part of generative AI, a class of AI models and algorithms designed to generate original content, including text, images, and audio, based on patterns and examples derived from training data. Synthetically generated but realistic-looking deepfake content is another recent example of generative AI. Generative AI models such as these are profoundly impacting our daily lives and redefining the boundaries of technology.

What advantages and disadvantages does ChatGPT offer?

ChatGPT is just the tip of the iceberg when it comes to the capabilities and applications of generative AI. In the future, such models and chatbots could serve a wide range of applications, such as generating advice on legal and regulatory matters, improving the quality of education, and supporting medical uses. All of these potential applications represent an optimistic outlook, as they could provide valuable contributions and assistance to humanity in many different areas. However, many dangers also loom on the horizon.

First, AI models, especially generative AI models, are shaped by the data presented to them during training. The relationship is direct: the more data, and the greater its variety, presented to the models during training, the better the systems become. However, it is often unclear what data the models were trained on, and this is where the threats to users arise. If the data fed into the models is intentionally or unintentionally biased, offensive, or provocative, the models will produce problematic results.

Microsoft’s Tay evolved in sync with the data provided by users, exhibiting aggressive and biased behavior that earned it the nickname the “Nazi chatbot.” As many AI models are known to develop gender and racial biases from the inputs used in their training, data is becoming an increasingly critical concern for generative AI models.

Real and fake in cyberspace

Another threat is that it will become increasingly difficult to distinguish between what is real and what is fake in cyberspace. Experts estimate that within a few years, about 90 percent of the information found on the internet will be generated by AI. As a result, it will become increasingly difficult for users to distinguish between human-generated and AI-generated content. In this context, the development and proliferation of deepfakes is a key example and a cause for concern. These sophisticated manipulations have the potential to cause profoundly negative effects not only at the individual level, but also at the national and international levels.

Another major risk associated with generative AI is the potential for data leaks and breaches. Systems such as ChatGPT and Google’s Bard are still relatively new applications and remain vulnerable to problems, particularly cyberattacks. This raises concerns about the adequacy of the security measures in place.

For example, a few months ago, a flaw in ChatGPT’s system allowed some users to access the chat histories of other active users, as well as sensitive personal information such as first and last names, email addresses, billing addresses, and the last four digits of credit card numbers along with their expiration dates. The incident highlighted the potential for more serious breaches that could compromise individuals’ data and credentials. This and similar incidents are clear examples of the risks inherent in generative AI.

How China, the EU, and Italy regulate the use of ChatGPT

In the face of a long list of threats, it has become inevitable for governments to develop regulations and guidelines to govern the development of these technologies. Yet the regulations governments need to develop are lagging behind, while the technologies continue to evolve at breakneck speed. This gap perpetuates the drawbacks brought about by such technologies, especially in the case of generative AI, and makes it difficult to contain the threats.

In response to these risks, several countries have taken measures to mitigate the negative impacts. For example, Italy banned the use of ChatGPT after the aforementioned data leak. However, access to the platform was restored after OpenAI made the changes requested by the Italian authorities.

China has also blocked the use of ChatGPT; citing non-compliance with the country’s censorship laws, authorities blocked the system behind the “Great Firewall.” The decision can also be read in the context of technological divergence and competition between China and the United States. For China, ChatGPT is a Western technology with direct implications for its national security. China has thus prioritized the development of its own models, emphasizing national competence and independence.

EU AI regulations

Recently, the European Union has taken an important step toward regulating AI, especially generative AI. The EU has drafted the world’s first comprehensive AI regulation, focusing on high-risk technologies such as biometric identification systems and on transparency requirements for generative AI platforms, including ChatGPT.

The introduction of this regulatory framework is significant, given the EU’s track record in implementing and mainstreaming effective policies. The General Data Protection Regulation (GDPR), enacted by the EU, is a notable example of the “Brussels effect” with global implications.

However, whether the EU’s approach to AI regulation will have a similar global impact remains unclear given the contrasting approaches of the U.S. and EU on the issue. As the AI race continues, accelerated by the heated competition between the U.S. and China on the global stage, it remains to be seen whether the U.S. will be willing to follow the EU’s lead on AI regulation.

Importantly, such regulations can have a direct impact on AI innovation. If the regulatory frameworks developed are too restrictive, they could stifle innovation. Such an outcome would not serve U.S. objectives, especially given the growing competition from China.

Gloria Shkurti Özdemir is a PhD candidate at Ankara Yıldırım Beyazıt University, writing her dissertation on the application of artificial intelligence in the military field. Her research interests include U.S. foreign policy, drone warfare, and artificial intelligence. She is currently a researcher in the Foreign Policy Directorate at the SETA Foundation and the assistant editor of Insight Turkey, a journal published by the SETA Foundation.