Human-Centered Artificial Intelligence at a Crossroads

May 3, 2022

As Artificial Intelligence gets more complex, ethical concerns over the further industrialization of AI-based technologies grow.
An AI robot with a humanistic face, entitled Alter 3: Offloaded Agency, is pictured during a photocall to promote the forthcoming exhibition entitled "AI: More than Human", at the Barbican Centre in London on May 15, 2019. Photo by BEN STANSALL/AFP via Getty Images.

For the fifth year in a row, the Stanford Institute for Human-Centered Artificial Intelligence has issued the “AI Index” to measure the “contemporary trends of AI.” The index is an independent initiative led by the AI Index Steering Committee, a multidisciplinary group of experts drawn from diverse academic and industry backgrounds, and its value lies in the insights gained from large-scale data analysis.

The index takes in data and processes it into useful insights, reported in the form of meaningful figures and graphs. These insights serve as a reference that enables stakeholders and decision-makers to make informed decisions about the use and development of AI in ways that benefit humanity.

The AI Index also provides researchers and practitioners with trackable, quantifiable key performance indicators (KPIs) that can be referenced in future studies and that point to trends in AI investment. This is why the index is considered credible and reliable by active stakeholders and practitioners in the study and development of AI.

This year’s AI Index, released in 2022, sheds light on major developments in the AI industry between 2020 and 2021, and the findings are remarkable. They span a variety of themes and focus on controversial yet important subjects, covering both the theoretical and practical aspects of AI and its effects on sectors such as education, governance, policymaking, and the economy.

It is worth mentioning that this year’s findings are still being debated and are expected to be the subject of further study in the coming years. AI technologies are enjoying a golden era and are trending in global markets. There is, however, a gap between the risks companies take on by using AI technologies and the efforts to mitigate those risks for the benefit of humanity.

The report identifies several major developments in the field based on data collected through primary research. Below is a brief summary of the findings by sector.

The ethical and organizational dimension of AI

There is a growing emphasis on the ethical concerns arising from the further industrialization of AI-based technologies.

At the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), a session was dedicated to bias in language models, analyzing several issues regarding transparency, accountability, and justice in the use of AI. The issue was also discussed at other important annual AI conferences, such as the Conference on Neural Information Processing Systems (NeurIPS). These discussions took place after many complaints surfaced about biased AI algorithms found in several AI-based applications used across the globe.

Due to the COVID-19 pandemic, which may have accelerated the technical transformation driven by AI, 2021 represented a quantum leap in the development of, and investment in, AI technologies. In fact, AI shifted from being a growing trend to a mature technology with a global market. After 2021, realizing AI’s potential is no longer merely a research subject or a topic of theoretical study; AI now has tangible applications with quantifiable effects that are easier to observe and measure than they used to be.

The development of large language models and multimodal applications accelerated as a result of improved technical capabilities. Despite these major developments, ethical concerns continued to arise, as some texts generated by the models were classified as “toxic” or biased with respect to social, economic, and racial backgrounds.
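To make the idea of flagging “toxic” output concrete, the sketch below shows, in a simplified and hypothetical form, how a generation pipeline might screen text before release. The FLAGGED_TERMS list, the toxicity_score heuristic, and the threshold are placeholders invented for illustration only; real systems rely on trained toxicity and bias classifiers plus human review, not keyword matching, and nothing here is drawn from the AI Index report itself.

```python
# Minimal illustrative sketch of screening generated text before release.
# The lexicon, scoring heuristic, and threshold below are hypothetical
# placeholders; production systems use trained toxicity/bias classifiers.

from typing import List

FLAGGED_TERMS: List[str] = ["placeholder_slur", "placeholder_stereotype"]  # stand-in lexicon


def toxicity_score(text: str) -> float:
    """Toy heuristic: fraction of flagged terms that appear in the text."""
    lowered = text.lower()
    hits = sum(term in lowered for term in FLAGGED_TERMS)
    return hits / max(len(FLAGGED_TERMS), 1)


def screen_output(generated_text: str, threshold: float = 0.0) -> str:
    """Return the text if it passes the screen, otherwise a refusal marker."""
    if toxicity_score(generated_text) > threshold:
        return "[output withheld pending review]"
    return generated_text


if __name__ == "__main__":
    # Clean text passes through unchanged; flagged text would be withheld.
    print(screen_output("An innocuous generated sentence."))
```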

Liz Rykert, President at Meta Strategies, a consultancy that works with technology and complex organizational change, noted that “The key for networked AI will be the ability to diffuse equitable responses to basic care and data collection. If bias remains in the programming it will be a big problem. I believe we will be able to develop systems that will learn from and reflect a much broader and more diverse population than the systems we have now.”

As the report’s findings note, efforts to deal with such ethical concerns have not been sufficient. Major labs and AI companies, such as OpenAI, claim to have made progress towards eliminating the relevant ethical concerns. Nonetheless, the AI Index 2022 findings indicate that there is still a long way to go in dealing with the ethical concerns arising from the further development and use of AI technologies.

The economic dimension of AI

The 2022 AI Index report indicated that AI has become substantially and directly integrated into economic activity. The impact of AI on economic performance has spread globally through research, publishing, and funding opportunities, and AI investment has increased sharply, doubling in 2021 compared with 2020.

In 2020, there were only four funding rounds for AI-related projects; in 2021, there were 15. Companies specializing in data management, processing, and cloud services received the most funding, followed by medical institutions and financial technology (fintech) companies.

The 2022 report also found a decrease in the average price of AI-based/robotic weapons, a drop of 46.2% over the past five years. This is mainly due to the ethical concerns surrounding AI-based weaponry, as there are still not enough policies, laws, or clear guidelines regulating the use of AI-integrated weapons.

The main concern arises from the ease of procuring AI-based weapons and using them in ways that endanger people’s lives, especially the marginalized sectors of society. AI technologies have been shown to be biased, particularly against women and minority communities, and false identifications disproportionately impact already marginalized and racialized groups.

The research dimension of AI

The United States and China, alongside other countries, have invested considerable effort in researching and publishing on AI-related topics. This, however, gives the U.S. and China an unfair advantage in dictating the trends and direction of AI-related research and practice, as both countries steer the research in ways that suit their policies and serve their interests.

With the U.S. and China leading AI-related research, many developing countries are left behind, as they lack the opportunity to participate in policymaking and the setting of AI regulations. Instead, these countries are largely limited to consuming AI technologies, which contributes to negative stereotypes and biases against them.

The legal dimension of AI

The 2022 report uses the U.S. as an example of the growing number of laws and policies intended to regulate the practice and use of AI technologies. U.S. lawmakers and policymakers proposed 130 laws related to the use of AI in different fields for passage in 2021. Compared with previous years, 2021 indeed saw a major focus on proposing laws and policies to regulate the use of AI-related technologies; in 2015, only one such law was proposed.

Despite the massive increase in legislative effort and proposed laws, the percentage of laws actually approved has been very low: only 2% over the past six years.

In the grand scheme of things, there has been a gradual and considerable increase in the proposal of laws and policies regulating the use of AI, with 18 times more AI-related law proposals and policies since 2015 across 25 countries around the world. Yet not all of this legislation is in force, and most laws are still pending approval.

Financial costs of AI

The report indicated an evident 63.6% drop in the cost of developing the leading model for image classification. Moreover, the development time of AI systems has improved by 96.35%. These results align with most accredited global reports, which emphasize the falling cost of developing AI technologies and systems. The drop is partly due to improvements in design methods and internal programming.

However, several AI systems are still very costly, and only a few major labs and technology companies can afford them by virtue of their high purchasing power. This means that most small and medium-sized enterprises and labs cannot purchase such expensive AI systems, which complicates the design and deployment of AI systems that are affordable and viable for everyone.

Moreover, the authors of the 2022 report highlighted the privileges enjoyed by the parties most active in private-sector AI development and research. These include access to major collections of data (several terabytes of stored data) used for the development of AI systems.

The quality of these systems depends primarily on the nature and size of the data available for processing, as an abundance of data contributes greatly to improving the results of the resulting product or system.

Since 2021, nine out of ten modern AI systems have been developed using large bodies of data produced by as few as ten universities, and half of the machine learning systems developed have relied on the big data collections of specific universities and companies.

According to the 2022 AI Index report, the year 2021 witnessed intense, positive, and tangible globalization of the research, development, use, and regulation of AI-related technologies. Nonetheless, there has been a major spike in ethical issues, coupled with an increase in regulatory efforts for AI-related technologies.

Nour Naim is the Founder and Director of AI Minds. She is a researcher who is interested in Artificial Intelligence (AI) ethics, AI for social good, algorithmic bias, computer vision, machine learning, and natural language processing. Naim received her PhD from the Department of Management and Artificial Intelligence at Istanbul Aydin University, Turkey.