Argument
An expert's point of view on a current event.

Artificial Intelligence Will Entrench Global Inequality

The debate about regulating AI urgently needs input from the global south.

By Robert Muggah, a principal at the SecDev Group and co-founder of the Igarapé Institute, and Ilona Szabó, a co-founder and the president of the Igarapé Institute.
A journalist stands next to a Google AI poster in Accra, Ghana, on April 10, 2019. Cristina Aldehuela/AFP via Getty Images

The artificial intelligence race is gathering pace, and the stakes could not be higher. Major corporate players—including Alibaba, DeepMind, Google, IBM, Microsoft, OpenAI, and SAP—are leveraging huge computational power to push the boundaries of AI and popularize new AI tools such as GPT-4 and Bard. Hundreds of other private and non-profit players are rolling out apps and plugins, staking their claims in this fast-moving frontier market that some enthusiasts predict will upend the way we work, play, do business, create wealth, and govern.

Amid all the enthusiasm, there is a mounting sense of dread. A growing number of tech titans and computer scientists have expressed deep anxiety about the existential risks of surrendering decision-making to complex algorithms and, in the not so distant future, super-intelligent machines that may abruptly find little use for humans. A 2022 survey found that roughly half of all responding AI experts believed there is at least a one in 10 chance these technologies could doom us all. Whatever the verdict, as recent U.S. congressional testimony from OpenAI CEO Sam Altman reveals, AI represents an unprecedented shift in the social contract that will fundamentally redefine relations between people, institutions, and nations.

Adding to these ominous existential worries is the already lopsided distribution of power and wealth, ensuring that the winnings of future upheaval will accrue disproportionately to the 1 percent. But if AI menaces white-collar jobs and empowers undemocratic interests in privileged countries, what to say about the fallout in those parts of the world where billions toil in the informal sector without safety nets, making them even easier marks for power elites and their digital tools? However the AI disruption plays out worldwide, there is scant hope that, without mitigation, safeguards, and compensation such as universal basic income, the world will be a more equitable place to live, work, or vote.

Fears about the existential risks posed by machine intelligence are hardly new. In his 1872 novel Erewhon, Samuel Butler prophesied that sentient machines would eventually replace humans. In 1942, master science fiction writer Isaac Asimov famously laid out his three laws for robotics: Robots may not injure humans, must obey orders from humans as long as this does not violate the first law, and must protect humans’ existence as long as this does not violate the first two laws. A few years later, in 1950, Alan Turing imagined machines that could converse with humans, while in 1965 Irving John Good predicted a machine-driven “intelligence explosion.” The world had to wait another half century for the promised AI revolution to arrive.

And yet for all the historical premonitions, the current furor over AI is as unprecedented as it is uniquely unsettling. For one, the latest crop of highly advanced large language models and the computational power driving them are no longer confined to the laboratory but are already being used by hundreds of millions of people. Another cause for concern is that some of the most outspoken AI advocates are now convinced that its unregulated use poses a fatal risk to humanity in the near future. What was once floated as a distant theoretical threat is now a clear and present danger—so much so that technologists such as Eliezer Yudkowsky, Geoffrey Hinton, and Max Tegmark and more than 31,000 other people have called for a pause in training the most powerful forms of AI, which they see as among the “most profound risks to society and humanity” today.

Well before the latest outbreak of anxiety, governments, businesses, and universities across North America and Western Europe were debating the real and potential harms associated with AI. Their attention converged on at least four possible threats. The first is the existential threat posed by super-intelligent machines that may quickly dispose of humans. The second is widespread and accelerating unemployment, with Goldman Sachs recently estimating that as many as 300 million jobs are at risk of being replaced by AI. The third major concern relates to the disturbing way AI imitates and shares text, voice, and video—and could thus supercharge misinformation and disinformation. A fourth fear is that AI could be used to build doomsday technologies—such as biological or cyber viruses—with devastating consequences.

We are not yet at the mercy of thinking machines. As awareness of AI risks has grown, so too have standards and guidance to mitigate them. But for the most part, these are voluntary, including hundreds of protocols and principles advocating for responsible design and self-restraint. Common priorities include aligning AI with the best interests of humans and promoting safety in the design and deployment of algorithms. Other objectives include transparency of the algorithms themselves, accountability in relation to their development and application, fairness and equity in their use, privacy and data protection, human oversight and control, and compliance with regulations. The focus on voluntary self-policing is starting to change, with tech companies themselves advocating for the establishment of AI agencies and the enforcement of more robust rules.

Yet the push to create safeguards is far from ecumenical. To date, most of the debate over AI and possible strategies to mitigate unintended harms is concentrated in the West. Most of the government and industry standards now on the table were issued in the European Union, the United States, or member states of the Organization for Economic Cooperation and Development, a club of 38 advanced economies. The EU, for example, is poised to adopt a new AI Act focusing on applications and systems that pose unacceptable and high risk. The Western focus is hardly surprising given the density of AI companies, investors, and research institutes clustered from Silicon Valley to Tel Aviv, Israel.

Even so, it is worth underlining that the needs and concerns of regions such as Latin America, Sub-Saharan Africa, South Asia, and Southeast Asia—where AI is also rapidly expanding and will generate monumental effects—are not much reflected in the AI debate. Put another way, the vast majority of discussion about the consequences and regulation of AI is occurring among countries whose populations make up just 1.3 billion people. Far less attention and resources are dedicated to addressing these same concerns in poor and emerging countries that account for the remaining 6.7 billion of the global population.

This is a troubling omission, given that many of the darker consequences of poorly regulated AI are particularly resonant in the so-called global south. Undoubtedly, some anxieties are global, including those over super-intelligence, job losses, and accelerating fake news. Yet the darker portents of AI represent anything but an equal-opportunity affliction. Unmitigated AI could deepen social, economic, and digital cleavages between and within countries. The unregulated spread of AI could also further concentrate corporate power, and deepening techno-authoritarianism could accelerate the corrosion of already damaged democratic institutions.

While these AI-induced harms clearly represent universal threats, their impacts not only will fall unevenly across an already badly divided globe but could also prove particularly paralyzing in lower- and middle-income countries with precarious regulatory guardrails and weak institutions. For one, algorithms and datasets generated in wealthy countries and subsequently applied in developing nations could reproduce and reinforce biases and discrimination owing to their lack of sensitivity and diversity. Moreover, low-wage and low-skill workers already suffering from poor pay and lax labor protections are particularly exposed to the job-killing effects of AI. There are, of course, many potential benefits to the spread of AI in the global south, but these may not be harnessed without adequate AI regulation, ethical governance, and better public awareness of the need to limit AI’s damaging effects.

Given the blistering pace of AI advances, the time for building regulatory guardrails and other backstops is now. AI-powered technologies are rapidly being adopted in some of the world’s most unequal countries in Africa (including the Central African Republic, Mozambique, and South Africa), the Middle East (including Oman, Qatar, and Saudi Arabia), and Latin America (including Brazil, Chile, and Mexico). Yet many of the basic laws and principles to govern safe AI have yet to be fully developed, much less negotiated and publicly debated. Likewise, large U.S., European, and Chinese technology vendors are rapidly introducing powerful AI technologies in many developing countries, securing dominant market share in surveillance and other AI applications, and wiping out the local competition. The use of AI technologies to reinforce illiberal and autocratic governance is already on full display in places such as Cambodia, China, Egypt, Nicaragua, Russia, and Venezuela.

Unfettered AI development is good news for autocrats and power elites who are already set up to reap the spoils of government and monopolize public goods. Unless effective regulations, equitable compensatory mechanisms, social safeguards, and political firewalls can be built, AI is likely to deliver greater uncertainty and collateral damages to the globe’s digitally challenged underclass, for whom next-generation technology will be someone else’s miracle.

Gabriella Seller, a consultant with the Igarapé Institute, and Gordon Laforge, a senior policy analyst at New America, contributed to this article.

Robert Muggah is a principal at the SecDev Group, a co-founder of the Igarapé Institute, and the author, with Ian Goldin, of Terra Incognita: 100 Maps to Survive the Next 100 Years. Twitter: @robmuggah

Ilona Szabó is a co-founder and the president of the Igarapé Institute and a member of the U.N. Secretary-General’s High-Level Advisory Board on Effective Multilateralism.

