The United Nations takes on the challenge of AI governance with a scientific panel modeled after the Intergovernmental Panel on Climate Change
For all its lofty ideals, the United Nations is not known for its lightning speed. Yet ambassadors Héctor Gómez Hernández and Maritza Chan Valverde, permanent representatives of Spain and Costa Rica, are proud to say that in just six months, lightning speed by UN standards, they have accomplished the task that the president of the General Assembly, Philemon Yang, assigned them last November as co-facilitators: to establish an International Independent Scientific Panel to guide global dialogue on Artificial Intelligence (AI) governance.

Despite genuine satisfaction with the work, there is a shadow on Ambassador Gómez's face when we meet at the Spanish mission on the 36th floor of One Dag Hammarskjöld. It might be the mood of that rainy day, but the tops of the high-rises emerging from the fog seem to match it. "AI is going so fast," he concedes. Faster than any other technology in history. At the time of this interview, ChatGPT is only two and a half years old, having launched on November 30, 2022. Yet anyone who tried it last year and wasn't impressed with its capabilities will be shocked by a more recent attempt. AI has already crossed the crucial threshold where it can't be distinguished from humans. At that pace, by the time the framework to establish this panel is presented in September, the world could easily be facing the threat of a superhuman intelligence running wild among us.
“Artificial intelligence is a giant leap forward in terms of timelines and the speed at which we access information,” the ambassador admits. “It could be an extraordinary contribution toward building a better society. But in terms of regulation, the concern is to ensure the right balance for a harmonized and equitable global development with respect to human rights.”

Humans are notoriously slow to react to the urgency of catastrophes, inexplicably ignoring the warning signs until they explode in their faces. The day before Russia launched the first major invasion in Europe since World War II, journalists from around the world who had traveled to Kyiv reported bars full of people in a vibrant nightlife scene. Within hours, traffic jams clogged the exits of the city as people rushed to leave.
Two years earlier, while countries like Spain and Italy had dramatically shut down in the face of a deadly coronavirus, New Yorkers barely paid attention to the news. Within days, their city became the epicenter of the pandemic. It's almost as if the orchestra always keeps playing on the Titanic while the ship sinks. Today, with AI gearing up, we are once again in the calm that precedes the storm.
“The most important issue facing Americans is not President Donald Trump’s antics. Nor is it the (now much reduced) antics of woke progressives. Yet that’s what we’re mostly fighting about, while we neglect the single biggest question we have to answer: What are we going to do about AI?,” wrote Megan McArdle in The Washington Post last May. “But when I ask people this question, the most common response is a blank stare or a shrug.”
One person who saw it coming is 76-year-old Secretary-General António Guterres. According to insiders, he is focused on AI to the point of obsession. On his initiative, the Global Digital Compact was launched with the objective of ensuring that digital technologies are used responsibly and for the benefit of all, as part of the Pact for the Future approved last September, following the Summit of the Future.
Just as Ban Ki-moon contributed significantly to global climate action during his tenure by prioritizing climate change as a major global challenge, Guterres could be remembered for placing AI on the global agenda. This threat is even more urgent. The General Assembly endorsed the creation of the Intergovernmental Panel on Climate Change in 1988, but it wasn't until 2015 that the Paris Agreement was adopted. Since the new scientific panel on AI is modeled after the climate one, it's fair to say its development is happening at lightning speed. The urgency of the moment and the speed at which society is embracing AI demand nothing less.

AI is now everywhere. There are summits and initiatives happening in every country, but nothing is organized at a global level. "We cannot depend on the initiative of a company, a university, or a country to host these meetings. We need to establish an organized framework for action," he warns. This is where the UN steps in, with the purpose of establishing a scientific panel within a multilateral framework where member states, private companies, universities, and all other actors can bring their concerns and resolve the existing tensions over AI development. "From that moment on, there will be a long way to go. That's why we need to have the best minds in this scientific panel," says Ambassador Gómez.
The document being negotiated contemplates the selection of 40 experts who will identify the weaknesses of the multilateral system and the challenges of coexistence between the public and private sectors in the development of AI. In principle, the majority agree on the need for a more balanced relationship between public and private, with a multilateral commitment. "They are grateful for the UN initiative," Ambassador Gómez says.

The Global Digital Compact itself followed consultations with one million people from around the world. To establish the framework that will be presented in September, there have been numerous rounds of consultations, with virtual open rooms where "everybody who wanted to participate" could join at their convenience, from tech giants like Google and Microsoft to 123 member states. The co-facilitators have listened to input from individuals, governments, private-sector entities, civil society organizations, and other stakeholders, all brought together to work collaboratively under UN leadership.
From the need to supply energy to the breathtaking speed at which AI is being applied in fields like medical science, education, and public response, the world needs answers that no single person or company can provide. "There are so many fields of advanced research that it must be the protagonists of each sector themselves who do the work of raising awareness about the necessary commitment," explains the ambassador. "They will be the ones to identify the weaknesses of the multilateral system and of the coexistence between the public and private sectors; to pinpoint vulnerabilities, seek balance, and commit to multilateralism." Only then will it be possible to make the best decisions, share knowledge, and determine whether there should be regulation.

One of those leading voices in AI is Yoshua Bengio, the most cited computer scientist in the world and one of the 100 most influential people on last year's Time magazine list. A pioneer of artificial neural networks and deep learning, he used to dream of machines that could think. Now he worries they might decide. At a recent meeting in Montreal convened by the United Nations to explore the shape of a future AI governance framework, the founder of the AI research institute Mila stood among diplomats and regulators warning of a future dominated not by human brilliance but by machine goals.
“We are driving into the fog,” he told them, invoking his favorite metaphor. “And we don’t know if there is a precipice ahead.”
He could have added: and no one’s really hitting the brakes.
Bengio is no alarmist. For decades he pursued machine intelligence with the conviction that it could expand human potential. But since the release of ChatGPT in late 2022, his calculus has shifted. “We have reached the mastery of language,” he says.
As AI advances faster than most governments can legislate, the UN is racing to answer a critical question: Can it build the guardrails of global AI governance before the precipice appears? Ambassador Gómez assures that the big tech companies have been present and they also agree on the need for governance. “They understand that work must move in a common direction.”
There is no doubt AI will disrupt society, and especially jobs. Some experts ponder more unsettling questions. "How do you keep power forever over something that is much more powerful than you?" wonders Stuart Russell, professor of computer science at UC Berkeley, in the documentary The Thinking Game. Among the growing chorus of critics is Geoffrey Hinton, known as the "Godfather of AI" for his "foundational discoveries and inventions that enable machine learning with artificial neural networks." The Nobel laureate has become a prominent voice of concern about the rapid advancement of AI. He has warned that AI could take control away from humans and possibly lead to the end of the world, a risk he feels people haven't fully grasped. "The best way to understand it, emotionally, is we are like somebody who has this really cute tiger cub. Unless you can be very sure it's not going to want to kill you when it's grown up, you should worry," he told CBS in a recent interview. These are not isolated voices. In 2023, more than 1,000 technology leaders and researchers signed an open letter calling for a moratorium on the development of new systems, warning that AI poses "profound risks to society and humanity." But of course, technology won't stop.
Hinton predicts AI will make authoritarians more aggressive and hackers more effective. Guterres fears it could aggravate digital inequalities and ignore cultural differences. Other experts are concerned about safety, potential misuse, accidental risks, unintended consequences, security and privacy risks, even mimicking or faking alignment with human values.
Hinton, like many of the experts who have participated in the UN project, believes that AI needs to be carefully regulated to mitigate the risks, but he told CBS he doesn't expect that to happen soon. Many big-tech moguls who recognize the danger are lobbying in Washington against AI regulation, foolishly putting humanity at risk for profit. In fact, the US House Energy and Commerce Committee's new budget reconciliation bill includes a decade-long moratorium on state regulation of artificial intelligence, a measure considered extraordinarily broad and alarming. Sen. Ted Cruz (R-Texas) intended to introduce similar legislation in the Senate. "If adopted, it would extinguish the only meaningful effort to protect Americans from AI-related risks," The Washington Post wrote in an editorial.
The UN has an opportunity to step into that vacuum. It's hard to believe the world can regulate AI without the strong participation of the US, which, at the time of this printing, has not even confirmed an ambassador to the UN. Ambassador Gómez assumes that Washington has supervised the American mission's participation in the open process to develop this framework, but that might be too much to assume.
Under the proper guidance, AI could reduce barriers to accessing advanced expertise and contribute to achieving the UN Sustainable Development Goals. It could create new opportunities for humans to interact in ways that accelerate discovery and innovation and open a new frontier of creativity. AI technology has already spread into multiple applications across the economy and society faster than it can be taught. Even though only 18% of US school districts provide any guidance on AI, 60% of principals and 40% of teachers used AI in the 2023-2024 school year, according to the 2025 UN Human Development Report. Among workers in 27 countries, almost half used AI every day in 2024, up 230% from 2023.
There are many opportunities for AI to advance innovation without machines replacing humans. But using AI requires tradeoffs. How much does society gain? What moral and ethical framework do we need if machines can act as moral proxies? Will that interaction play out differently in different cultural contexts, in myriad ways? We know it's coming for our jobs. Using AI to execute tasks can replace tasks or even entire occupations, reducing employment and wages for incumbent workers.
There is room for both concern and optimism. AI could change us as individuals, as societies, and as a planet, but the Human Development Report concludes that it cannot replace human judgment. Human input may be particularly valuable in situations where even small deviations in AI outputs carry a wide range of implications, from extraordinarily good to catastrophic.
Instead of looking to the future through a foggy fear of the unknown, the UN invites us to shape that future by learning more about what AI can and cannot do now and what might become possible as it evolves. By next year, the cacophony of opinions could be replaced by the sound conclusions of a leading scientific panel that frames the AI race in the positive light humanity needs to harness its benefits and navigate the future. We might begin to bring fiction to life in a uniquely human way, just in time to achieve the 2030 Agenda for Sustainable Development that nobody believed possible without a superhuman acceleration. Or, perhaps, we'll finally spot that cliff behind the fog before we drive right into it. After all, if not the UN, who else is here to guide us?