“AI is more powerful than fire or electricity.” Those were the exact words of Google CEO Sundar Pichai. Indeed, AI is rapidly changing the “expected future”: what we thought was “possible”. It is revolutionizing everything from healthcare and science to communications and national security at record speed.
But with great power comes great responsibility. There are significant risks, some that we know and some that we cannot even predict. No one currently understands AI’s inner workings well enough to fully control its outputs or predict its evolution. Nor are decision makers held accountable for developing or using systems that they do not understand.
Left unchecked, AI can become a very dangerous beast. Used for good, it could be the best thing that ever happened to humanity.
This came as a big surprise to many: AI experts have made statements showing that they are pro-regulation, just like policy experts. Geoffrey Hinton, often referred to as the “Godfather of AI,” warns of existential risks posed by uncontrolled AI development. Fei-Fei Li, an advocate for human-centered AI, emphasizes the need for regulations that prioritize societal benefits. Elon Musk, while warning that excessive rules could stifle innovation, acknowledges the necessity of regulations. Sam Altman, CEO of OpenAI, emphasizes the importance of international regulatory oversight. While policy and tech experts are optimistic about AI’s potential for good, knowing how powerful the technology is, there is serious concern about risks—especially around national and international security. (Details in Figure 2 of the UN report.) The stakes are too high to rely on market forces alone.
While experts agree that regulations are necessary, it is a huge challenge to come up with a holistic, global approach that cuts across economic, political, societal, ethical, human-rights and other domains.
The big question is: how do we govern a technology that is developing at breakneck speed and touches all of human life? As the United Nations’ “Governing AI for Humanity” report highlights, there is no shortage of documents and dialogues focused on AI governance. Yet none of them is global in reach or complete in coverage. This leads to problems of representation, coordination and implementation.
As Figure (a) of the UN’s report shows, entire regions of the world have been left out of governance conversations. Only 7 countries were parties to all of the sampled AI governance efforts, while 118 countries were parties to none.
To top it off, even national AI regulations vary greatly. The European Union’s AI Act sets the standard with a risk-based approach, categorizing AI applications by potential harm. This pioneering framework ensures that high-risk technologies, like facial recognition, follow strict compliance rules.
In contrast, the United States takes a more flexible route, relying on voluntary industry commitments and state-level regulations, which critics argue might leave gaps in oversight.

China is leading the way with its assertive approach to algorithmic governance, emphasizing both innovation and control. Its strict regulations ensure that AI technologies align with the country’s ethical and security standards, balancing national growth with oversight. AI governance must span the globe to be effective in averting a race to the bottom on safety and rights. The technology is borderless. It would be an illusion to think that any country alone could control it. Global coordination and governance is a must.
An MIT Professor is very worried
In a recent interview, UNESCO spoke to Daron Acemoglu, an MIT professor who argues that AI currently benefits only those who develop it. He noted that UNESCO developed the first global standard on AI and has been a leader in emphasizing its social dimension: we need ethical standards as well as legal standards for the new AI age.
When asked what policy makers can do to make sure AI benefits everyone, he said that the biggest threat related to AI is not the tools but the “human actors”: those using AI for incentives that are not aligned with society’s, such as people who create deepfakes or silence others.
Acemoglu worries that if AI is not controlled by policy makers and is left to tech experts alone, companies might use it to crush their competitors and silence their employees, much as happened during the British Industrial Revolution.
That revolution was a truly discontinuous event from an economic point of view, but it led to a significant worsening of working conditions and wages for the middle class. Health conditions deteriorated so badly that life expectancy dropped to 30 years in places like Manchester.
The Path Forward
The UN’s Governing AI for Humanity report provides a comprehensive analysis of the risks and impacts AI poses across multiple sectors. Among its key recommendations is the creation of an independent international scientific panel on AI to guide global governance. The report also advocates a biannual policy dialogue and an AI standards exchange to foster collaboration and share expertise.
Additionally, a global fund for AI is proposed to help close the AI divide, while a global AI data framework would standardize data practices. Finally, the report recommends establishing an AI office within the UN Secretariat to coordinate AI-related activities effectively.
Ultimately, the rapid advancement of AI presents both unprecedented opportunities and formidable challenges. As the United Nations’ report makes clear, AI governance cannot be left to market forces or a patchwork of national policies. The need for a coordinated, global effort is urgent. AI’s potential to transform society is vast, but without robust, equitable, and comprehensive regulation, the risks may outweigh the rewards.
The time to act is now, and global cooperation is the key to ensuring that AI serves as a force for good across the world.