AI, Disarmament, and the Call for Responsible Regulation
“UN Disarmament Chief Mrs. Izumi Nakamitsu Warns of AI-Driven Autonomous Weapons: Calls for Global Action to Protect Humanity”
Mrs. Nakamitsu highlights crucial points about the integration of AI in military systems and the implications for disarmament and civilian protection. She emphasizes the need for a legally binding framework to regulate lethal autonomous weapons systems (LAWS) and underscores the responsibility of all actors involved in the development and deployment of AI technologies.
Responsible AI development could lead to advancements in areas like arms control verification, ultimately reducing the human cost of conflict. However, the need for human oversight, particularly in sensitive areas like nuclear command, remains a critical concern.

Mrs. Nakamitsu suggests several key points regarding the integration of AI in military contexts:
Legally Binding Frameworks: There is a pressing need for legally binding instruments to regulate lethal autonomous weapons systems (LAWS) to ensure they comply with international humanitarian law.
Global Cooperation: Multilateral discussions, such as those within the Convention on Certain Conventional Weapons (CCW), are crucial for moving from general discussions to concrete negotiations regarding LAWS.

Responsible AI Use: AI should be developed and deployed responsibly across its entire lifecycle, with an emphasis on ethical standards to prevent misuse.
Human Oversight: In critical areas like nuclear command and control, human decision-making must remain paramount to avoid risks associated with AI algorithms.
Positive Applications: AI can also be leveraged for beneficial purposes, such as enhancing arms control verification and minimizing civilian casualties in conflict zones.
Inclusion of Stakeholders: Engaging private sector actors and other stakeholders in discussions about AI development is vital to establish ethical standards and mitigate risks.
Through these suggestions, Nakamitsu emphasizes the balance between harnessing the potential benefits of AI and addressing the significant risks it poses in military applications.
In recent years, you have repeatedly expressed concern that the development of autonomous weapons will fundamentally alter how conflicts are planned and carried out. Can you tell us what has struck you most lately in the ongoing wars, with the increasing integration of AI?
Artificial intelligence itself is not a weapon; it is an enabling technology, like electricity. But it can be integrated into weapon systems, with serious implications, and AI can also be weaponized. You already see some of those new technologies, whether automatic targeting systems or the integration of autonomy into weapon systems, being used in the war in Ukraine, for instance. Unmanned aerial vehicles, or UAVs, commonly known as drones, are remotely operated and, when integrated with AI, can potentially become autonomous in identifying targets.

Drones, I would say, have almost become a weapon of choice, because they are very cheap, readily available, and have been used in many places: in the Nagorno-Karabakh conflict between Azerbaijan and Armenia, or in Libya.
We’ve seen video clips from Ukraine where drones are targeting and killing soldiers. In Gaza, “Lavender” is reported to be an AI system that identifies the locations of Hamas operatives.
And if states, or any other actors, were actually to develop autonomous weapons that would identify targets and make the decision to pull the trigger and kill someone, that is a serious issue.
In that context, how can civilians be protected?
Any weapon system needs to be used in accordance with existing international law, and most importantly, of course, international humanitarian law (IHL). But there are gray zones, and the question marks are definitely there, which is why, as the UN, we are sending several messages. First and foremost, with any of those weapon systems, existing international law needs to be respected. Moreover, as the Secretary-General has been saying, there should be, by 2026, a legally binding instrument that prohibits those lethal autonomous weapon systems (LAWS) that we already know cannot be used in accordance with existing IHL, and that regulates all other types of autonomous weapons systems. That is what is requested in the New Agenda for Peace.
Many different multilateral negotiations about artificial intelligence in the military domain are taking place: first of all, in Geneva.
First of all, let me clarify that governance of AI is separate from the discussions taking place on lethal autonomous weapons systems (LAWS). AI in the military domain is a much larger topic. While AI could enable autonomy, it is not a prerequisite for designing lethal autonomous weapons systems.

Going back to the topic of LAWS, intergovernmental discussions are taking place in Geneva within a humanitarian framework agreement, the CCW, or “Convention on Certain Conventional Weapons”. The purpose of that framework Convention is to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.
We hope to move from a general discussion to the negotiation of a legally binding instrument; we are at that critical step right now. The current mandate of that process is not a negotiating mandate, so we need to transform it, and hopefully that will be achieved before 2026.
So the treaty called for by António Guterres might not happen by 2026, then?
Probably not. Member States are still very much pushing for this time frame, but considering the way mandates are structured, it is difficult to expect the negotiation to conclude by 2026. That said, there is a general movement in that direction. And once we are able to start negotiating a protocol under the CCW, it will become a legally binding instrument.
There are still key issues you need to tackle that relate to LAWS and are not being discussed within the CCW…
The CCW process focuses mostly on the international humanitarian law discussions, the ethical use of those weapons systems, and so on. But this autumn, Austria is introducing an informal consultation process here at the UN First Committee, to identify issues that it considers fall into that gap.
For instance, the main concern of African states is non-state actors (terrorist organizations, transnational criminal organizations, or rebel groups) getting hold of those weapon systems and using them in internal conflicts. But currently there is little discussion about the possibility of autonomous weapons systems becoming relatively cheap in the future, which raises additional concerns about their proliferation. So Austria brought a new resolution to the UN First Committee, to be voted on later this year, to start informal consultations on those and other topics.

Apart from the CCW discussions, there are also other initiatives. Which ones can move the debate forward?
The topic of AI, which as I explained is distinct from the issue of LAWS, is gaining increasing interest and traction. In February 2023, the Netherlands and the Republic of Korea launched the Responsible Artificial Intelligence in the Military Domain (REAIM) process. It covers broader AI applications in the military domain, both negative and positive. For example, general situational awareness, intelligence analysis, logistics, and supply chain management can all be considered positive applications of AI.
They are also tackling issues related to the responsible use of AI in the military domain and identifying a number of key issues to be addressed. The endorsing States participating in the REAIM initiative adopted the “Blueprint for Action” last September, at the second summit in Seoul. One important aspect is that many States, including China, are deeply and actively engaged in the REAIM process. Another is that the Netherlands and the Republic of Korea, with a couple of other countries as co-sponsors, are now bringing this issue to the United Nations. In all likelihood, their draft resolution on AI in the military domain and its implications for international peace and security will be adopted, so there will be a mandate for the Secretary-General to put together a substantive report analyzing the challenges and opportunities, with the national views of Member States on this issue annexed to it. And next year, Member States will start discussing what they need to do about broader AI applications in the military domain in general. What we, as the UN, advocate is responsibility throughout the life cycle of artificial intelligence.
What does that concretely mean?
It means that there has to be responsible behavior at every stage: the design, data collection, development, deployment, and use of AI in the military domain. For example, you should not design artificial intelligence systems that could be used for malicious purposes, and you should not test AI-enabled weapon systems on the battlefield without making sure that they can be used responsibly and in accordance with international humanitarian law.
Responsible behavior across the life cycle of AI in the military domain is the general message that we are advocating. And I think Member States are now coming together and agreeing on this notion of responsibility. Not all of them, but there is a general awareness that if these things are left unchecked, then collectively we will face a backlash.
But can we count on responsibility alone? After all, it is what is supposed to govern the development of nuclear weapons, and look where we are 65 years later, with the same threat over our heads… So is it enough to put responsibility at the heart of the UN doctrine?
It’s a first step. I agree, the speed of multilateral discussions is very often outpaced by the speed at which those technologies develop; that is an issue. And since you mention nuclear weapons, I hope that all nuclear-weapon States will agree that AI should not be integrated into nuclear command, control and communications systems, because that would bring major risks. One of the most pressing risks relates to compressed decision-making time for human commanders. Also, very often we don’t know why AI reaches certain conclusions, how it actually arrives at a certain decision; this is called the black box problem. So we already know that by introducing AI into nuclear command, control and communications systems, there will be a greater risk of miscalculation and misunderstanding that could lead to the use of nuclear weapons. One of the really important norms that we need to make sure all nuclear-weapon States agree on is that, when it comes to nuclear weapons systems, humans should always, always remain in charge of any decision-making.
Algorithms and data cannot embody ethics or responsibility themselves, so delegating decisions about whether human beings survive or are killed is a moral line that we as humankind should not cross. I met with nuclear experts in Washington in October, and they were very much concerned about the integration of AI and algorithms into nuclear command, control, and communications, in large part because the impact will be unpredictable. We don’t know what will happen. And when it comes to whether or not to push the button on nuclear weapons, we don’t want anything, anything, unpredictable.
Allow me to go back to the multilateral frameworks: the US is also leading a political declaration on a framework for AI in weapons.

Yes, 56 countries have endorsed that US political declaration. That process itself is, for the moment, outside the UN discussions, but we are informed regularly about its progress. All these discussions taking place outside the UN are quite helpful in creating a global-level conversation. And I can tell you, the cybersecurity discussions took a similar path: the conversations started outside the UN but have for many years now been taken up by States within the UN. And next year, a permanent UN platform for engaging on cybersecurity issues is expected to be established.
Given that the private sector is designing most of the new developments in AI weaponry, and since you were talking about responsibility, how important is it that these companies be involved in these negotiations?
It’s a key aspect. Again, I use the example of the cybersecurity discussions: in many of those UN discussions now, we invite other stakeholders, so that engineers, academia, and industry can participate and contribute. The UN is an intergovernmental entity, so decision-making always rests with governments, but there is a strong awareness that they can benefit from the contributions of those multiple stakeholders. A few years back, I visited Silicon Valley and spoke to the engineers themselves: they were saying that there has to be a strong ethical standard for private sector engineers, to make sure that the technologies and algorithms they are developing will not be misused.
At both REAIM summits there was very strong private sector participation as well, and I hope that this trend will continue, because things are moving so fast. And there is a very specific challenge here, which is that these technologies are dual-use, or multi-purpose. Technologies that are being developed for civilian, peaceful uses can be misused, or used with malicious intent: they could be weaponized. How to tackle that risk also has to be part of the conversation.
We’ve talked a lot about the dark side and the risks of technology, but you believe that technology can bring progress. So how can AI actually be beneficial in reducing the human cost of weapons?
This is an area where people should be focusing their attention. We’ve been very busy talking about the dark side of these issues, and not enough is being said about some of the positive aspects. Artificial intelligence can be useful for the verification of arms control and disarmament agreements, for instance: with satellite data and imagery, it is becoming much more difficult to hide what is taking place in military operations, and AI will be much faster than human beings at analyzing images. And there are many other areas, such as making sure that explosive weapons are not used in densely populated areas. Unfortunately, this is a phenomenon that we are seeing in Gaza, in Ukraine, and now in Lebanon as well.
If we actually have the political will to use artificial intelligence to collect and analyse data for positive purposes, namely to avoid collateral damage and human casualties, then artificial intelligence will be genuinely helpful in reducing the currently unacceptable human cost of weapons.