U.N. Principles Against Disinformation: Tech Giants’ Responsibility for AI

In an increasingly digital landscape fraught with the perils of disinformation, Melissa Fleming, United Nations Under-Secretary-General for Global Communications, discusses the U.N.’s comprehensive strategies to combat misinformation, the evolving digital communication landscape, and the implications of artificial intelligence in an exclusive interview. Through powerful storytelling and a commitment to truth, Fleming highlights the U.N.’s ongoing efforts to continue to be a source of trusted information and inspiration. This interview has been shortened and edited for clarity.

You have been quite outspoken about the dangers of misinformation and disinformation, especially in the context of global issues. Could you share some insights on how the U.N. is tackling this challenge?

Well, first, we are waking up the whole U.N. system to the fact that disinformation is making our work so much more difficult, whether it is trying to keep the peace in peacekeeping, whether it's conflict prevention, or whether it is just trying to get people on board with climate action and the SDGs. Each one of these areas is being threatened by disinformation. And then we have the dimension of hate speech, which is also fueling conflicts and making environments worse. What we found with the rise of social media over the years is that, without very many guardrails, anyone can be a publisher, anyone can say pretty much whatever they want and get huge amplification. Many realize that the louder, the more hateful, and the more outrageous their voices, the more amplification they're going to get. So you have an information environment that has become polluted at best and toxic at worst. And, like our natural environment, a toxic information environment is really difficult for human beings to thrive in. So it's more than just a communications issue. It's more difficult for us to get our message across, that is true, but it is actually harming societies. It is fueling distrust in institutions and in the media. It is driving horrific online attacks against journalists, especially women journalists, women leaders, activists, and people who are trying to make the world a better place, discouraging or silencing them from doing their work. We have to demand a stop to this, and that's what I'm really inspired to work on beyond just getting the message of the U.N. out there.

How is the U.N. adapting its communication strategies to effectively engage with a global audience in this digital age?

The other thing about the digital age is that there's just so much information, and in one way that access is a wonderful thing. We've never had more access to information, and it's so easy to do research now. It's so easy to access all kinds of sources of news and information at your fingertips. But one of the problems, as the Reuters Institute has found in its media studies, is that people feel overwhelmed. The other problem is that because the algorithms are designed to promote posts that are provocative or evoke fear, news headlines have become angrier and angrier. So even if you're trying to convey information responsibly in the digital ecosystem, the volume has been turned up, it's just become an angrier space, and people are actually starting to feel really distressed, really anxious. More and more people are turning away from the news, at a time when we need people to be informed, when we need people to be seeking trusted and reliable information, to feel like they can access it and that they can believe what is written there. That's the challenge, and that's something that we're working on.

Secretary-General António Guterres (left) swears in Melissa Fleming, Under-Secretary-General for Global Communications.

So, does social media have a role in news headlines becoming more aggressive?

It is because algorithms are designed to make money off of the posts that generate the most clicks. And what generates the most clicks is what makes people's hearts palpitate, not the latest facts about a certain situation. Beyond that, social media channels, Meta in particular, have downgraded news. When social media first came onto the scene, a lot of news organizations and U.N. organizations were very excited: we could not only communicate through journalists, but we could also speak to people directly. Many news organizations used social media channels as a distribution model, as a way to reach people and bring people back to their news sites. But more recently, especially since 2018, and then also with the advent of Threads, Meta has decided that it is going to deemphasize news, and that's been a tremendous problem for many news organizations that were dependent on that distribution. The U.N. itself is downranked on Meta. That means that our content is much more difficult to surface than the content of an individual. So it is a constant struggle for us to surface U.N. news content on our U.N. accounts, whether about a latest report, the news around the U.N., or posts about climate action, because we're being seen as a political news kind of organization and we're downranked. Our starting point is much lower and it's much more difficult to get our voices out, so we've had to diversify. We go on other platforms like LinkedIn, where we're not downranked. We're not downranked on X either, but it has become a very hateful, very disturbing space. Still, we have millions of followers on X, and people who come to us for information and inspiration. And I think that's why we want to continue to be a source of trusted information and a source of inspiration.

Has the U.N. or your department tried to communicate with X to discuss hateful content? 

For years we’ve been communicating with social media platforms and we engage with them at various levels, sometimes on a day-to-day basis flagging different content, and asking them to take things down. At the higher policy level, the Secretary-General has met with the highest levels of many social media companies, but not with Elon Musk. Since he took over the platform it is very difficult for us to have any contact that is meaningful with that platform.

Moving on to AI, everybody is now talking about artificial intelligence and the rapid advancements in that field. We're witnessing incredible benefits, but at the same time, you have also talked about the risks and the serious concerns it raises across various platforms. What are the benefits and risks of AI?

I am just going to speak about the information realm, because there are countless potential benefits of AI in areas like health, education, and medicine. In probably every dimension you could make a case that AI could accelerate human understanding and our capabilities to do things that we weren't able to do before. But there is a dark side, and I'll just speak about the dark side that we're concerned about in the context of information. There are fantastic AI tools that make our professional lives easier; think about journalism. But AI needs to train on content that is already out there, so it has basically scraped the entire internet to train itself, and the internet is full of fantastic information and full of terrible information. We've already seen that sometimes when you ask AI agents questions, you get back absolutely crazy answers. So, what we need to make sure of, first of all, is that AI is trained on reliable information, which we're not so sure of. Secondly, that biases are not coded into its DNA: these could be racial biases, these could be gender-based biases. And then, AI tools are already being deployed and could be weaponized in the disinformation field. Disinformation actors are already using AI tools to create deepfakes. This has been extremely worrying in the election sphere: deepfakes of politicians that deceive voters about whom they might choose. And it's very disturbing to many women and girls: deepfake pornography is being used against them. We've seen it; teenage girls are the victims, women leaders are the victims, and even boys are the victims. So, it's a tool that can be cheaply used to create content that is very difficult to distinguish as real or not real. Before, with disinformation and fake photos, there were ways to be a sleuth and an investigator, to geotag or geolocate. That's where the deepfakes are worrying. We're also worried about fake news websites, which are proliferating. They've existed for many years, but now, with just a very small amount of money, one can establish an entire digital newspaper, populate it with AI-created articles that are rewrites of existing articles interspersed with absolutely deceptive and fake content, and disguise it as a real paper. It's called pink slime. In the U.S., I just saw a study that 50% of the local news outlets and digital news outlets are now fake. So the risk that AI is going to infect our digital ecosystem with even more disinformation, misinformation, and fake information is so high that we're really sounding the alarm, and I think governments are also alarmed by it. We can't wait, like we did with social media, until it's too late.
The AI actors need to build in accountability from the start and safety by design. And governments are going to have to figure out how to regulate this.

Last March, the U.N. General Assembly adopted a landmark resolution on artificial intelligence on the promotion of safe, secure, and trustworthy artificial intelligence systems, but it doesn’t include an enforcement mechanism. How do you think the international community can work together to ensure that AI tools are used responsibly?

The Secretary-General has since established an AI advisory board, with very prominent AI experts from around the world, who are advising him on just this: how to create the kind of global governance that is needed to protect the world from the dark side of AI. These recommendations are coming soon… I think most people are clear that, just like with social media, you can't regulate or address AI in one country. It doesn't know borders, and so there needs to be some kind of global system. We have existing organizations like the ITU, which is already very active, particularly in promoting AI for good, while at the same time understanding where the dangers are and making recommendations; they've been at it for many years already. Also, UNESCO has put out a document on the ethics of AI. I know that the Global Digital Compact that is going to be part of the Summit of the Future also has a big section on AI. And then, the U.N. tends to sound like a lot of bureaucracy, but what my department has been leading is the production of the U.N. Global Principles for Information Integrity. These are being launched on the 24th of June, and they are also, in part, going to make strong recommendations to AI actors and to states on how to address the disinformation that could result from AI applications.

What would you like to see the AI developers and big tech companies do to align with the U.N.’s calls for the responsible use of AI?

We believe that AI actors have a responsibility. They're developing this technology. Many of them are actually saying this could be dangerous, and they're actually calling for regulation. But the onus is also on them. They're kind of in a race against each other, but if they're going to be rolling out a technology, it's their responsibility to ensure that it is safe for society and humanity.

Do you think the U.N. could play a role in bringing them, the big tech, together?

We can convene. And we do convene. And I think the beauty is we convene not only governments; we can convene civil society together with governments, and I think that's kind of a rare capability. And we can do it globally. We can bring together private sector actors with governments, with civil society, with scientists, with those who are concerned. We have the convening power, we have the advisory power, and it depends on what is established. At the moment there isn't a bespoke body. Some have said it should be like the IAEA, where I used to work, which is the body that ensures that nuclear technology is only used for peaceful purposes. Others say AI is a bunch of private actors, and you can develop it in a basement, so it's different. So, again, what is the best way to make sure that a technology that is going to have an impact on probably every single person on this earth is developed in a way that benefits us all and does not hurt us?

RECOMMENDATIONS FOR STAKEHOLDERS

  • TECHNOLOGY COMPANIES
  • ARTIFICIAL INTELLIGENCE (AI) ACTORS
  • NEWS MEDIA
  • RESEARCHERS AND CIVIL SOCIETY ORGANIZATIONS
  • STATES AND POLITICAL ACTORS
  • UNITED NATIONS
  • Governments, tech companies, advertisers, media and other stakeholders should refrain from using, supporting or amplifying disinformation and hate speech for any purpose.
  • Governments should provide timely access to information, guarantee a free, viable, independent, and plural media landscape and ensure strong protections for journalists, researchers and civil society.
  • Tech companies should ensure safety and privacy by design in all products, alongside consistent application of policies and resources across countries and languages, with particular attention to the needs of those groups often targeted online. They should elevate crisis response and take measures to support information integrity around elections.
  • All stakeholders involved in the development of AI technologies should take urgent, immediate, inclusive and transparent measures to ensure that all AI applications are designed, deployed and used safely, securely, responsibly and ethically, and uphold human rights.
  • Tech companies should scope business models that do not rely on programmatic advertising and do not prioritize engagement above human rights, privacy, and safety, allowing users greater choice and control over their online experience and personal data.
  • Advertisers should demand transparency in digital advertising processes from the tech sector to help ensure that ad budgets do not inadvertently fund disinformation or hate or undermine human rights.
  • Tech companies and AI developers should ensure meaningful transparency and allow researchers and academics access to data while respecting user privacy, commission publicly available independent audits and co-develop industry accountability frameworks.
  • Governments, tech companies, AI developers and advertisers should take special measures to protect and empower children, with governments providing resources for parents, guardians and educators.

Moving on to your extensive tenure at the U.N., particularly your long-standing role with UNHCR, you have spent much of your time in the field. Could you share some of the experiences that have shaped your personal and professional journey?

One of the most heart-wrenching things is to sit and listen to the stories of refugees, particularly those who have just fled and haven't had time to reestablish their lives, who have just had to leave everything behind. I always asked the same question when I sat with refugee families and individuals: what did you take when you had to flee your home, when you probably didn't have much time? And I remember one young man, probably 19 years old at the time, in a muddy refugee camp in the Beqaa Valley in Lebanon. His name was Hani. We were sitting on pillows in his tent, drinking tea, and when I asked him the question he got up, went to the back of the tent, and came back with a document that was covered in silk. He held up the document, uncovered it, showed it to me and said, 'I took my high school diploma.' And he said, 'I took my high school diploma because my life depended on it. Without education, I am nothing.' It was just such a beautiful thing that I used his story in a TED talk I gave later, which got over a million views. He was partly blind and was struggling, but he had so much positive spirit and he had this goal: no matter what, he was going to go to university. But the odds of him going to university, with almost a million refugees like him in Lebanon and almost no access to the universities, were very low. The story inspired some Canadian government officials, though, and, long story short, he and his family were resettled to Canada. He ended up becoming the youth advisor to Justin Trudeau and getting his university degree, demonstrating the resilience of the human spirit. And I think that's what I saw in refugees: that resilience, and it always gave me so much strength.
And the second story was the story of Doaa, which inspired the book I wrote about her, a Syrian teenage refugee, also 19 years old, who survived one of the worst shipwrecks on the Mediterranean Sea, in which 500 fellow refugees drowned, including the love of her life, her fiancé. When she was rescued, after four days and four nights on the water, she had two babies on her chest who were not her own. One of them pulled through, and so did she. The book is called 'A Hope More Powerful Than the Sea.' And again, it tells the story of the Syrian war: how awful it was to be a girl growing up in this horrific situation, where you go from peace to war almost immediately; what it's like to flee to another country; what it's like to feel that despair and limbo and to keep having hope, and to take one of those awful boats even though you know how dangerous it is and you can't swim; what it's like to lose everybody, to be in the water when everyone is drowning around you, and to still survive because you want to save a baby. It was inspiring, but also, as a communications professional, these were the stories that I sought out, because I knew from studying behavioral science and social science that people do not respond to numbers. There's a saying: statistics are human beings with the tears dried off. Another social scientist, Paul Slovic, who studies the arithmetic of compassion, has found that the more who die, the less we care, and that the way to get people to care is not by throwing numbers at them; the way to get people's hearts is through individual stories… So I started deploying that in my communication strategy at the U.N.

What inspired you to join the U.N. and what advice would you give to young professionals aspiring to work in global communications or with international organizations like the U.N.?

The U.N. is working for the greater good of humankind, and there's no more inspiring profession than to wake up every day and know that what you're doing is in the service of humanity and that you're making a small contribution to making the world better. It's tough work, tough work, because you're exposed to so much misery, so much suffering, so much pain, but you're also exposed to so many ways that the U.N. is lifting people up and helping people not just to survive but to thrive.

You have been working with the Secretary-General for so long. What three words would you use to describe him?