Can AI bring peace to the Middle East?
There has been a surge of interest in AI as a tool to help end wars. But the assumptions baked into these tools risk repeating the mistakes of the past.
Scroll through the Google results for “artificial intelligence” and “peace”, and you’ll find some very contradictory suggestions. “Doomers” warn that AI has the potential to end life on earth as we know it. Meanwhile techno-optimist “boomers” claim AI could help to solve everything from loneliness to climate change to civil wars.
Polarised views on AI are nothing new, but interest in it as a tool for creating “peace” has risen since 2022, when OpenAI, one of the leading companies in the sector, launched its chatbot ChatGPT. A growing number of tech companies now say they’ve developed AI technologies that will help end wars.
But what are these tools? How do they work? And what are the risks when they are applied to deadly conflict?
What kinds of AI are being used in peace processes?
“Artificial intelligence” refers to an array of technologies that solve problems, make decisions, and “learn” in ways that would usually require human intelligence. Some, however, argue AI is “neither artificial nor intelligent,” because it requires vast amounts of human labour and natural resources.
Still, “AI” is the term used to describe many tools that “PeaceTech” companies are developing to address violent conflict. Dedicated AI funds have been established, and the UN is actively promoting AI as a tool to support innovation.
How are AI tools being used to resolve conflicts?
Some AI tools have been built to respond to specific challenges faced by peace negotiators, such as how to gather information about public perspectives. Others serve several purposes, including recommending policies and making predictions about how people could behave.
Improving information access
In Libya and Yemen, the UN has used natural language processing (NLP) tools to help more people share their views on politics. Large language models (LLMs) were used to analyse data collected when people were asked to share opinions and pose questions online. The aim was to identify agreement and disagreement across diverse groups and make peace processes more transparent and “inclusive” – which experts believe helps prevent wars.
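The UN has not published the code behind these exercises, but the general technique, grouping thousands of free-text responses so that mediators can see where views converge and diverge, can be sketched with off-the-shelf tools. In the illustration below, the model name, sample responses and cluster count are all assumptions rather than details of the UN's systems: responses are embedded as vectors, clustered by topic, and each cluster is scored for how closely its members agree.

```python
# Illustrative sketch only; the UN has not published its consultation tooling.
# Embed free-text responses, cluster them by topic, then score each cluster
# for internal agreement. Model name, responses and cluster count are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

responses = [
    "Local councils should control security in our district",
    "Security must stay with the national government",
    "We need schools rebuilt before anything else",
    "Reopening schools should be the first priority",
]

model = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose embedder
embeddings = model.encode(responses)              # one vector per response

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

# Within a topic cluster, high average similarity suggests consensus;
# low similarity flags disagreement that mediators may want to examine.
for label in sorted(set(clusters)):
    members = [i for i, c in enumerate(clusters) if c == label]
    sims = cosine_similarity(embeddings[members])
    print(f"Topic {label}: {len(members)} responses, mean similarity {sims.mean():.2f}")
```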
Another example is a tool called Akord.ai, developed by an NGO called Conflict Dynamics International (CDI). This LLM chatbot was trained on 1,500 documents about Sudan, which is in the grip of another brutal civil war, with a particular focus on past peace agreements.
Azza M Ahmed, a senior advisor on CDI’s Sudan Program, explained that the tool is designed to help young people who want to contribute to Sudan’s peacebuilding, but who don’t know about past processes or can’t access practical guidance on negotiations.
“In negotiations there is concentration of knowledge and expertise in the hands of the few,” said Tarig Hilal, AI innovation lead at Akord.ai. “So Akord.ai is like an advisor, a co-pilot, a friend.”
Negotiators said tools like these, built to address specific barriers to involvement in peacebuilding, can be useful.
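Akord.ai has not set out its architecture in detail, and “trained on 1,500 documents” could mean several things. Chatbots grounded in a fixed document collection are often built with retrieval-augmented generation, in which the system first finds the passages most relevant to a question and then asks an LLM to answer using only those passages. A minimal sketch of that generic pattern (the documents, prompt wording and helper names here are invented, not Akord.ai's) might look like this:

```python
# Generic retrieval-augmented chatbot sketch, not Akord.ai's published design.
# Relevant passages are retrieved for a question and wrapped in a prompt for
# whichever LLM powers the chatbot. Documents and prompt wording are invented.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "juba_2020_security.txt": "The Juba Agreement set out security arrangements ...",
    "cpa_2005_wealth.txt": "The Comprehensive Peace Agreement allocated oil revenues ...",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_names = list(corpus)
doc_vectors = model.encode([corpus[name] for name in doc_names])

def build_prompt(question: str, top_k: int = 2) -> str:
    """Find the passages most similar to the question and wrap them in a prompt."""
    question_vector = model.encode([question])
    scores = cosine_similarity(question_vector, doc_vectors)[0]
    best = sorted(range(len(doc_names)), key=lambda i: scores[i], reverse=True)[:top_k]
    context = "\n\n".join(f"[{doc_names[i]}]\n{corpus[doc_names[i]]}" for i in best)
    return (
        "Answer using only the excerpts below, and cite the source file.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

# The assembled prompt would then be sent to the LLM; here we just print it.
print(build_prompt("How have past agreements handled security arrangements?"))
```

Whatever the underlying design, retrieval keeps answers tied to named sources, which is one reason such systems can cite past agreements. It also means the worldview of the underlying corpus is carried straight through to the answers.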
Prescriptive technologies
Some tools predict and prescribe solutions for conflicts. These tools, experts argue, should be more carefully scrutinised because the information they generate will reflect biases baked into their training data and algorithms.
Akord.ai’s chatbot also claims to help peacemakers “develop political processes and governance options”. The platform itself looks quite a bit like ChatGPT – a box where you type a question or request, followed by the LLM’s response – although unlike OpenAI, Akord.ai has made its training data public.
Its recommendations clearly reflect the worldview of its creator. CDI promotes a method of resolving conflict called “political accommodation”, based on power sharing and compromise. It has its supporters, but also critics. Some argue that accommodating actors with no genuine interest in sharing power has driven Sudan’s present conflict, which has killed untold numbers of civilians and displaced 11 million people.
“Groups fighting their way to the table to be ‘accommodated’ ... is part of what led to the conflict,” said Jonas Horner, a visiting fellow at the European Council on Foreign Relations who has worked on past Sudanese peace negotiations.
What’s more, a chatbot that learns from past peace agreements tends to recommend failed approaches, and provides only very shallow responses to questions about what stopped them succeeding. “This is not a technical set of issues,” Horner added. “This is anthropological, social … pure power calculations.”
Akord.ai is aware of these risks and told TBIJ it wants to expand its training data and get feedback from Sudanese users. Hilal emphasised that Akord.ai is not meant to replace politics. “It’s a fantastic tool, but it’s just a tool,” he said. “It’s not meant to be something you depend upon entirely.”
For now, Akord.ai is just being used to help reduce barriers to information. But if chatbots are used to help design peace agreements, poor or biased data could have serious consequences. “LLMs spit out patterns of text that we’ve trained them on, but they also make stuff up,” said Timnit Gebru, founder and executive director of the Distributed AI Research Institute.
Using information from past agreements to inform future ones may also limit creative problem-solving, leading peacebuilders to design solutions that work well on paper, but fail politically.
Gebru notes that people tend to trust automated tools – a phenomenon called “automation bias”. “Studies show people trust these systems too much and will make very consequential decisions based on them,” she said.
LLMs are also being used to inform the timing of peace deals. Project Didi, an Israel-based startup, is developing tools to identify “moments of ripeness” – periods where deals may seem more acceptable, even if terms do not substantially change.
Project Didi began as an LLM trained on the language used in the years that led up to the Good Friday Agreement, which ended most of the active fighting in Northern Ireland. Project Didi’s CEO and founder Shawn Guttman said the model showed the timing of an agreement can matter more than its content.
Guttman and his colleagues are adapting the model for Israel’s war on Gaza. Didi scrapes data from Israeli and Palestinian news sources and applies machine learning that claims to detect shifts in popular sentiment about peace.
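Didi’s models are not public, so any reconstruction is speculative. But the basic idea, scoring a stream of news text for sentiment and watching for sustained shifts over time, can be illustrated with standard tools. In this sketch the headlines, dates, window size and classifier are all stand-ins, and the default classifier is English-only, whereas Didi works with Hebrew and Arabic sources.

```python
# Rough illustration only; Project Didi has not published its models.
# Score news headlines with an off-the-shelf sentiment classifier and watch a
# rolling average for sustained shifts. Headlines, dates and the window size
# are invented, and the default classifier is English-only.
from statistics import mean
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

articles = [
    ("2024-01-10", "Officials dismiss any prospect of a negotiated settlement"),
    ("2024-02-02", "Protesters call for a deal to bring everyone home"),
    ("2024-02-20", "Poll finds growing support for an agreement to end the fighting"),
]

def signed_score(text: str) -> float:
    """Positive label -> +probability, negative label -> -probability."""
    result = classifier(text)[0]
    return result["score"] if result["label"] == "POSITIVE" else -result["score"]

scores = [(date, signed_score(headline)) for date, headline in articles]

# On Didi's theory, a sustained rise would be read as a population becoming
# more receptive to a deal; a sustained fall as the opposite.
window = 2
for i in range(window - 1, len(scores)):
    rolling = mean(score for _, score in scores[i - window + 1 : i + 1])
    print(f"{scores[i][0]}: rolling sentiment {rolling:+.2f}")
```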
According to the theory underpinning Didi’s model, confidence in winning has to drop on both sides, and people have to see a way out of the fighting at similar times. But the Palestinian model is not yet being used. LLMs are better in Hebrew than Arabic, Guttman said, and Didi has not been able to gather as much data from Palestinian media.
“There is less of a robust media presence there,” Guttman said. Israel’s war on Gaza has killed more Palestinian journalists and media workers than any modern conflict, according to data from the Committee to Protect Journalists.
Like political accommodation, ripeness has its critics. Some experts say that “moments of ripeness” tend to come when local people are exhausted, leading to bad peace deals that don’t address the root causes of violence. And both sides have to be exhausted at the same time, which is rare.
Guttman said he sees Didi as more proactive – generating moments of ripeness by providing peace activists with better information about whether they are shaping “hearts and minds”.
Predictive technologies
The scientific evidence that LLMs can interpret human emotions is weak. They are better at parsing unambiguous programming languages than the often vague language humans use.
But this hasn’t stopped tech companies from trying. The founders of a company called CulturePulse claim the models it uses can demonstrate “causal links” between people’s personalities and emotions, and broader social events like conflict. One co-founder has claimed its model is 95% accurate in “managing future outcomes by predicting human behaviour”.
Last year, Wired reported that the UN Development Programme (UNDP) contracted CulturePulse to model Israeli and Palestinian individuals and societies to create a “digital laboratory” where they could game out different political interventions.
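CulturePulse’s methods are proprietary, so the sketch below is not its model. In general terms, a “digital laboratory” of this kind is an agent-based simulation: software agents are given attitudes, they influence one another according to simple rules, and analysts apply a hypothetical intervention to see how its effects spread. All the parameters here are invented for illustration.

```python
# A generic agent-based "digital laboratory" sketch; CulturePulse's own model
# is proprietary and not reproduced here. Agents hold an attitude between -1
# (opposed to a deal) and +1 (in favour), drift towards those they interact
# with, and an invented "intervention" shifts a subset to see how far it spreads.
import random

random.seed(0)

class Agent:
    def __init__(self, attitude: float):
        self.attitude = attitude

def interact(agents: list[Agent], influence: float = 0.05) -> None:
    """One round in which each agent moves slightly towards a random other agent."""
    for agent in agents:
        other = random.choice(agents)
        agent.attitude += influence * (other.attitude - agent.attitude)

def average_attitude(agents: list[Agent]) -> float:
    return sum(a.attitude for a in agents) / len(agents)

population = [Agent(random.uniform(-1, 1)) for _ in range(500)]
print(f"Before intervention: {average_attitude(population):+.2f}")

# Hypothetical intervention: nudge 10% of agents towards supporting a deal.
for agent in random.sample(population, 50):
    agent.attitude = min(1.0, agent.attitude + 0.5)

for _ in range(20):
    interact(population)

print(f"After intervention and 20 rounds: {average_attitude(population):+.2f}")
```

Whether such simulations say anything reliable about real societies depends entirely on how well the agents’ rules reflect how people actually behave, which is precisely what critics dispute.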
There is no record of the project, which is referred to as the Palestine-Israel Virtual Outlook Tool, on the UNDP’s transparency portal.
A UNDP spokesperson told TBIJ that they “engaged CulturePulse for a set of services to the value of $113,000 for a contract which ended in December 2023,” but denied using CulturePulse technology. They claimed the contract was for an “exploratory exercise to create a model analyzing different variables and factors influencing conflict and cooperation in the Palestinian-Israeli context” but that the analysis didn’t progress because of the war.
AI providers are quick to say that their tools are not silver bullets and can get things wrong. But their products (and they are products – most are also used for marketing purposes) tap into the enthusiasm for AI to deliver simple, seemingly scientific solutions to political problems.
Gebru said LLMs and NLP algorithms cannot provide insight into what people think and feel. Her claims are supported by a growing body of scholarly work.
“It’s pseudoscience,” she said. “Even if you could accurately detect markers of emotion, that doesn’t translate into being able to detect someone’s internal emotional state. And even if these models could do that, they would be extremely unethical.”
CulturePulse didn’t reply to repeated requests to discuss its products.
Risks and opportunities
Political problems can involve technical challenges where AI tools could help. But war is fundamentally a problem of power and politics.
Experts told us tools that improve access to information and make peace processes more inclusive, like Akord.ai, could be useful. Media analysis tools like Didi could provide some insight into communication strategies and the timing of peace talks.
However, this will only work if AI tools are developed in a transparent and ethical manner to supplement the human, political work of ending wars.
Computer scientists and conflict negotiators also worry about the assumptions baked into these models, the limitations of LLM technology, and the risks of retreading failed paths to peace. These risks are especially pronounced with predictive technologies, or ones that recommend solutions. With 56 armed conflicts happening around the globe, the most since the second world war, the stakes couldn’t be higher.
Reporter: Claire Wilmot
Tech editor: Jasper Jackson
Deputy editor: Katie Mark
Editor: Franz Wild
Production editor: Frankie Goodway
Fact checker: Lucy Nash
TBIJ has a number of funders, a full list of which can be found here. None of our funders have any influence over editorial decisions or output.