The war in Ukraine is on everyone’s mind. Many people are trying to help the Ukrainian population, even from afar. On the front line, meanwhile, countless humanitarian organisations are working to provide the best possible support to the threatened civilian population. Unfortunately, it is precisely these organisations that often become targets of technological attacks during wars and conflicts – through viruses, spyware, disinformation and so-called “bot farms”, i.e. networks of fake accounts that spread misleading information.
However, humanitarian organisations often lack the means and knowledge to protect themselves digitally. This is quickly becoming a problem, especially in the face of increasingly “virtual” wars and new technologies that are fundamentally changing modern warfare. But what tools and technologies are available to humanitarian organisations to arm themselves against digital attacks and threats?
Artificial Intelligence: Opportunity or New Challenge in Humanitarian Aid?
Artificial intelligence (AI) can help humanitarian organisations gather information and identify patterns in large, multi-layered data sets that would otherwise be difficult for humans to access. This makes it possible to analyse and predict certain tendencies and trends so that organisations can prepare better. In addition, AI can be used to automate many processes digitally – allowing organisations to keep better track of resources and increase the efficiency of their work.
There are already some projects applying AI in the non-profit and humanitarian sectors, but many of them are still in the development phase. For example, the UN Refugee Agency’s (UNHCR) “Project Jetson” uses machine learning to make predictions about the routes of people displaced from their homes. Save the Children has a similar programme that uses a variety of datasets to predict the time period and extent of displacement caused by conflict. Meanwhile, the World Food Programme (WFP) is engaging in direct exchanges with people affected by crisis and conflict through a chatbot and similar mobile technologies. This virtual dialogue aims to find out in which regions the need for food is high and, above all, whether the food supply will remain secure in the future.
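To make the forecasting idea behind such projects concrete, here is a deliberately minimal sketch in Python: it fits a straight-line trend to a series of monthly displacement counts and extrapolates one month ahead. Every number below is invented for illustration, and real systems such as Project Jetson combine far more data sources and far more sophisticated models than a simple linear trend.

```python
# Minimal trend-based forecast sketch (hypothetical data, stdlib only).

def linear_forecast(counts):
    """Fit a least-squares line to a series and predict the next value."""
    n = len(counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(counts) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, counts))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept + slope * n  # extrapolate one step ahead

# Hypothetical monthly arrivals at a border crossing
monthly_arrivals = [1200, 1450, 1800, 2100, 2600]
print(round(linear_forecast(monthly_arrivals)))  # → 2865
```

Even a toy model like this shows why such tools appeal to aid planners: a rough, early estimate of next month’s arrivals is enough to pre-position food, shelter and staff.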
But as with any other technology, the risks and challenges must be considered. AI systems are “fed” data in order to make predictions or forecasts. If that data is biased, the output of the AI can be biased, even discriminatory – a criticism often levelled at voice-control software and intelligent ‘home assistants’. As always when data is involved, data security and privacy must be scrutinised very carefully for technologies designed to facilitate humanitarian assistance. Security vulnerabilities in contexts of war and conflict can exacerbate the risk situation for humanitarian organisations.
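The effect of biased training data can be shown with a toy example in Python. A naive “model” that simply memorises the most common label in a skewed training set works well for the overrepresented group and fails completely for the underrepresented one – a crude stand-in for, say, a voice assistant trained mostly on speakers of one accent. All data here is made up purely for illustration.

```python
# Toy illustration of data bias (all data hypothetical, stdlib only).
from collections import Counter

def train_majority(labels):
    """'Train' by memorising the most common label in the data."""
    return Counter(labels).most_common(1)[0][0]

def error_rate(model, labels):
    """Fraction of samples the memorised label gets wrong."""
    return sum(1 for y in labels if y != model) / len(labels)

# Training set skewed 90:10 towards group A
training = ["a"] * 90 + ["b"] * 10
model = train_majority(training)      # learns "a"

print(error_rate(model, ["a"] * 50))  # group A: 0.0 – perfect
print(error_rate(model, ["b"] * 50))  # group B: 1.0 – always wrong
```

The averaged error over a mixed test set would look acceptable, which is exactly why such bias is easy to miss until the underrepresented group is evaluated separately.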
Media Literacy as the Most Important Weapon Against Disinformation
Algorithms that manipulate media and publish targeted disinformation are becoming more widespread and sophisticated. There is already talk of an “information crisis.” So-called ‘deepfakes’ are just one example of how easy it already is to present false information as indisputably authentic. Videos have recently circulated on the Internet showing a deceptively realistic likeness of Ukrainian President Zelensky announcing Ukraine’s surrender in a speech. Similarly fabricated videos have also surfaced of the Russian president.
Targeted disinformation campaigns can influence the perception of humanitarian aid organisations among the population, as well as internationally. This can destroy the necessary trust between the people on the ground and the organisation, or make it more difficult to access the people who really need help in crisis situations. False or misleading information about where to find help and support is also dangerous and can lead to situations being misjudged.
That’s why it’s important for humanitarian actors to rely on sound information and to communicate transparently and publicly themselves. The Massachusetts Institute of Technology (MIT) now offers a first-of-its-kind media literacy course that addresses misinformation on many different levels – from public legislation to technological innovation.
Essential: Humanitarian Cybersecurity Strategies
Attacks on online data and information are becoming commonplace in Ukraine at the moment. Digital surveillance technology has become a surprisingly easy and inexpensive way to tap into general information – such as location data. Advanced technology also makes it possible to obtain more sensitive information without much effort.
International humanitarian organisations and NGOs are increasingly falling victim to such cyber attacks. On the one hand, their security networks are often insufficiently developed, making them easy targets. Equally, however, the attacks can also serve political purposes, as was the case in 2021 with a suspected Russian-backed hacking attack that compromised an email account of the U.S. government aid agency USAID. What exactly the hackers’ goal was remains unclear, but it is certain that sensitive, protected data can be viewed, modified or even completely destroyed in this way.
In order to continue to effectively assist people in crisis situations, humanitarian organisations must arm themselves against these attacks. A first step could be to develop a humanitarian cyber security strategy. However, organisations often lack the financial support needed to do so. The Swiss-based CyberPeace Institute is currently working on a cyber incident tracer that will make it possible to monitor digital attacks and their impact on humanitarian organisations. This data can then inform defences against possible future attacks. The Tactical Tech team likewise takes a holistic approach, advising human rights organisations on how to protect their well-being in operations – including digital security and information protection.
Cybersecurity Needs Targeted Funding
Time will tell how effective such initiatives are in the ongoing Ukraine war. In the meantime, however, the top priority should continue to be protecting innocent and uninvolved civilians in this war, and thus also the humanitarian organisations that are working on their behalf, sometimes at the risk of their own lives. So what to do about cyber attacks and disinformation? First, humanitarian organisations, most of which are non-profit, need more financial support. In addition, humanitarian workers need targeted training in cyber security to understand what the threats are and how to counter them.
Ultimately, the tech sector itself can also take action. Companies that focus on cloud-based innovations, for example, can offer specialised products for the humanitarian sector. There are already initial forays in this area; Microsoft, for example, offers AccountGuard for nonprofits, and Cloudflare’s Project ‘Galileo’ provides free cybersecurity services for organisations working on human rights, among other things.