The reports include additional details on Kigali’s efforts to silence critics.
As a political scientist with a background studying digital disinformation and African politics, I work with the Media Forensics Hub, which monitors the internet for evidence of coordinated influence operations. Following the publication of Rwanda Classified, we identified at least 464 accounts that flooded online discussions of the report with content supportive of the Paul Kagame regime.
Many of the accounts we tied to this network had been active on X/Twitter since January 2024. During this period, the network produced over 650,000 messages.
Rwandans are due to vote on 15 July 2024. The presidential result is a foregone conclusion, due in no small part to the exclusion of opposition candidates, the harassment of journalists and the assassination of critics. In the country’s last election, in 2017, Kagame garnered over 98% of the vote.
Even though the outcome is inevitable, accounts in the network have been repurposed to promote Kagame’s candidacy online. The inauthentic posts will likely be used as evidence of the president’s popularity and the legitimacy of the election.
In both the response to Rwanda Classified and the pro-Kagame presidential campaign, we identified the use of AI tools to disrupt online discussions and promote government narratives. ChatGPT, a chatbot built on a large language model, was among the tools used.
The coordinated use of these tools is ominous. It’s a sign that the methods used to manipulate perceptions and maintain power are getting more sophisticated. Generative AI enables networks to produce a higher volume of more varied content than solely human-operated campaigns can.
In this instance, the consistency of posting patterns and content markers made it easier to detect the network. Future campaigns will likely refine these techniques, making it harder to detect inauthentic discussion.
African researchers, policymakers and citizens need to be aware of the potential challenges posed by the use of Generative AI in the production of regional propaganda.
Influence networks
Coordinated influence operations have become commonplace in African digital spaces. Though each network is distinct, they all aim to make inauthentic content look genuine.
These operations often “promote” material aligned with their interest and try to “demote” other discussions by flooding them with unrelated content. This appears to be the case with the network we identified.
In east Africa alone, social media platforms have removed networks of accounts created to appear legitimate that targeted Ugandan, Tanzanian and Ethiopian citizens with false and partisan political information.
Non-state actors, including several global PR firms, have also been linked to the operation of bots and websites in South Africa and Rwanda.
Most of these previous influence networks were identified through their use of “copy-pasta”: verbatim text taken from a central source and reposted across multiple accounts.
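The copy-pasta signal described above can be sketched in a few lines: group posts by their normalised text and flag any message that appears verbatim across several distinct accounts. This is an illustrative simplification, not the Media Forensics Hub’s actual pipeline; the account names and threshold are hypothetical.

```python
from collections import defaultdict

def find_copypasta(posts, min_accounts=3):
    """Flag any message posted verbatim (ignoring case and extra
    whitespace) by at least `min_accounts` distinct accounts."""
    clusters = defaultdict(set)
    for account, text in posts:
        key = " ".join(text.lower().split())  # normalise case/whitespace
        clusters[key].add(account)
    return {text: accounts for text, accounts in clusters.items()
            if len(accounts) >= min_accounts}

# Hypothetical sample data for illustration only.
posts = [
    ("acct_a", "Rwanda is thriving under strong leadership!"),
    ("acct_b", "Rwanda is thriving under strong leadership!"),
    ("acct_c", "Rwanda is  thriving under strong leadership!"),
    ("acct_d", "I went hiking this weekend."),
]
flagged = find_copypasta(posts)  # one cluster spanning three accounts
```

Real moderation systems layer many more signals on top of this, but the core idea is the same: identical text across supposedly independent accounts is a strong coordination signal.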
Unlike these prior campaigns, the pro-Kagame network we identified rarely copied text word for word. Instead, the associated accounts used ChatGPT to create content with similar but not identical topics and targets. The network then posted the content alongside a range of hashtags.
The campaign was sloppy, likely owing to the inexperience of the actors involved. Mistakes in the text-generating process allowed us to track associated accounts. In certain messages, for example, affiliated accounts included the instructions used to produce pro-Kagame propaganda.
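Leaked instructions of this kind can be surfaced with simple substring scanning. The article does not disclose the exact leaked text, so the marker phrases below are hypothetical placeholders illustrating the general technique.

```python
# Hypothetical marker phrases; real investigations would build these
# lists from observed leaks, not guesswork.
PROMPT_ARTIFACTS = [
    "as an ai language model",
    "write a tweet that",
    "here is a tweet",
]

def leaked_instructions(text):
    """Return any marker phrases found in a post."""
    lowered = text.lower()
    return [marker for marker in PROMPT_ARTIFACTS if marker in lowered]

hits = leaked_instructions(
    "Write a tweet that praises the government's record."
)
```

A single match like this is only a lead, not proof; investigators then pivot to the account’s posting history and its connections to other accounts.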
These messages were then used to flood legitimate discussions with either unrelated or pro-government content. This included information on Rwandans’ ties to sporting clubs, and direct attempts to discredit reporters and outlets involved in the Rwanda Classified investigations.
In recent weeks, several of the accounts in the coordinated network have promoted election-related hashtags, such as #ToraKagame2024 (tora means “vote”). As a result of the large number of posts generated by the network, interested readers are likely to encounter content that looks like uncritical support for the country and its leader.
AI and propaganda
The integration of AI tools into online campaigns has the potential to alter the reach and impact of propaganda for several reasons.
Scale and efficiency: AI tools enable the rapid production of large volumes of content. Producing that output without the tools would require greater resources, people and time.
Borderless reach: Techniques such as automated translation enable actors to influence discussions across borders. For example, the most common target of the inauthentic Rwandan network was the conflict in eastern Democratic Republic of Congo.
Network attribution: Certain patterns of behaviour can indicate coordination. Generative AI tools enable the seamless creation of subtle variations in the text which complicate attribution efforts.
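The attribution problem above can be made concrete with a toy similarity measure. Exact-match detection (as in copy-pasta hunting) scores identical posts at 1.0, but an AI-paraphrased variant of the same message scores far lower even though its meaning is unchanged. The sample sentences are hypothetical; word-level Jaccard similarity stands in for the more sophisticated metrics real systems use.

```python
def jaccard(a, b):
    """Word-level Jaccard similarity between two posts (0.0 to 1.0)."""
    words_a = set(a.lower().split())
    words_b = set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

# Hypothetical original and AI-paraphrased variant.
original   = "rwanda is safe and prospering under president kagame"
paraphrase = "under president kagame rwanda prospers in safety"

sim = jaccard(original, paraphrase)  # well below the 1.0 of a verbatim copy
```

Because generated variants keep the message while shedding the surface form, detection has to shift from matching text to matching behaviour: synchronised timing, shared hashtags and shared targets.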
What can be done
As the primary targets of most influence operations, citizens need to be prepared for the evolution of these efforts. Governments, NGOs and educators should consider expanding digital literacy programmes to aid in the management of digital threats.
Improved communication is also needed between operators of social media platforms, such as X/Twitter, and providers of large language model services, such as OpenAI. Both play a role in allowing influence networks to flourish. When inauthentic activities can be tied to specific actors, operators should consider temporary bans or outright expulsions.
Lastly, governments should aim to raise the costs of using AI tools improperly. Without real consequences, such as restrictions on foreign aid or targeted sanctions, self-interested actors will continue to experiment with increasingly powerful AI tools with impunity.
Written by Morgan Wack, Assistant Research Professor of Political Science in the Media Forensics Hub, Clemson University
This article is republished from The Conversation under a Creative Commons license. Read the original article.