Tech companies must stand firm: Global lessons for South Africa’s upcoming election

28th May 2024

Artificial intelligence (AI) and technology have revolutionised many aspects of our lives, and elections are no exception. These tools can enhance the democratic process by improving access to information, streamlining voter registration and vote counting, and facilitating voter engagement, thereby securing greater protection of fundamental civil and political rights. However, the potential for misuse is equally significant, raising concerns about the integrity of electoral processes worldwide.

Generative AI is developing rapidly. We live in a world where AI can create text, images, and videos in seconds in response to prompts, heightening fears that this new technology could be used to sway major elections. With more than half of the world's population heading to the polls this year, a group of about 20 tech companies announced in February 2024 that they have agreed to work together to prevent deceptive AI content from interfering with elections globally. Companies committed to this endeavour include those building the generative AI models used to create content, such as OpenAI, Microsoft, and Adobe. Others include social media platforms facing the challenge of keeping harmful content off their sites, such as Meta Platforms, TikTok, and X (formerly known as Twitter).

Tech Companies, Government, and Fundamental Human Rights

The abovementioned commitment comes as no surprise. The world is no stranger to exploitative abuses of technology in national elections. Cambridge Analytica and the 2016 U.S. Presidential Election come to mind. Cambridge Analytica's use of data mining and AI algorithms to target voters with personalised political ads demonstrated the power of technology in shaping electoral outcomes. The scandal also highlighted issues of data privacy and the potential for manipulation, as voters were targeted based on their psychological profiles without their explicit consent.

In various elections, including the 2016 U.S. Presidential election and the Brexit referendum, social media platforms like Facebook and Twitter were used to spread disinformation and fake news. AI-powered bots amplified divisive content, undermining public trust in the electoral process.

In January this year, voters in New Hampshire, USA, received a robocall purportedly from President Joe Biden urging Democrats to stay home and not vote. The recorded message said, "You must save your vote for the November election… voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again." The White House confirmed that the call was not recorded by President Biden, highlighting the challenges presented by emerging technologies in the run-up to the November presidential elections.

Quite aside from the misuse of digital platforms and technology to spread fake news, tech companies regularly face pressure from state powers, often being coerced to compromise on user privacy and freedom of expression. In India and the US, for example, Apple has faced significant pressure to install backdoors in its devices, which would allow government access to user data. Apple's steadfast refusal highlighted the company's commitment to user privacy and security. This stance is crucial, as compromising encryption standards can lead to widespread privacy breaches and loss of consumer trust, not just in the context of elections.

Similarly, X faced multiple takedown orders from the Indian government during India's national elections this year to remove specific posts containing political speech from elected politicians, political parties, and candidates for office. While X complied with the takedown orders, it did so under protest, maintaining that freedom of expression should be extended to these posts and political speech in general.

Cyber-attacks on electoral infrastructure have also become a growing concern. For instance, during the 2017 French presidential election, Emmanuel Macron's campaign suffered a significant data breach. Such incidents expose vulnerabilities in digital electoral systems and raise questions about the security of election-related data.

Business and Human Rights

The abovementioned risks posed by rapidly developing technology are sobering. The intersection of business operations and human rights is critical, particularly for tech companies. These companies are not just commercial entities but also guardians of digital rights and the exercise of other rights online. Upholding principles of privacy, freedom of expression, and non-discrimination aligns with international human rights standards. By adopting a business and human rights policy framework, tech firms can proactively ensure they are not complicit in state actions that undermine these rights, reinforcing their role in fostering a just and equitable society.

As South Africans head to the polls, the lessons learned from global experiences with AI and technology in elections are more relevant than ever. It is our collective responsibility as corporate citizens to ensure the sanctity of our electoral process.

Even at this stage in the process, and to mitigate the risks posed by abuses of technology and AI, stakeholders must ensure robust cybersecurity protocols to protect electoral systems from cyber-attacks; conduct regular security audits and vulnerability assessments to identify and address potential weaknesses; and ensure that all digital systems involved in the election process are regularly updated with the latest security patches and safeguards. Stakeholders must also ensure strict compliance with data protection laws, such as the Protection of Personal Information Act (POPIA), and implement measures to safeguard voter data from unauthorised access and misuse. Equally important is the protection of digital rights, including the right to privacy and freedom of expression, and the prevention of incitement, hate speech, and disinformation.

South Africa's legal framework, underpinned by a strong Bill of Rights and a progressive Constitution, provides a solid foundation for addressing the challenges posed by AI and technology in elections. The country's commitment to human rights and democratic principles necessitates a proactive approach to safeguarding electoral integrity and the exercise of civil and political rights.

South Africa's tech industry plays a pivotal role in its democratic process, providing platforms for communication, information dissemination, and civic engagement. Tech companies operating in South Africa must prioritise user privacy, enhance cybersecurity measures, resist undue pressure from state actors, and ensure that their platforms are not used to spread disinformation or manipulate voters.

By doing so, they can help protect the democratic process and uphold the principles of a free and fair election, reinforcing the trust of the South African public in their electoral system.

Written by Pooja Dela & Paula-Ann Novotny, Partners at Webber Wentzel