The combined power of AI and WhatsApp could wreak havoc at the next election
The speed with which AI can generate deepfakes, combined with WhatsApp’s power of dissemination, could create a new battleground at the next election, writes Giles Kenningham
The next general election is fast approaching but we already know the main messages on which the election will be fought. Do the British public want more of the same, or do they want change? The “stick or twist” question has characterised many elections – 1992, 1997, and 2015 – and with varying results.
Historically, the key battleground for any election has been the Westminster village, the TV studios, and the editing rooms of newspapers. Party leaders make promises over lecterns, into microphones, and debate policy across benches. In recent years, we’ve also seen the proliferation of digital campaigning, with micro-targeted ads flooding our social media newsfeeds.
Digital campaigning has been controversial, with concerns around data privacy, targeted ads based on personal information, and misinformation. However, what this kind of digital campaigning has in common with the old-fashioned set piece media events is that it is still possible to study and track campaign messages. Technology companies have (to varying degrees) limited the political content that can be promoted and mandated transparency about who is paying for which ad. This is about to change. At the next general election, the battleground will be secret, taking place on WhatsApp, Telegram and Signal.
Each of these platforms opens up a new world of text, image, video and voice communication to political parties and, more worryingly, their outriders. What is more, they are effective. How many advertisements do you remember seeing today? Probably a few? How many unread emails do you have? Thousands? How many unread messages do you have on WhatsApp? None?
It is not just the efficacy of these platforms that makes them attractive during an election, it’s also their relative anonymity. All content exchanged is protected with end-to-end encryption, ensuring that only the intended recipient can read a message. There is no moderator fact-checking political messages, nor any record of who is paying to spread which message. For political parties it also makes rebutting falsehoods almost impossible. A slur could be circulating for days, reaching thousands of voters before you get wind of it.
Alarmingly, there are already multiple examples of messaging platforms facilitating disinformation campaigns around the world. The Brazilian presidential election of October 2018 stands out as a notable example. A study conducted by the fact-checking platform Agência Lupa found that 56 per cent of the 50 most widely shared images in Brazilian political group chats during the election campaign were misleading. Many were considered completely false or used out of context.
These platforms also enable incredible speed, with the ability to ‘forward’ a message to other individuals or group chats. Following multiple deaths in India as a result of misinformation spread on WhatsApp networks, the platform made attempts to moderate this feature, introducing ‘forwarded’ labels and, in 2019, limiting the number of chats a message can be forwarded to at once to five. But each group can include up to 256 people, so a single round of forwarding can still reach nearly 1,300 individuals.
Generative AI makes it easier than ever to create misinformation – whether that is text, images or video. Pressure groups like Just Stop Oil will be able to leverage this technology to push out content at record speed and at minimal cost. And be in no doubt: Russia is in all likelihood already pushing out hundreds of deepfake videos.
All this might suggest why a mighty row is brewing about the impact of the Online Safety Bill on end-to-end encryption and the ability of these platforms to continue to operate in the UK. Bosses of the messaging services WhatsApp, Signal and Element have warned that the Bill “opens the door for mass surveillance” by government and that the end of end-to-end encryption “will be exploited by hackers, hostile nation states, and those wishing to do harm”. Given the history of these platforms’ use in elections around the globe, you can imagine the argument being met with rolled eyeballs in Westminster.
Instead of dreaming up ways of using these technologies and platforms to their partisan advantage, politicians should use their powers to limit all the ways they can be exploited to cause harm.