We can’t wait for Big Tech to monitor deepfakes, it’s on us to decipher what is real
In a world of deepfakes, AI regulation can only go so far; we must also take responsibility to separate fact from fiction, writes Anna Moloney
Last week, the already noisy din of voices warning about the risks of AI was joined by London Mayor Sadiq Khan. For him, this was personal: a video purporting to show him saying he didn’t “give a flying s*** about the Remembrance weekend” and encouraging people to attend pro-Palestinian marches went viral.
As disturbing as the fake video of Khan was, it had nowhere near the impact of a similar incident during the Slovakian election last month. Two days before polls opened in a tight race, a fake audio clip appeared to show candidate Michal Simecka plotting to rig the election. He subsequently lost to Robert Fico, a populist who campaigned to withdraw support from Ukraine.
After an initial review, the Met Police said the deepfake of Khan was not a criminal offence. Meanwhile, though Simecka immediately flagged the audio as fake, a 48-hour moratorium ahead of voting made it all but impossible for the clip to be debunked effectively.
There has been no shortage of politicians and business leaders warning about the potential risks of AI, but this is not some far-off theoretical possibility – it’s happening now.
Almost every major social media platform already has some sort of policy on deepfakes, but, like many of their safety programmes, these are far from watertight. Both TikTok and YouTube tell creators to disclose when AI is used to make realistic content. Meta says AI use must be disclosed in political advertisements, and that any deepfake likely to “mislead an average person” into believing someone said something they didn’t will be removed. X/Twitter is more lax, saying it “may label posts containing misleading media” to provide context, but it does not guarantee removal where it is “unable to reliably determine if media have been altered or fabricated”.
To varying degrees, these policies all rest on an honour system whereby users label their own content as misleading. In the case of X, posts are innocent until proven guilty. The margin for error, it goes without saying, is vast.
Turn to government regulation and the rules on deepfakes are murky at best, non-existent at worst. The Online Safety Bill will criminalise the non-consensual sharing of manufactured pornographic content, but little else.
While the EU pushes forward with its AI Act and US President Joe Biden vows to create “the strongest set of actions” in the world on AI safety, the UK’s first minister for AI, Viscount Camrose, said last week that the government’s “pro-innovation approach” to AI meant there would be no UK law on AI “in the short term”. Lawyers are meanwhile left to work with a hodgepodge of defamation, data protection and intellectual property laws.
There is only so far legislation can go. Lawyers have said new rules in Texas and California – which prevent the distribution of deepfakes within 60 days of an election – are a bit of a hack job. According to Oliver Lock, an IP lawyer at Farrer & Co, there are questions over how such rules would be policed, as well as over the arbitrary 60-day cut-off.
It’s tempting to pin responsibility for the solution on government and Big Tech. But the problem goes far beyond the technical or the legislative: the proliferation of deepfakes affects our very perception of truth. Even with disclaimers, can we really trust ourselves to remember what was true and what was false in the blur of social media scrolling? Look back at Khan’s case: the artificiality of the video was quickly established, but arguably the damage had already been done.
The waters of reality are being muddied and there is only so much regulation can do to fix that. Every one of us is responsible for how we consume media. We are fast heading into a world with twin problems: either we question everything we see, including genuine news that could alert us to dangers in our politics or our personal lives, or we believe everything we see and fall victim to disinformation at every turn. There is, as Lock says, “a responsibility… to teach ourselves about media literacy” – to be more discerning in how we consume media, and in how we share it with our friends.
As AI gets better, distinguishing fact from fiction will undoubtedly become more difficult, but it is a problem we must all take responsibility for. Both of these risks threaten the integrity of democracy and civic institutions. If we wish to consume media in the same way many of us currently do, we must hold ourselves to higher standards.