Earlier this year, the following headline ran in the Daily Mail Online: “AI-generated photos and videos pose threat to General Election”. Similarly, citing comments from Dame Wendy Hall, who co-chaired the Government’s 2017 AI review, The Independent ran with “Too late to prevent AI risk to elections this year”.
There was rightly a great amount of concern surrounding our election integrity and the role that artificial intelligence, social media and disinformation would play. But did we see the AI interference that was predicted in the UK General Election? The short answer: no.
AI and the election
One of the biggest concerns surrounding AI and the election was that of generative AI – tools such as ChatGPT and DeepAI that can easily and cheaply create convincing text, images, and videos.
The fear was that, as these tools became increasingly accessible, the barrier to entry for creating and promoting disinformation had been lowered. There's a history of this: around the world, generative AI tools have already been used to deceive people, whether that is the Pope wearing designer clothes or fake images of Donald Trump with Black supporters.
The reality is that in the recent UK election we simply did not see the tidal wave of AI-generated election disinformation that some had predicted.
That’s not to say it wasn’t present. The BBC’s disinformation correspondent Marianna Spring discovered, through setting up dummy social media profiles, that TikTok’s algorithm was pushing AI-generated content to potential voters. But, by far the biggest AI story of the election was an unexpected one: Minecraft.
In a series of videos shared to social media, political leaders including Nigel Farage, Keir Starmer, Boris Johnson and Ed Davey appeared to live stream themselves playing the popular video game. They would build houses, set traps for each other, and burn down each other's islands. These videos were incredibly convincing, and the AI voice clones and video replications of each individual were lifelike.
But they were, fundamentally, harmless. While some leaders felt the need to clarify that the videos weren't legitimate, they were widely received on social media as knowingly fake: people recognised them as satire, not as genuine attempts to deceive. That does not mean nobody fell for them. Luke Tryl, director of the UK polling company More in Common, was seen on BBC Newsnight remarking on the resources Nigel Farage was dedicating to creating such Minecraft videos.
Old-school lies
Amid all the fears of artificial intelligence and sophisticated information operations, it was actually the Sunak-Starmer tax row that demonstrated how sometimes it’s the most basic forms of (mis)information that can have the biggest impact.
The claims of £2,094 in additional tax under a Labour government were lambasted as 'lies' by Keir Starmer but made a big splash on social media.
The claim was made on linear broadcasting and was then screenshotted, clipped and shared to social media platforms. It was a very simple and basic form of message dissemination, one that didn't rely on sophisticated technologies, and it reminded us not to forget the basics of how false content spreads.
Tainted truth
How we talk about AI and disinformation matters. The very concern surrounding AI can be more harmful than the AI itself. Warnings of disinformation can contribute to what’s called the ‘tainted truth effect’, the process whereby we start to doubt legitimate information due to being primed for false information.
This happens because people essentially overcorrect. We are constantly told to watch out for the effects of AI, social media bots, and disinformation, and as a result we sometimes see them even when they aren't there. We need to balance awareness with alarm.
In my view, AI will do to disinformation what social media did. Disinformation existed long before platforms such as Facebook and X, but those platforms made it easier to spread false content. AI has done the same: it has become easier, cheaper and quicker to deceive people en masse.
The next election
We won’t ever know the extent to which AI is used behind the scenes. It’s likely all the political parties’ campaigns in the past six weeks used AI, whether that was to microtarget undecided voters with adverts or report back on the constituencies likely to be lost.
AI was figuratively and literally on the ballot at this election, but it did not have the impact many of us expected. Earlier this year I wrote that 2024 would be a year of lessons learned when it comes to AI and elections, and that we could face something never seen before, or we could be having a Y2K moment where our concerns simply don't come to fruition.
The lesson learned is that we need to strike a fine balance between educating the public on AI, and alarming them with claims of vast deception and manipulation. It appears that this time round the UK electorate was more robust than we thought, but as these AI tools become more and more sophisticated, only time will tell what a future election might entail.