
FFS explains: Why is Keir Starmer audio ‘leak’ likely to be fake?

The Labour Party conference took place in recent days with the party keen to promote its policies ahead of the next general election. 

Leader Keir Starmer spoke on Tuesday, but audio purporting to be a leaked recording of him was shared on social media platforms on Sunday. 

In the audio, which was shared by an X (formerly Twitter) user, Starmer’s voice was heard swearing and criticising someone. 

Another clip from the same user purported to record Starmer criticising Liverpool, which was the location of this year’s conference. 

Experts told Ferret Fact Service it is likely the clip was created using artificial intelligence (AI) software.

Ferret Fact Service | Scotland's impartial fact check project

What was the audio that was shared? 

Two audio clips were circulated online, both of which recorded a voice that sounded like Keir Starmer’s. 

In the first clip, Starmer is seemingly complaining about a tablet device, swearing at another person who is not featured in the audio. The poster says it is a recording of Starmer verbally abusing a party staff member.  

In the second clip, the voice resembling Starmer’s says he was shouted at and called a “Tory” by a Liverpool resident, then complains about having to hold the Labour Party conference in the city, claiming: “I f***ing hate Liverpool.”

Where was it shared? 

The two clips were shared on X (formerly Twitter) by a user who claimed to have “obtained audio of Keir Starmer verbally abusing his staffers at conference” and a “secret recording of Keir Starmer at conference, this time appearing to take aim at the city of Liverpool”.

The clips were also shared widely on TikTok and other social platforms, and were promoted by public figures including George Galloway. 

Why is it likely to be false? 

Identifying AI audio is very difficult, but there are often simple clues that can cast doubt on a recording. The user who posted the clips appears to have made them for comic effect to mock Starmer, and has retweeted a number of comments suggesting they were artificially generated. 

Oli Buckley, professor of cyber security at the University of East Anglia, tells Ferret Fact Service that AI audio can be tough to spot, but there are certain clues that something is artificially generated. 

“Often you will have things like unnatural pauses or patterns of speech,” he explains. “The rhythm or flow of what they’re saying may not match up with the words.”

Regarding the alleged Starmer audio clip, Professor Buckley says: “The emphasis is not where you’d expect it. When he’s swearing and seemingly becoming annoyed (based on the words he’s saying) his tone isn’t keeping up with that. It’s all quite flat and level.”

Other telltale signs include the cadence and tempo of speech, he explains.

“Often the stresses are placed on the wrong words.” 

According to Madeline Roache, UK managing director of NewsGuard, a disinformation research group, AI has “transformed the disinformation landscape”. 

“Unlike AI-generated images and video, AI-generated audio seems to be more difficult to reliably detect, which may be putting people at greater risk of misinformation.”

How easy is it to make AI audio fakes? 

Technology has moved quickly in this area, and there are multiple free websites that will use AI to make a short audio clip resembling someone’s voice, requiring little technical knowledge of the process.

“For around £5 a month you can have access to something that will let you create a relatively convincing clone from as little as one minute of clean audio,” Buckley explains. 

It is easier to create AI clips of a public figure’s voice because there is likely to be plenty of audio online of them talking with limited background noise, which can be used to synthesise their voice. 

Are we going to see more of this in future? 

The answer is “potentially”, says Buckley. 

“The biggest hurdle is the patience to collect and tidy up any audio samples you can get. This might mean you trim out other people talking or try to remove background noise, but even without doing that you can make a fairly convincing clone quite quickly.”

Roache says AI serves as “a powerful tool for generating new kinds of disinformation and enables it to spread faster and cheaper than before.”

While this is thought to be the first major AI audio fake to gain traction in UK politics, a recent example was identified in Slovakia. 

Thousands of social media users shared an audio recording, allegedly of a conversation between Michal Šimečka, the chairman of Progresívne Slovensko (PS), and Monika Tódová, a journalist, in which they discussed how to rig the country’s then forthcoming election. Agence France-Presse (AFP) found the recording was a hoax created with AI and synthetic voice technology. 

Main image: Keir Starmer speaking at the 2020 Labour Party leadership election hustings in Bristol. Credit: Rwendland

Ferret Fact Service (FFS) is a non-partisan fact checker, and signatory to the International Fact-Checking Network fact-checkers’ code of principles.

All the sources used in our checks are publicly available and the FFS fact-checking methodology can be viewed here.

Want to suggest a fact check?

Email us at factcheck@theferret.scot or join our Facebook group.
