Deepfake technology will be “massively problematic” in the run-up to next year’s US Presidential Election, Jess Kelly has warned.
Facebook owner Meta has announced new rules that will require political advertisers to declare when social media posts have been produced using AI or manipulated using digital tools.
The rules will govern all posts on Facebook and Instagram – and they will be enforced using a mix of human and AI fact-checkers.
The rules come into force in the new year and the company plans to enforce them globally.
On Newstalk Breakfast this morning, Newstalk Tech Correspondent Jess Kelly said the rules will not prevent deepfakes from impacting on political life.
“It can be very serious and I think the scariest part of the deepfakes is, even if Meta and other platforms do put labels on videos, it may not be instantaneous,” she said.
“So, a post could be up online for say 12 hours, but a lot of damage can be done in 12 hours. A lot of people will see a video and believe the content.”
Jess noted that one of the most entertaining examples of deepfakes going viral was this year’s image of the Pope in a puffa jacket.
“Absolutely everybody loved it,” she said. “The amount of people who shared it on to me saying, ‘Oh my God, this is gas, look at the style of him, he clearly shops in Zara Man’ – but it wasn't a genuine image.
“I think particularly when we come closer to or as we progress closer to the US presidential election, this is going to be massively problematic.
“We had an instance earlier in the year where Meta produced a deepfake of Mark Zuckerberg, showcasing how convincing the videos can be and there's no denying from a tech fan’s point of view, it's incredible technology – but when it's being manipulated and when people believe the content, that is when it's very troubling.”
Under Meta’s new rules, advertisers will be obliged to disclose whenever a post contains a photorealistic image or video, or realistic-sounding audio, that was digitally created or altered to:
- Depict a real person as saying or doing something they did not say or do.
- Depict a realistic-looking person who does not exist.
- Depict a realistic-looking event that did not happen.
- Alter footage of a real event that happened.
- Depict a realistic event that allegedly occurred, but that is not a true image, video, or audio recording of the event.
Advertisers do not need to make a disclosure when content is created or altered in ways that are inconsequential or immaterial to the claim, assertion, or issue raised in the ad.
Meta said examples of this include size adjustment, cropping, colour correction and image sharpening.
Jess said we can’t trust people to use their own common sense when it comes to deepfakes.
“I think that's one of the big lessons that we've learned - particularly with social media - when it comes to political figures and political issues over the last number of months and indeed years,” she said.
“Because you have people who are willing and wanting to believe that certain politicians are not great people or that they are hiding something.
“You know, if you scroll on any part of the dark web, or even if you go on to certain people’s profiles on X/Twitter, you will see the tripe that they're pushing out there.
“If they can create a video or a photograph that supports their narrative, regardless of the truth of it or not, that is deeply worrying.”
Jess said the nature of deepfake content makes it “very difficult to police” – but said she believes progress is being made, “which is a good thing”.