Facebook has announced a ban on so-called 'deepfakes' as part of its efforts to crack down on misleading videos shared on its platform.
A deepfake is a video clip that has been manipulated using technology so that a person appears to say something they never actually said.
Such clips typically use artificial intelligence or machine learning to 'merge, replace or superimpose' content onto a video.
One widely shared deepfake last year - created by artists for an exhibition called 'Spectre' - manipulated footage of Facebook's own Mark Zuckerberg.
Announcing the ban, Facebook said the policy will apply to any video that "has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say".
Any videos found to be breaking the rules will be removed from the platform.
However, the policy will not cover videos that have simply been edited to omit words or change their order.
It also won't apply to videos that are parody or satire.
The company said that other manipulated videos will continue to be examined by fact-checkers and labelled as 'fake' where necessary.
Facebook said: "If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem.
"By leaving them up and labelling them as false, we’re providing people with important information and context."
The move comes ahead of the US election in November, with the campaign likely to lead to fresh scrutiny of social media platforms' policies on manipulated content and 'fake news'.