Microsoft is introducing a new tool to identify deepfakes – photos, audio, or videos that have been artificially manipulated to spread disinformation.
Deepfakes, which AI can generate convincingly enough to evade human detection, can make subjects appear to say things they never said, or to be in places they never visited.
Because deepfakes are produced by AI systems that continue to learn, they will eventually evade conventional detection technology. However, according to Tom Burt, Corporate Vice President of Customer Security & Trust, and Eric Horvitz, Chief Scientific Officer at Microsoft, advanced detection tools such as the company's new AI-powered Video Authenticator can still help identify them.
“Video Authenticator can analyse a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated. In the case of a video, it can provide this percentage in real-time on each frame as the video plays.
“It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye,” they said.
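The description above suggests a simple mental model: score each frame of a video and report a per-frame confidence that it was manipulated. The Python sketch below illustrates that loop. The edge-energy heuristic inside score_frame is a hypothetical stand-in for Microsoft's trained detector, which is not public, and the clip.mp4 path is illustrative.

```python
# Sketch of per-frame manipulation scoring, in the spirit of the
# Video Authenticator description above. The scoring heuristic and
# threshold are hypothetical placeholders, not Microsoft's detector.
import cv2  # pip install opencv-python

def score_frame(frame) -> float:
    """Return a placeholder confidence (0-1) that the frame is manipulated.

    A real detector would run a trained classifier over the frame; this
    trivial edge-energy heuristic only makes the loop below runnable.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # High-frequency edge energy is one crude proxy for the kind of
    # blending-boundary artifacts a learned model might pick up on.
    edges = cv2.Laplacian(gray, cv2.CV_64F)
    return float(min(1.0, abs(edges).mean() / 50.0))

def scan_video(path: str):
    """Yield (frame_index, confidence) for each frame, mirroring the
    real-time per-frame percentage the article describes."""
    cap = cv2.VideoCapture(path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield idx, score_frame(frame)
        idx += 1
    cap.release()

if __name__ == "__main__":
    for i, conf in scan_video("clip.mp4"):
        print(f"frame {i}: {conf:.0%} chance of manipulation")
```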
Additionally, Microsoft will launch new technology to flag manipulated content and assure users that what they are viewing is authentic: producers attach digital hashes and certificates to their content, and a reader checks those hashes and certificates to confirm the content has not been altered and to show where it came from.
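As a rough illustration of that hash-and-certificate idea, the sketch below signs a SHA-256 digest of a piece of content on the producer's side and verifies it on the reader's side. A bare Ed25519 key pair stands in for the certificate handling the announcement describes; this is a minimal sketch, not Microsoft's actual implementation.

```python
# Producer signs a digest of the content; the reader re-hashes the
# content and verifies the signature. Any alteration changes the
# digest, so verification fails for tampered content.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def publish(content: bytes, key: Ed25519PrivateKey) -> bytes:
    """Producer side: sign a SHA-256 digest of the content."""
    digest = hashlib.sha256(content).digest()
    return key.sign(digest)

def verify(content: bytes, signature: bytes, public_key) -> bool:
    """Reader side: re-hash the content and check the signature."""
    digest = hashlib.sha256(content).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    footage = b"original newsroom footage"
    sig = publish(footage, key)
    print(verify(footage, sig, key.public_key()))               # True
    print(verify(footage + b" edited", sig, key.public_key()))  # False
```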
“As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.
“There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered,” said Burt and Horvitz.
Microsoft will distribute Video Authenticator through the AI Foundation’s Reality Defender 2020 (RD2020) initiative.