
2021 Trends in Combating Deepfake Impersonation Attacks

December 8, 2020

This post is part of G2's 2021 digital trends series. Read more about G2's perspective on digital transformation trends in an introduction from Michael Fauscette, G2's chief research officer, and Tom Pringle, VP of market research, and find additional coverage of trends identified by G2's analysts.

Cybersecurity companies step toward deepfake detection

Executive impersonation through business email compromise (BEC) attacks is already a huge problem, and it is about to get worse due to the surge of manipulated media known as deepfakes.

Executive impersonation attacks, where we are now: business email compromise (BEC)

Stories of BEC, where cybercriminals impersonate C-level executives via email to trick employees into divulging sensitive data or taking actions against the company's interests, have been common for the last several years. According to ZDNet, the average loss per BEC complaint is $75,000; ransomware, in comparison, averaged $4,400 per complaint.

Executive impersonation attacks, the next generation: deepfakes

BEC attacks are effective because, instead of relying on sophisticated malicious technology like ransomware, they rely on social engineering; they rely on deception. The rise of convincing deepfakes impersonating C-level executives through media manipulated with the help of artificial intelligence (AI) makes the threat even harder to combat.

This is why researchers at the Dawes Centre for Future Crime at University College London have ranked audio and video impersonation deepfakes as the most harmful AI-related criminal threat in a study published in August 2020.

Audio and video deepfakes

What are Deepfakes?

Deepfakes, sometimes known as synthetic media, are media manipulated by AI in a way that is compelling and difficult to detect. 

Criminals targeting CEOs and other executives often use manipulated audio and video deepfakes of the executive's likeness. Fake audio of an executive could be used to ask employees to make wire transfers or buy gift cards that are funneled to the cybercriminals.

A deepfake video generated using actor Amy Adams as the original with actor Nicolas Cage's face edited onto hers. (Source: towardsdatascience.com)

It's not just deception, it's also blackmail

Deepfakes are also used for corporate reputation attacks: realistic likenesses of company executives can be used to blackmail the company. If a deepfake shows an executive in a negative light, such as making embarrassing or damaging remarks in audio or doing something disreputable or violent on video, cybercriminals can threaten to release the falsified media to the public unless a ransom is paid.


Why CEOs are good targets

CEOs are particularly vulnerable to deepfake attacks because many of them speak on public platforms, providing a rich source of publicly available audio and video recordings that the AI can use to generate convincing deepfakes.

For about $100, a malicious actor can have a deepfake video generated using a target's likeness, and the ROI can be huge. For example, the Wall Street Journal reported that a company was defrauded of $243,000 when deepfake audio of its CEO's voice was used to order a wire transfer to a false supplier.

How the cybersecurity industry is stepping up to fight back

There are two ways to fight deepfake fraud. The first is the people aspect: training staff to recognize social engineering and deception fraud using security awareness training tools. The second is deepfake-detection technology.

Want to find tools to train your staff against fraud? Explore G2's security awareness training software category. 


While the deepfake-detection market is still nascent, some big players are entering the field. For example, Microsoft announced its Microsoft Video Authenticator tool for detecting deepfake videos in September 2020, and McAfee announced the launch of the McAfee Deepfake Lab in October 2020.

Traditional cybersecurity companies poised to address this market

Cybersecurity companies are particularly well suited to tackle the deepfake-detection market. Because they already build tools to combat malicious email and software, they are well positioned to extend their offerings to combat malicious deepfakes.

For instance, Zemana, founded in 2007 as a cybersecurity company focused on endpoint protection and malware detection, has pivoted to deepfake detection with its new Zemana Deepware.ai product, which is in beta at the time of writing.

What's next?

G2 does not (yet) have a category for deepfakes and other types of disinformation detection, but we are keeping a close eye on this market in 2021. Once a minimum of six deepfake-detection software products are available to buyers on the market, we will add a new category for this type of software.

Disclaimer: I am not a lawyer and am not offering legal advice. If you have legal questions, consult a licensed attorney.
