Executive impersonation through business email compromise (BEC) attacks is already a huge problem, and it is about to get worse due to the surge of manipulated media known as deepfakes.
In 2021, more cybersecurity companies will get into the deepfake-detection business to help industries combat this troubling threat.
Executive impersonation attacks, where we are now: business email compromise (BEC)
Stories of BEC, where cybercriminals impersonate C-level executives via email to get employees to divulge sensitive data or take actions against the company’s interest, have been common for the last several years. According to ZDNet, the average loss per BEC complaint is $75,000. In comparison, ransomware averaged $4,400 per complaint.
Executive impersonation attacks, the next generation: deepfakes
BEC attacks are effective because, instead of relying on sophisticated yet malevolent technology like ransomware, they rely on social engineering; they rely on deception. The rise of convincing deepfakes that impersonate C-level executives using media manipulated with the help of artificial intelligence (AI) makes this threat even more daunting to combat.
This is why researchers at the Dawes Centre for Future Crime at University College London have ranked audio and video impersonation deepfakes as the most harmful AI-related criminal threat in a study published in August 2020.
Audio and video deepfakes
What are deepfakes?
Deepfakes, sometimes known as synthetic media, are media manipulated by AI in a way that is compelling and difficult to detect.
Criminals targeting CEOs and other executives often use manipulated audio and video deepfakes of the executive’s likeness. Fake audio of an executive could be used to ask employees to make wire transfers or buy gift cards, with the funds directed to the cybercriminals.
A deepfake video generated using actor Amy Adams as the original with actor Nicolas Cage’s face edited onto hers. (Source: towardsdatascience.com)
It's not just deception, it's also blackmail
Deepfakes are also used for corporate reputation attacks; realistic likenesses of company executives can be used to blackmail the company. If a deepfake shows an executive in a negative light, such as making embarrassing or damaging remarks in audio, or doing something disreputable or violent on video, cybercriminals can threaten to release this falsified material to the public unless a ransom is paid.
Why CEOs are good targets
CEOs are particularly vulnerable to deepfake attacks because many of them speak on public platforms, meaning there is a rich source of publicly available audio and video recordings for the AI to use to generate convincing deepfakes.
For about $100, a malevolent actor can have a deepfake video generated using their target’s likeness, and the ROI can be huge. For example, the Wall Street Journal reported that a company was defrauded of $243,000 when a deepfake audio recording of the CEO was used to execute a wire transfer to a false supplier.
How the cybersecurity industry is stepping up to fight back
There are two ways to fight deepfake fraud. The first is the people aspect: offering social engineering and deception fraud prevention training to staff using security awareness training tools. The second is deepfake-detection technology.
While the deepfake-detection market is still nascent, some big players are entering the field. For example, Microsoft announced its new tool, Microsoft Video Authenticator, in September 2020 to detect deepfake videos, and McAfee announced the launch of the McAfee Deepfake Lab in October 2020.
Traditional cybersecurity companies poised to address this market
Cybersecurity companies are particularly well suited to tackle the deepfake-detection market. Because they already develop tools to combat malicious email and software, they are well positioned to extend their offerings to combat malicious deepfakes.
For instance, Zemana, which was founded in 2007 as a cybersecurity company focused on endpoint protection and malware detection, has pivoted to deepfake detection with its new Zemana Deepware.ai product, which is in beta at the time of writing.
G2 does not (yet) have a category for deepfakes and other types of disinformation detection, but we are keeping a close eye on this market in 2021. Once a minimum of six deepfake-detection software products are available to buyers on the market, we will add a new category for this type of software.
Disclaimer: I am not a lawyer and am not offering legal advice. If you have legal questions, consult a licensed attorney.
Merry Marwig is a senior research analyst at G2 focused on the privacy and data security software markets. Using G2’s dynamic research based on unbiased user reviews, Merry helps companies best understand what privacy and security products and services are available to protect their core businesses, their data, their people, and ultimately their customers, brand, and reputation. Merry's coverage areas include: data privacy platforms, data subject access requests (DSAR), identity verification, identity and access management, multi-factor authentication, risk-based authentication, confidentiality software, data security, email security, and more.