In tandem with developments in cybersecurity technology, cybercriminals have grown more innovative in their attack techniques. Threat actors are leveraging advanced technologies such as Artificial Intelligence (AI) and Machine Learning (ML) to launch Deepfake attacks.
What is a Deepfake?
Deepfakes are images, audio, and video crafted with AI and ML technologies to pass as legitimate content. Using Deepfake technology, threat actors can replace a person's voice or likeness in a recording to manipulate information. Deepfakes are often used to confuse audiences and fuel disinformation campaigns targeting well-known personalities.
In addition to spreading disinformation, Deepfake technology is often misused for malicious purposes, including scams, election manipulation, social-engineering attacks, identity theft, and financial fraud.
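To make the technique behind face-swap Deepfakes concrete, the sketch below illustrates the classic shared-encoder/dual-decoder idea: one encoder learns person-independent features (pose, lighting, expression), while each person gets a dedicated decoder. This is a minimal, untrained linear toy model for illustration only; the dimensions, weights, and function names are assumptions, and real Deepfake systems use deep convolutional networks trained on thousands of frames.

```python
# Minimal sketch of the shared-encoder / dual-decoder idea behind
# face-swap Deepfakes. Untrained linear toy model; illustrative only.
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM = 64 * 64   # flattened grayscale "face" frame
LATENT = 32         # size of the shared latent representation

# A single encoder is trained on faces of BOTH people, so the latent
# space captures person-independent features (pose, expression, light).
W_enc = rng.normal(scale=0.01, size=(LATENT, IMG_DIM))

# Each person gets a dedicated decoder that reconstructs THEIR face
# from the shared latent code.
W_dec_a = rng.normal(scale=0.01, size=(IMG_DIM, LATENT))
W_dec_b = rng.normal(scale=0.01, size=(IMG_DIM, LATENT))

def encode(face):
    """Map a flattened frame to the shared latent code."""
    return W_enc @ face

def decode(latent, W_dec):
    """Reconstruct a face from a latent code with a person's decoder."""
    return W_dec @ latent

# The "swap": encode a frame of person A, but decode it with person B's
# decoder -- the output carries B's identity with A's pose/expression.
frame_of_a = rng.random(IMG_DIM)
latent = encode(frame_of_a)
fake_frame = decode(latent, W_dec_b)

print(latent.shape)      # (32,)
print(fake_frame.shape)  # (4096,)
```

In a real system the encoder and both decoders are trained jointly as two autoencoders that share the encoder, which is what forces the latent space to become person-independent and makes the decoder swap produce a convincing forgery.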
Deepfake Attack – A Growing Cybersecurity Threat
According to security researchers at CyberCube, the spread of Deepfake video and audio content could become a major security threat to businesses globally within the next two years. The researchers also anticipate that organizations' growing dependence on video-based communication will motivate cybercriminals to focus on Deepfake attacks.
“As the availability of personal information increases online, criminals are investing in technology to exploit this trend. New and emerging social engineering techniques like deep fake video and audio will fundamentally change the cyber threat landscape and are becoming both technically feasible and economically viable for criminal organizations of all sizes,” said Darren Thomson, CyberCube’s head of cybersecurity strategy.
“There is no silver bullet that will translate into zero losses. However, underwriters should still try to understand how a given risk stacks up to information security frameworks. Training employees to be prepared for deep fake attacks will also be important,” Thomson added.