Social media and news sites are awash with video and audio clips that add interest and an element of dynamism to static content. But can we really believe everything we see and hear?

Deepfake technology, the combination of Artificial Intelligence (AI) and deep learning with digitally manipulated images and/or sounds, is a growing reality. While the technology was originally created for benevolent purposes, like anything else it has the potential to be misused by those with malicious intentions. Last year, the UAE was hit with a heist that cost a Dubai-based company millions.

Are businesses prepared for the deepfake threat?

Innocent beginnings

Deepfake technology uses existing images, video, and audio along with AI and machine learning to create highly realistic, digitally manipulated clips depicting people saying things they have not actually said. It is also the technology behind many social media filters that enable face swapping and other seemingly harmless, fun effects. It uses facial mapping and deep learning to mimic features, movements, sounds, and more, and the results can be highly convincing.

It was also originally developed with the best of intentions. Take, for example, the viral 2019 malaria-awareness campaign featuring David Beckham. In the video, Beckham appears to speak nine different languages, and the results are highly realistic.

Deepfake audio is also used to make film and television dubbing more realistic for audiences in different countries. There are many examples of the technology being used for good, as was its intention, but there is significant potential for malicious use.

📺 A deepfake video released by the Government of Gabon showing ex-President Ali Bongo delivering the annual New Year's address on 31 December 2018.

Deepfakes combined with other threats

Fake voice and video clips have severe political implications, but they also affect businesses. The Dubai heist illustrates how this type of attack can be expected to play out: the fake voice call was used to add validity to e-mails that had already been sent, making the entire transaction seem more credible.

While most people now know not to click on a link in a suspicious e-mail, deepfakes can be used to erode that caution. If we receive an e-mail followed by a phone call seemingly verifying its authenticity, the call lends credence and removes some of the suspicion.

Deepfakes can also be used to extract more information from people over the phone, a channel we often believe is more secure than an e-mail trail.


Defending against the deepfake threat

There is, unfortunately, no silver-bullet technology that will enable organizations to protect themselves from attacks using deepfakes. The same is true of phishing: that threat is decades old and is still difficult to defend against without addressing the human element. Video-authentication tools exist that can assign a confidence rating to a clip's authenticity, since AI-generated media is not perfect and often leaves identifiable traces, but this is not a foolproof solution and cannot be applied to every video or voice clip.

Education needs to form the crux of any cyber-defense strategy, and deepfakes only add to this requirement. It is also essential to address processes, ensuring that employees who handle money as part of their job are empowered with the right security checks and verifications to prevent a repeat of the Dubai heist. There should be no exceptions on compliance and due diligence, especially in the political and financial services sectors, where the risk of these types of attacks is higher.

Above all, there needs to be a healthy dose of skepticism around what we see and hear, especially on social media. Fake news is a real issue, and the only way to defend against it is to verify. The same is true of deepfake threats. When in doubt, always take steps to check and verify authenticity.

— By Kumar Vaibhav
