Deepfakes and Their Possible Impact on Business Data Security

By 2035, artificial intelligence could double economic growth rates in Canada and around the world. It’s not just legitimate businesses, however, that will take advantage of AI-based tools. Cybercriminals will also exploit the technology to boost their income. 

While there are several ways to do this, deepfakes may pose one of the greatest threats. This article will examine what deepfakes are, the threat they pose, and how you can defend against these attacks. 

How Do Deepfakes Work? 

You’ve already seen what Photoshop can do: remove elements from a photo or add new ones, completely changing the scene. Deep learning allows AI to take things a step further. The software tracks different aspects of someone’s face and voice and creates a reasonable facsimile.

This video from the BBC is an excellent example of this technology at work. 

Why Are Deepfakes Dangerous? 

The clip from the BBC shows that, with the correct software, you can make anyone appear to do almost anything. However, the presenter points out that the examples given are relatively easy to spot as fakes. That isn’t because they lack realism, but because the actions of the person in them are implausible. 

If the videos were more plausible, it would be easy to believe they were real. Over the last couple of decades, we’ve learned that we can’t always trust text or pictures. But when someone takes a video, it must be real, right? 

Not anymore, thanks to deepfakes. 

How Can Deepfakes Affect Your Business? 

To date, most of the videos used by cybercriminals center on blackmailing the victim. The bad actors might, for example, create a fake pornographic image or video. They then threaten to share it on social media if the victim doesn’t do what they demand.

In a corporate environment, the idea of this kind of material getting out is embarrassing enough. There is, however, a more insidious risk: criminals targeting the business’s bank balance. 

Criminal Blackmail That Affects Your Firm’s Reputation

The criminals may, for example, create a fake video of a key employee saying or doing something that your clients will view as wrong. With today’s cancel culture, very few people examine the validity of the claims. Such videos, even if eventually proven fake, can damage your business reputation. 

 

Attacks That Make Conventional Phishers Seem Like Amateurs

Deepfakes often work because we believe what we see and hear. Unfortunately, an incident in Europe in 2019 showed just how dangerous that belief can be.

The CEO of a UK-based energy firm received a call from someone he believed was his boss, the chief executive of the firm’s German parent company. The caller asked him to transfer a sum equivalent to $243,000 to a supplier in Hungary. It was a very clever scam using AI-based technology to spoof the boss’s voice, a case later described by the firm’s insurer, Euler Hermes Group SA, and its fraud expert Rüdiger Kirsch. 

The CEO never had any doubt that he was speaking to his boss. The AI-based software used by the criminals got all the details right, including the tone and inflections his boss used. 

It was only when the fraudsters tried to elicit a second and third payment that the CEO realized that something was amiss. Those two attempts failed, but the criminals made a reasonable amount of money considering how little work was involved. 

It’s reasonable to expect fraudsters to ramp up these types of attacks in the future. All they need is: 

  • Information about the company and its key team members (all on your website) 
  • Sample pictures of the target employee (easy to obtain online and through social media) 
  • Samples of the person’s voice (simple to get by calling in or looking for presentations online) 

The software required to perform this fakery is freely available online. All someone needs to pull it off is a computer with the right graphics capabilities and a little knowledge of how the company runs. 

Protecting Against Deepfakes

There is already software that protects against deepfakes. Using an AI-based engine, these programs identify potentially fake videos by analyzing light reflections and other visual artifacts in the image. 
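
To make the idea concrete, here is a minimal sketch of how such a detector might be wired into a review workflow: it samples frames from a video file and asks a classifier to score each one. The detector object and its predict() method are assumptions made for this example, not any real product’s API; only the OpenCV calls used to read frames are real library functions.

```python
# Minimal sketch: "detector" is a hypothetical object whose predict() method
# returns a fake-probability between 0 and 1. Only the OpenCV (cv2) calls
# used to read frames are real library APIs.
import cv2


def score_video(path: str, detector, sample_every: int = 30) -> float:
    """Return the fraction of sampled frames the detector flags as likely fake."""
    capture = cv2.VideoCapture(path)
    flagged = sampled = index = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of the video
            break
        if index % sample_every == 0:  # sample roughly one frame per second
            sampled += 1
            if detector.predict(frame) > 0.5:
                flagged += 1
        index += 1
    capture.release()
    return flagged / sampled if sampled else 0.0
```

A high score would flag the clip for human review; it wouldn’t prove anything on its own.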

This technology is helpful with material published online, but does it protect you if someone calls you? Can it stop this type of attack as it occurs? Not quite yet. At this point, your best protection comes from security awareness training, better data protection, and intelligent internal policies. 

Security Awareness Training 

Security awareness training highlights the potential vectors of attack. Had the CEO in our earlier example been aware of the technology, he would have been more careful before making the first transfer.

Many companies run phishing tests on employees to see whether they can recognize an attack. Firms could run similar tests using deepfake software available online. You might, for example, set up a fake video call and see if you can fool the employee. 

Security awareness training will also help employees better understand the dangers of sharing information online. 

Better Data Protection

Here we refer not only to the protection of customer data. Companies must also make a greater effort to keep internal procedures confidential. Authorization processes, for example, should be closely guarded. 

Smart Internal Policies

Using a multi-factor authentication system is an excellent way to prevent unwanted access to your data. Companies can apply a similar principle to payments by requiring that more than one person sign off on every money transfer. 
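
As a rough illustration of that dual-control idea, the sketch below models a transfer request that cannot be released until two different people, neither of them the requester, have approved it. The class, field names, and threshold are assumptions made for this example, not features of any particular payments system.

```python
from dataclasses import dataclass, field

# Illustrative only: names, threshold, and workflow are assumptions for this
# sketch, not a reference to any specific payments product.


@dataclass
class TransferRequest:
    amount: float
    beneficiary: str
    requested_by: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # The person who raised the transfer can never approve it themselves.
        if approver == self.requested_by:
            raise ValueError("Requester cannot approve their own transfer")
        self.approvals.add(approver)

    def is_releasable(self, dual_control_threshold: float = 10_000.0) -> bool:
        # Large transfers need two distinct approvers; small ones need one.
        required = 2 if self.amount >= dual_control_threshold else 1
        return len(self.approvals) >= required
```

Under a rule like this, even a perfectly spoofed phone call from the boss would only produce a pending request, never a released payment.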

It’s more challenging to halt the dissemination of information. Employees may be fooled into giving information to someone posing as a client, for example. The best way to combat this is to provide customers with robust self-service options. Employees should then be given clear guidance on what security information to request from clients making telephone inquiries. 
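
One simple way to make that guidance concrete is a short verification checklist that staff must complete before discussing account details over the phone. The checks listed here are examples only; each firm would define its own.

```python
# Hypothetical checklist: the required items are examples, not a standard.
REQUIRED_CHECKS = {
    "account_reference_confirmed",
    "registered_number_or_callback_used",
    "security_question_answered",
}


def may_discuss_account(completed_checks: set) -> bool:
    """Allow the call to proceed only if every required check has been completed."""
    return REQUIRED_CHECKS.issubset(completed_checks)


# Example: a caller who has only confirmed the account reference is refused.
print(may_discuss_account({"account_reference_confirmed"}))  # False
```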

Final Notes

Cybercriminals are becoming more sophisticated. AI-based tools are as useful for them as they are for the rest of us. To avoid becoming a victim of a deepfake, firms must ensure that their employees understand the signs of this attack vector.