
The dangers of artificial intelligence: Deepfakes and malicious actors

Over the past couple of years, the world has watched as artificial intelligence (AI) and deepfake technology have rapidly advanced, bringing with them a whole range of possibilities for their use - both good and bad. But what happens when you can almost perfectly digitally recreate a person’s likeness, or create incredible photographs from nothing? These technologies have opened the door to new, real-world risks, including misinformation, reputational damage, and fraud.

What are deepfakes?

Deepfakes are media that depict real or non-existent people. They are created by AI and may be in the form of video, images, or audio. While deepfakes can be used in non-malicious ways, such as by film studios for special effects, they can also be used to trick people into believing that the deepfake’s subject has said or done something that they haven’t.

The rise of AI in cybercrime and malicious activities

As can be expected with a technology that allows for manipulation of digital reality, AI has also become a tool for cybercriminals.

In 2023, a hacker used a voice deepfake to impersonate an IT team member and trick another employee into providing them with a multi-factor authentication (MFA) code. This then allowed the hacker to add their device to the employee’s account and access company data.

In another incident, a finance worker was tricked into transferring $25 million to scammers after the cybercriminals used AI to pose as the company’s chief financial officer in a video call.

With the use of deepfake technology, it’s now easier than ever for cybercriminals to impersonate people and manipulate others into handing over cash or data. However, some parties have turned AI against the scammers themselves. For example, O2 has created an “AI grandma” named Daisy to waste the time of phone scammers and keep them from extorting money from real people.

The dangers of artificial intelligence to businesses and individuals

Deepfake technology also poses a risk to individuals and businesses, thanks to the ease of image, video and audio manipulation and generation. As discussed in the examples above, deepfakes can be used to manipulate employees and trick them into allowing scammers access to company data or money. But they can also cause reputational damage, as it has never been simpler to create and distribute fake media that can paint a business and/or its employees in a bad light.

As AI continues to develop and deepfakes become more realistic, there is also an increased difficulty in verifying authenticity. This can lead people who are relatively internet-savvy to be tricked into spreading fake media that they believe is real, in turn damaging the reputation of the individual or business featured. In a lighter example, in 2023 a fake image of Pope Francis wearing a Balenciaga puffer jacket circulated online and received over 20 million views.

How to spot a deepfake and the use of AI

So how do you detect a deepfaked video? According to MIT Media Lab and The Guardian, you should look out for the following:

- Is the video focused on the face? Most deepfakes focus on the face.

- Does the skin appear too smooth or too wrinkly?

- Do the person’s facial features look real? For example: moles, facial hair, eyebrows.

- Does the blinking seem natural?

- Is there any sign of pixelation around the head?

There are also online tools and programs, such as SightEngine, that will estimate the likelihood that AI was used to create a video. Bear in mind, though, that these tools can produce inconsistent results, so treat their verdicts with caution.

How to protect your business against deepfakes and AI used by malicious actors

As well as encouraging them to use AI detection tools, you should make sure your employees receive regular, thorough training on cyber safety. In the first example we discussed, the hacker used deepfaked audio to trick the employee, but the employee should never have given out their MFA code over the phone, even to someone they believed was a member of the IT team. Using multi-factor authentication wherever possible, and training staff never to share codes, should help secure your business’s data.

Cyber insurance is also important. In the event that a hacker does obtain access to your systems, having cyber insurance in place will provide compensation for loss of income, including where caused by damage to your reputation.

Protect yourself against the risks of AI

As AI continues to evolve, so do the associated risks. Social media sites such as Facebook are tagging deepfake videos as fake, and other websites are putting similar measures in place. But deepfakes are now so believable that some will always slip through the cracks, so it’s up to users to stay vigilant when browsing online.

We stood by you as Towergate, now we’re standing by you as Everywhen

Our new name reflects exactly what we stand for: being here for you, “always” and “at all times” (which is the literal definition of Everywhen). While our name has changed, we still offer an expert team, great service and we now have the added benefit of being part of a business united by a shared purpose.

Let’s talk

Having cyber insurance in place can help your care home get back on its feet in the event of a data breach. Get in touch with James Anscombe on 07967 850015 or email james.anscombe@everywhen.co.uk. You can also visit our website to find out more.

Jason Brown

Head of Product - Care, Charity and Medical Malpractice

Jason Brown is a respected leader in the care insurance industry with over 15 years’ experience. He works across a number of insurance areas including commercial insurance and medical malpractice.

His current role is Head of Product - Care, Charity and Medical Malpractice at Everywhen. Everywhen combines regional care with national reach, deep sector knowledge and strong insurer relationships to deliver tailored solutions across 55+ schemes. We help our clients navigate everyday and emerging risks with confidence, always and at all times.

Consistent with our policy when giving comment and advice on a non-specific basis, we cannot assume legal responsibility for the accuracy of any particular statement. In the case of specific problems, we recommend that professional advice be sought.