Santander, a leading global bank, has announced a new initiative to combat the growing threat of AI deepfake scams.
The campaign highlights the increasing risk posed by advanced generative AI technologies and aims to educate the public on recognising and protecting themselves from these sophisticated fraud tactics.
The primary motivation behind the campaign is to address the alarming lack of awareness among Brits regarding deepfakes.
Recent data from Santander reveals that over half of the population (53%) either have not heard of the term “deepfake” or misunderstand what it means.
This lack of knowledge is concerning, particularly as just 17% of people feel confident in their ability to identify a deepfake video.
In this latest effort, the company has teamed up with Timi Merriman-Johnson, popularly known as ‘finfluencer’ Mr Money Jar, to create deepfake videos of Merriman-Johnson and Santander’s fraud lead, Chris Ainsley.
These videos are designed to demonstrate how realistic deepfakes have become and to educate the public on spotting such scams. The campaign includes social media dissemination to maximise reach and impact.
Deepfakes are convincingly manipulated videos, images, or sounds of real people created using artificial intelligence.
With the availability of deepfake generators and software, fraudsters can easily fabricate realistic footage of their targets, often using content sourced from social media or other online platforms.
Santander’s research indicates that many people have encountered deepfakes, with 36% of Brits having knowingly watched one. These fake videos are most commonly seen on social media platforms such as Facebook (28%), X (formerly Twitter) (26%), TikTok (23%), and Instagram (22%).
Chris Ainsley, Head of Fraud Risk Management at Santander, emphasised the urgency of the issue: “Generative AI is developing at breakneck speed, and we know it’s ‘when’ rather than ‘if’ we start to see an influx of scams with deepfakes lurking behind them. We already know fraudsters flood social media with fake investment opportunities and bogus love interests, and unfortunately, it’s highly likely that deepfakes will begin to be used to create even more convincing scams of these types.
“More than ever, be on your guard: just because something might appear legitimate at first sight doesn’t mean it is. Look out for those telltale signs, and if something, or someone, appears too good to be true, it probably is just that.”
Additional findings from the data reveal that 54% of Brits are most concerned about deepfake technology being used to steal money, followed by concerns about its use in election manipulation (46%) and the creation of fake biometric data (43%).
Furthermore, 78% of respondents expect fraudsters to exploit this technology, and 59% report being more suspicious of what they see or hear due to deepfakes.
Mr Money Jar, online ‘finfluencer’ Timi Merriman-Johnson, commented on the rapid development of AI: “The rate at which generative AI is developing is equal parts fascinating and terrifying. It is already very difficult to spot the difference between deepfake videos and ‘real’ ones, and this technology will only get better from this point forward. This is why it’s very important for users to be aware of the ways in which fraudsters use technology like this to scam people.
“As I said in the video, if something sounds too good to be true, it probably is. People don’t tend to broadcast lucrative investment opportunities on the internet. If you are ever in doubt as to whether a company or individual is legitimate, you can always search for them on the Financial Conduct Authority Register.”
Santander’s fraud experts also offer key tips to avoid falling victim to deepfake scams. These include looking out for imperfections such as blurring around the mouth, less frequent blinking than normal, or odd reflections.
They also advise applying common-sense questions: Is this too good to be true? If it is real, why isn’t everybody doing this? If it is legitimate, why are they asking me to lie to my family and/or bank?
Copyright © 2024 RegTech Analyst