AI-Assisted Social Engineering: When Humans Become the Weak Link

In the ever-evolving risk landscape, one often-overlooked vulnerability remains constant: human nature. What does change is the technology that allows adversaries to exploit this weak link. In an era dominated by digital innovation, the rise of deepfakes has introduced a new dimension to how we perceive reality and approach security.

What Are Deepfakes?

Deepfakes are a form of synthetic media created using deep learning techniques, particularly deep neural networks. The term “deepfake” is a combination of the words “deep learning” and “fake.” These sophisticated algorithms analyse and learn patterns from large datasets, enabling them to generate highly realistic and often deceptive content, such as videos, images, or audio.  
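
For readers curious about the mechanics, the sketch below shows, in Python with PyTorch, a toy generative adversarial network (GAN), one common deep-learning architecture used to produce deepfakes. It is a minimal illustration under simplified assumptions, not a real deepfake pipeline.

    # A toy generative adversarial network (GAN): a generator learns to produce
    # fakes that a discriminator can no longer tell apart from real samples.
    # Dimensions are deliberately tiny; this is a sketch of the idea, not a
    # working deepfake system.
    import torch
    import torch.nn as nn

    generator = nn.Sequential(            # noise in, synthetic sample out
        nn.Linear(16, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.Tanh(),
    )
    discriminator = nn.Sequential(        # sample in, "is it real?" score out
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Sigmoid(),
    )
    loss = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(200):
        real = torch.randn(32, 64)              # stand-in for real media samples
        fake = generator(torch.randn(32, 16))   # synthetic samples from noise

        # The discriminator learns to separate real from fake...
        d_opt.zero_grad()
        d_loss = (loss(discriminator(real), torch.ones(32, 1))
                  + loss(discriminator(fake.detach()), torch.zeros(32, 1)))
        d_loss.backward()
        d_opt.step()

        # ...while the generator learns to fool it. Over many iterations the
        # fakes become progressively harder to distinguish from the real thing.
        g_opt.zero_grad()
        g_loss = loss(discriminator(fake), torch.ones(32, 1))
        g_loss.backward()
        g_opt.step()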

The technology behind deepfakes continues to evolve, making it increasingly challenging to distinguish manipulated content from authentic media. This has raised concerns about the potential misuse of deepfakes for deceptive purposes, including creating false narratives, impersonating individuals, or generating fabricated evidence.

Hello? Who’s there? 

Imagine this situation: you are a branch manager at a multinational corporation. One day, you receive a call from a familiar voice, the director of your parent company. Excitement fills the air as the director shares some promising news: the company is on the brink of a significant acquisition, and your authorisation is needed for transfers totalling $35 million. In your inbox are emails from both the director and the company lawyer confirming the details of the transactions. Everything checks out, and you begin making the necessary transfers.

Well, the joke is on you! This was a real-life example of deepfakes in action, in which an AI-generated voice was used to socially engineer an employee into sending money, and such cases are becoming more and more common. A similar incident took place in the UK, where fraudsters used voice-generating AI software to mimic the voice of a chief executive; the imitation was so convincing that an employee was persuaded to transfer the funds. Nor are organisations the only targets: virtually anyone can fall victim to this type of fraud, as this example from Florida shows.

This might seem like a novelty threat, but according to Nina Schick, author of “Deep Fakes and the Infocalypse: What You Urgently Need to Know”, “This is not an emerging threat. This threat is here, now”, and risk managers should be aware of it.

Guardians Against the Illusion: Strategies to Combat AI-Assisted Social Engineering 

Fighting AI-assisted social engineering is not an easy task, particularly where deepfakes are concerned. It requires a multifaceted approach involving technology, education, regulation, and individual awareness. How can you make your organisation more resilient to this type of threat?

Investing in ongoing training and awareness programmes to educate employees about the dangers of AI-assisted social engineering can be a very effective strategy, because social engineering relies on human error. Users who receive more security training demonstrate a higher success rate in defending against phishing attacks than those who receive less. Provide clear guidelines on how to recognise and respond to potential threats: how to identify suspicious communications, verify the identity of individuals, and practise safe digital hygiene. Speaking at ISC2 Security Congress, Radcliffe argued that “unfortunately it’s a very technical problem that can only be solved by a human solution, which is knowing what to look for”. Teach your employees what to look for, and strengthen your awareness programmes.

Technical solutions, such as watermarking and multi-factor authentication (MFA), can also help minimise this particular risk, but they too can be undermined by social engineering. This may push organisations towards other measures, such as AI tools that scan the messages, URLs, and metadata of incoming content for phishing or deepfake indicators (a simplified sketch of the idea follows below), or towards hardening existing technology, for example by making MFA phishing-resistant. These adjustments can, however, be time-consuming and costly to implement.
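
To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of heuristic checks such a scanning tool might run against an incoming email. Every indicator, pattern, and name in it is an illustrative assumption, not a description of any real product.

    # Hypothetical heuristics a scanning tool might apply to an incoming email.
    # The indicator list, patterns, and thresholds below are illustrative
    # assumptions, not a production detection engine.
    import re
    from email import message_from_string
    from urllib.parse import urlparse

    SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}  # example list, assumption

    def phishing_indicators(raw_email: str) -> list[str]:
        msg = message_from_string(raw_email)
        findings = []

        # Indicator 1: a Reply-To domain that differs from the From domain.
        from_domain = (msg.get("From") or "").rsplit("@", 1)[-1].rstrip(">")
        reply_domain = (msg.get("Reply-To") or "").rsplit("@", 1)[-1].rstrip(">")
        if reply_domain and reply_domain != from_domain:
            findings.append(f"Reply-To domain {reply_domain!r} differs from From domain {from_domain!r}")

        body = msg.get_payload() if isinstance(msg.get_payload(), str) else ""

        # Indicator 2: links on risky TLDs, or hosts unrelated to the sender.
        for url in re.findall(r"https?://\S+", body):
            host = urlparse(url).hostname or ""
            if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
                findings.append(f"URL on suspicious TLD: {url}")
            elif from_domain and from_domain not in host:
                findings.append(f"URL host {host!r} does not match sender domain")

        # Indicator 3: urgency and payment language typical of social-engineering lures.
        if re.search(r"\b(urgent|immediately|wire transfer|confidential)\b", body, re.I):
            findings.append("Urgency/payment language in body")

        return findings

    sample = ("From: director@corp-example.com\n"
              "Reply-To: director@mail-corp.example\n\n"
              "Please wire transfer the funds immediately: "
              "https://login.corp-payments.xyz/confirm")
    print(phishing_indicators(sample))  # three findings for this message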

To minimise this risk, you could also develop robust policies and procedures that specifically address AI-related security threats. Establish clear protocols for verifying the authenticity of digital content and for communicating sensitive information. Implement verification mechanisms for key, high-impact decisions, for instance by introducing another human link in the decision-making process, as sketched below.
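
As a sketch of what such a verification mechanism might look like in practice, the following Python fragment models a simple “four-eyes” rule; the threshold, names, and workflow are assumptions for illustration.

    # A minimal "four-eyes" control: transfers above a threshold execute only
    # after two distinct approvers have been verified out of band (for example,
    # a call-back to a number on file rather than the channel the request came
    # in on). Threshold and names are illustrative assumptions.
    from dataclasses import dataclass, field

    APPROVAL_THRESHOLD = 10_000  # example value, assumption

    @dataclass
    class TransferRequest:
        amount: float
        beneficiary: str
        approvals: set[str] = field(default_factory=set)

        def approve(self, approver: str, verified_out_of_band: bool) -> None:
            # An approval only counts if the approver's identity was confirmed
            # on a separate, trusted channel.
            if verified_out_of_band:
                self.approvals.add(approver)

        def can_execute(self) -> bool:
            required = 2 if self.amount >= APPROVAL_THRESHOLD else 1
            return len(self.approvals) >= required

    request = TransferRequest(amount=35_000_000, beneficiary="Acquisition escrow")
    request.approve("branch.manager", verified_out_of_band=True)
    print(request.can_execute())  # False: a second, independent human is still required
    request.approve("company.lawyer", verified_out_of_band=True)
    print(request.can_execute())  # True: two verified approvals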

AI Trickery 

In the ongoing battle against AI trickery, one thing is clear: the number of these technology-powered threats is rising, but so is awareness of the phenomenon. Whether it is sharpening our detection skills or deploying AI-powered tools to outwit cybercriminals, every effort counts in fortifying organisational resilience.


We can help you today

If you want to see what the Human Risks platform can do for your company, contact us today.
