The Next Frontier of Social Engineering: Generative AI

You'd have to be living under a rock these days not to have seen an article about Generative AI, such as ChatGPT, Google Bard, or image generators like Stable Diffusion. Breathless descriptions of the powers of these tools permeate the literature and social media. To be fair, Generative AI is an extremely powerful new technology, and I believe it has quite a bit of promise to boost productivity and augment human potential. However, as we learn to live with our new machine assistants, I think it's also important to understand their superpower: they are really good at making things up. In fact, they are really good at making up things that sound convincing, even when they are completely false. That's why I think one of the biggest threats they pose is helping attackers create novel content that will supercharge all sorts of social engineering and phishing attacks.

Phishing, spearphishing, and other social engineering attacks rely on human interaction to trick people into revealing sensitive information or taking actions that harm themselves or their organization. Generative AI can be used to craft realistic-looking social engineering lures that are very difficult to detect. Many of these models have been trained on text data scraped from across the internet, and much of that publicly available data can be leveraged by an adversary to create the perfect email, SMS, or watering hole website that will convince users to click a malicious link.

So, when faced with such a powerful mechanized onslaught, what's a poor human to do? Luckily, some of our existing controls will still thwart even the most convincing phishing attack.

  • Email filtering and malware detection will still work against malicious links, suspicious attachments, or embedded malware.

  • Strong phishing-resistant MFA or passwordless authentication is a great line of defense against social engineering attacks aimed at individual users or the service desk.

  • Good intelligence and even general awareness of the latest social engineering scams help blunt the advantage adversaries may gain from generating these convincing attacks.
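
To make the first control concrete, here is a deliberately toy Python sketch of link filtering. The blocklist, the lookalike-domain heuristic, and every hostname in it are purely illustrative assumptions on my part; real email filters rely on continuously updated threat intelligence and many more signals than this.

```python
from urllib.parse import urlparse

# Hypothetical, hard-coded examples for illustration only.
SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}
KNOWN_BAD_HOSTS = {"login-verify-account.example"}

def is_suspicious_link(url: str) -> bool:
    """Flag a URL using a few crude, illustrative heuristics."""
    host = urlparse(url).hostname or ""
    if host in KNOWN_BAD_HOSTS:
        return True
    # Flag lookalike hosts that embed a brand name but are not
    # actually on that brand's domain (e.g. paypal.evil.xyz).
    if "paypal" in host and not host.endswith("paypal.com"):
        return True
    return any(host.endswith(tld) for tld in SUSPICIOUS_TLDS)
```

Even this naive version catches the classic lookalike pattern, which is exactly the kind of lure Generative AI makes cheap to produce at scale.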

Future countermeasures may include techniques that detect content created by Generative AI. These could serve as more powerful filters in our email systems and browsers, and could augment human intuition in spotting machine-generated misinformation designed to circumvent our security. Of course, they might also detect blog posts from lazy CTOs actually written by ChatGPT. Probably should have edited that out.
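As a rough illustration of the kind of signal such a detector might use, here is a naive Python sketch that scores text by sentence-length "burstiness" (human writing often varies sentence length more than some machine-generated text). The heuristic and the threshold are hypothetical and nowhere near a production detector; real systems combine many signals and still make mistakes.

```python
import re
from statistics import mean, pstdev

def burstiness_score(text: str) -> float:
    """Ratio of sentence-length standard deviation to mean length.
    Lower scores mean unusually uniform sentences, one weak signal
    sometimes associated with generated text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths) / mean(lengths)

def looks_generated(text: str, threshold: float = 0.2) -> bool:
    # Hypothetical cutoff chosen purely for illustration.
    return burstiness_score(text) < threshold
```

A single statistic like this is easy to fool, which is why any real filter would treat it as one feature among many rather than a verdict.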



Steve Giovannetti

Steve is the Founder and CTO of Hub City Media
