Humans are still better at crafting phishing emails than AI — for now

AI-generated phishing emails, including those created by ChatGPT, pose a potential new threat to security professionals, says Hoxhunt.

Image: Gstudio/Adobe Stock

Amidst all the buzz surrounding ChatGPT and other AI applications, cybercriminals have already started using AI to create phishing emails. For now, human cybercriminals remain more effective at crafting convincing phishing attacks, but the gap is closing, according to a new report by security trainer Hoxhunt released Wednesday.

ChatGPT-generated vs. human-generated phishing campaigns

Hoxhunt compared phishing campaigns generated by ChatGPT to those generated by human beings to determine which were more likely to trick an unsuspecting victim.

To conduct this experiment, the company sent 53,127 users in 100 countries phishing simulations designed by either human social engineers or ChatGPT. Users received the phishing simulation in their inbox as they would receive any type of email. The test was set up to trigger three possible responses:

  1. Success: The user successfully reports the phishing simulation as malicious via the Hoxhunt threat reporting button.
  2. Miss: The user does not interact with the phishing simulation.
  3. Failure: The user takes the bait and clicks on the malicious link in the email.
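The three outcomes above can be sketched as a simple tally over an event log. This is an illustrative sketch only; the field names, actions, and data are assumptions, not drawn from the Hoxhunt study:

```python
from collections import Counter

# Hypothetical event log: one record per user, noting how that user
# handled the simulated phishing email. Data is made up for illustration.
events = [
    {"user": "u1", "action": "reported"},   # success
    {"user": "u2", "action": "ignored"},    # miss
    {"user": "u3", "action": "clicked"},    # failure
    {"user": "u4", "action": "clicked"},    # failure
    {"user": "u5", "action": "reported"},   # success
]

# Map raw actions to the three study outcomes described above.
OUTCOMES = {"reported": "success", "ignored": "miss", "clicked": "failure"}

def tally(events):
    """Count how many users fell into each of the three outcomes."""
    return Counter(OUTCOMES[e["action"]] for e in events)

def failure_rate(events):
    """Share of users who took the bait and clicked the malicious link."""
    return tally(events)["failure"] / len(events)

print(failure_rate(events))  # 0.4
```

Comparing this failure rate across the human-written and AI-written cohorts is essentially what the study's headline numbers report.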

The results of the phishing simulation led by Hoxhunt

In the end, human-generated phishing emails caught more victims than those created by ChatGPT. Specifically, the failure rate — the share of users who clicked the malicious link — was 4.2% for human-generated messages and 2.9% for AI-generated messages. By the report's measure, human social engineers outperformed ChatGPT by about 69%.

A positive takeaway from the study is that security training can prove effective in preventing phishing attacks. More security-conscious users were much more likely to resist the temptation to engage with phishing emails, whether they were generated by humans or AI. The percentage of people who clicked a malicious link dropped from more than 14% among users with little training to between 2% and 4% among those with more training.

SEE: Security awareness and education policy (TechRepublic Premium)

The results also differed by country:

  • US: 5.9% of surveyed users were fooled by human-generated emails, while 4.5% were fooled by AI-generated messages.
  • Germany: 2.3% were fooled by human-generated emails, while 1.9% were fooled by AI-generated messages.
  • Sweden: 6.1% were fooled by human-generated emails, while 4.1% were fooled by AI-generated messages.

Current cybersecurity defenses can still cover AI phishing attacks

Although human-generated phishing messages were more convincing than AI-generated ones, this result is likely to shift, especially as ChatGPT and other AI models improve. The test itself was conducted before the release of ChatGPT 4, which promises to be smarter than its predecessor. As AI tools evolve, they will pose a greater threat to organizations from the cybercriminals who use them for their own nefarious purposes.

The upside is that protecting your organization from phishing and other threats requires the same defense and coordination whether the attacks are human- or AI-generated.

“ChatGPT allows criminals to launch impeccably worded phishing campaigns at scale, and while this removes one key indicator of a phishing attack – bad grammar – other indicators are easily observable to the trained eye,” said Hoxhunt CEO and co-founder Mika Aalto. “As part of your holistic cybersecurity strategy, be sure to focus on your people and their email behavior because that’s what our adversaries are doing with their new AI tools.

“Embed security as a shared responsibility across the organization with ongoing training that empowers users to spot suspicious messages and rewards them for reporting threats until human threat detection becomes a habit.”

Security advice for IT and users

Toward this end, Aalto offers the following advice.

For IT and security

  • Require two-factor or multi-factor authentication for all employees who have access to sensitive data.
  • Give all employees the skills and confidence to report a suspicious email; the reporting process should be seamless.
  • Provide security teams with the resources needed to analyze and respond to employee threat reports.

For users

  • Hover over any link in an email before clicking on it. If the link appears out of place or unrelated to the message, report the email as suspicious to IT support or the support team.
  • Review the sender field to ensure that the email address contains a legitimate business domain. If the address refers to Gmail, Hotmail or another free service, the message is likely a phishing email.
  • Confirm a suspicious email with the sender before acting on it. Use a method other than email to contact the sender about the message.
  • Think before you click. Socially engineered phishing attacks attempt to create a false sense of urgency by urging the recipient to click on a link or engage with the message as quickly as possible.
  • Pay attention to the tone and voice of an email. Currently, AI-generated phishing emails tend to be written in a formal, stilted manner.
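Two of the checks above, the sender-domain test and the link-hover mismatch, lend themselves to a short sketch. This is a minimal illustration, not a real phishing filter; the domain list and helper names are assumptions introduced here:

```python
from urllib.parse import urlparse

# Small illustrative set of free mail providers; a real list would be longer.
FREE_MAIL_DOMAINS = {"gmail.com", "hotmail.com", "outlook.com", "yahoo.com"}

def sender_uses_free_domain(sender: str) -> bool:
    """Flag senders on a free mail service rather than a business domain,
    one of the red flags listed above."""
    domain = sender.rsplit("@", 1)[-1].lower()
    return domain in FREE_MAIL_DOMAINS

def link_text_mismatch(display_text: str, href: str) -> bool:
    """Flag links whose visible text shows one domain but whose actual
    target points somewhere else, which is what hovering reveals."""
    shown = urlparse(
        display_text if "://" in display_text else "https://" + display_text
    ).netloc.lower()
    actual = urlparse(href).netloc.lower()
    return bool(shown) and shown != actual

print(sender_uses_free_domain("billing@gmail.com"))  # True
print(link_text_mismatch("mybank.com/login", "https://evil.example/x"))  # True
```

The second check mirrors what a user does manually when hovering over a link: compare the domain the email displays with the domain the link actually targets.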

Read next: As a cyber security blade, ChatGPT can cut both ways (TechRepublic)
