IBM Study Reveals Human-Crafted Phishing Emails Outperform AI, Urges Businesses to Prioritize Human-Centric Email Security


In a recent research initiative spearheaded by Chief People Hacker Stephanie "Snow" Carruthers and her team at IBM X-Force, the efficacy of phishing emails took center stage. The study, conducted in collaboration with a prominent healthcare company in Canada, delved into the comparison between phishing emails written by humans and those generated by AI, specifically ChatGPT.

The experiment aimed to shed light on the success rates of these two approaches, with a focus on a more personalized and business-oriented perspective. While two other organizations initially intended to participate, concerns about the potential success of phishing emails led them to withdraw from the study, underlining the real-world implications of such threats.

Customizing social engineering techniques to target businesses was a pivotal aspect of the research. Carruthers and her team discovered that, contrary to expectations, human-crafted phishing emails achieved a click rate 3 percentage points higher than those generated by ChatGPT. This finding challenges prevailing assumptions about the effectiveness of AI-driven phishing attacks.

The research also revealed a striking trade-off: using a large language model (LLM) to compose phishing emails proved far faster than the traditional, time-consuming manual approach. Carruthers noted that the X-Force Red team spent approximately 16 hours on research and personalization, while the LLM reduced the task to a mere five minutes. The speed and efficiency of AI in generating convincing content is a notable concern for businesses of all sizes.

In their experimentation, IBM researchers prompted ChatGPT to create a persuasive email mimicking an internal human resources manager, incorporating social engineering and marketing techniques. Meanwhile, the X-Force Red team crafted their own phishing email based on targeted research and experience. The results were clear: the human-written phishing email outperformed its AI counterpart, achieving a 14% click rate compared to 11%.
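Whether a 14% vs. 11% gap is meaningful depends on how many recipients were in each group, a figure the study does not disclose. As an illustration only, the sketch below runs a standard two-proportion z-test with a hypothetical sample size of 800 recipients per group (an assumption, not a number from IBM):

```python
import math

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int) -> float:
    """Two-proportion z-statistic for comparing two click rates."""
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled click rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (clicks_a / n_a - clicks_b / n_b) / se

# Hypothetical sample sizes: 800 recipients per group yields 112 clicks (14%)
# for the human email and 88 clicks (11%) for the AI email.
z = two_proportion_z(clicks_a=112, n_a=800, clicks_b=88, n_b=800)
print(round(z, 2))
```

Under these assumed numbers the z-statistic falls just short of the conventional 1.96 threshold for 5% significance, which is why the exact sample sizes matter when interpreting a 3-point gap.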

The researchers attribute the success of human-crafted emails to their ability to resonate with human emotional intelligence and their focus on specific programs within the organization rather than broader topics. This insight is crucial for small and medium-sized businesses looking to fortify their cybersecurity defenses against evolving threats.

Despite the experiment's findings, Carruthers emphasized that the use of generative AI in phishing attacks is not yet widespread. However, tools like WormGPT, a generative AI tool marketed on underground forums for malicious use, are already available, signaling potential risks ahead.

X-Force recommends taking the following precautions to keep employees from falling prey to phishing emails.

  1. If an email seems suspicious, call the sender to verify its origin.
  2. Don’t assume all spam emails will have incorrect grammar or spelling; instead, look for longer-than-usual emails, which may be a byproduct of AI generation.
  3. Train employees on how to avoid phishing by email or phone.
  4. Use advanced identity and access management controls such as multifactor authentication.
  5. Regularly update internal tactics, techniques, procedures, threat detection systems and employee training materials to keep up with advancements in generative AI and other technologies malicious actors might use.
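Some of the precautions above can be partially automated. The sketch below is a minimal, illustrative triage helper that flags emails from senders outside a trusted domain, unusually long bodies (a possible byproduct of AI generation, per the list above), and pressure language. The trusted domain, length threshold, and keyword list are all assumptions for the sketch, not values from the IBM study:

```python
import re
from email import message_from_string
from email.utils import parseaddr

# Assumed values for illustration only -- tune these for a real deployment.
TRUSTED_DOMAIN = "example.com"
LENGTH_THRESHOLD = 2000  # characters; unusually long bodies can hint at AI generation

def triage(raw_email: str) -> list[str]:
    """Return reasons an email warrants a manual follow-up call to the sender."""
    msg = message_from_string(raw_email)
    flags = []
    _, sender = parseaddr(msg.get("From", ""))
    if not sender.endswith("@" + TRUSTED_DOMAIN):
        flags.append("external sender claiming internal role")
    body = msg.get_payload()
    if isinstance(body, str) and len(body) > LENGTH_THRESHOLD:
        flags.append("unusually long body")
    if re.search(r"urgent|immediately|act now", str(body), re.IGNORECASE):
        flags.append("pressure language")
    return flags

sample = (
    "From: HR Team <hr@hr-benefits-update.net>\n"
    "Subject: Benefits update\n\n"
    "Please act now to keep your benefits."
)
print(triage(sample))
```

A helper like this complements, rather than replaces, the employee training and multifactor authentication recommended above: it surfaces candidates for the "call the sender" check instead of making a block/allow decision on its own.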

As phishing remains a prevalent vector for cybersecurity incidents, Carruthers recommends continuous vigilance and regular updates to internal security protocols. For businesses looking to bolster their defenses, adopting multifactor authentication and staying informed about advancements in generative AI and other technologies are imperative measures.
