New Gmail Phishing Attack Exploits AI Prompt Injection to Bypass Security

Phishing has always been about tricking people—but attackers are now also trying to trick the machines meant to protect them.

A new Gmail phishing campaign, analyzed by security researcher Anurag, goes beyond standard tactics like urgency and fake login notices. This time, the attackers embedded hidden AI prompts inside the email’s code, designed specifically to confuse automated defenses.

The phishing message itself looked familiar:

  • Subject line: Login Expiry Notice 8/20/2025 4:56:21 p.m.
  • Body: A warning that the user’s Gmail password was about to expire, urging them to “confirm” their credentials.

For recipients, this is classic social engineering—urgency, impersonation of official branding, and pressure to act fast.

But the real twist happens behind the scenes. Hidden in the email’s source code was text crafted to resemble prompts for large language models (LLMs) such as ChatGPT or Gemini. These “prompt injections” attempt to hijack the very AI-driven security tools that Security Operations Centers (SOCs) now rely on to analyze threats.

Instead of flagging the email for malicious links, an AI system could get sidetracked by the injected instructions—looping into irrelevant reasoning or generating misleading interpretations. The result: misclassification, delayed alerts, or worse, the phishing email slipping through undetected.
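To make the technique concrete, here is a minimal detection sketch. The style heuristics and injection phrases below are illustrative assumptions, not indicators taken from the reported campaign: the idea is to scan an email’s HTML for text that is styled to be invisible to human readers yet contains instruction-like phrases aimed at an LLM.

```python
import re
from html.parser import HTMLParser

# Inline styles often used to hide text from humans while leaving it
# readable to machine parsers (illustrative heuristics, not exhaustive).
HIDDEN_STYLE = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|font-size\s*:\s*0", re.I
)

# Phrases typical of LLM prompt injections (hypothetical examples).
INJECTION_PHRASES = re.compile(
    r"ignore (all )?previous instructions"
    r"|you are an? (ai|assistant)"
    r"|classify this (email|message) as",
    re.I,
)

class HiddenTextScanner(HTMLParser):
    """Collects text that sits inside elements styled to be invisible."""

    def __init__(self):
        super().__init__()
        self._hidden_tags = []   # stack of currently open hidden elements
        self.hidden_text = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        if HIDDEN_STYLE.search(style):
            self._hidden_tags.append(tag)

    def handle_endtag(self, tag):
        if self._hidden_tags and self._hidden_tags[-1] == tag:
            self._hidden_tags.pop()

    def handle_data(self, data):
        if self._hidden_tags and data.strip():
            self.hidden_text.append(data.strip())

def find_injections(html: str) -> list[str]:
    """Return hidden text fragments that look like LLM instructions."""
    scanner = HiddenTextScanner()
    scanner.feed(html)
    return [t for t in scanner.hidden_text if INJECTION_PHRASES.search(t)]
```

A scanner like this is only a first-pass filter: attackers can vary both the hiding technique (CSS classes, off-screen positioning, matching foreground/background colors) and the injection wording, so pattern lists of this kind need constant curation.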

This marks a dangerous evolution in phishing tactics. Attackers are no longer just exploiting human psychology—they’re also targeting machine intelligence.

For defenders, that means strategies must adapt. Organizations now need to secure not just their people from social engineering but also their AI systems from manipulation.


Control F5 Team
Blog Editor