This alert may not be shared outside your organization, Do Not Repost or send, place on other websites, List servers, or send to others via email, including other associations or parties.  Members and Law enforcement use only. Contact us for any permissions.  To do otherwise will result in the loss of membership.

Complete Story
 

07/17/2025

Google AI Chatbot Target of Potential Phishing Attacks

PYMNTS

Researchers discovered a security threat in Google’s artificial intelligence chatbot.

AI security company 0din flagged the problem after it was altered to a security vulnerability in Google Gemini by a researcher, cybersecurity publication Dark Readings reported Monday (July 14).

At issue is a prompt-injection flaw that allows cybercriminals to design phishing or vishing campaigns by creating messages that appear to be legitimate Google security warnings, the report said. Fraudsters can embed malicious prompt instructions into an email with “admin” instructions. If a recipient clicks “Summarize this email,” Gemini treats the hidden admin prompt as its top priority and carries it out.

“Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated ‘security alert’ in the AI-generated summary,” 0din researcher Marco Figueroa wrote in a company blog post.

More Info

Printer-Friendly Version


Resources

Alerts

The FRPA alert system distinguishes us from other groups by gathering and providing information to law enforcement, retailers AND financial institutions.

more information
Resources

Resources

Your electronic library to help in fighting financial fraud for all of our partners.

more information