
Beware: AI Chatbots Are Leading Users to Dangerous Phishing Sites!
2025-07-21
Author: Nur
AI Chatbots Mislead Users with Incorrect Login Links
A shocking new report reveals that AI chatbots frequently direct users to phishing websites when they ask for legitimate login URLs for major services. Security firm Netcraft tested GPT-4.1-based models, and the findings are alarming.
Startling Statistics on Login Links
Across 50 well-known brands analyzed, a staggering 34% of the suggested login links were inactive, unrelated, or potentially dangerous. This highlights a troubling trend in how users reach websites through AI-generated responses.
Key findings from the study include:
- 29% of URLs were unregistered or inactive, putting them at risk of hijacking.
- 5% redirected users to entirely different businesses.
- 66% accurately linked to official brand domains.
Everyday Queries, Dangerous Responses
What's concerning is that Netcraft's prompts mirrored typical user inquiries, like asking for a login page after losing a bookmark. This indicates that even casual requests can lead users into dangerous territory, as chatbots often deliver results with alarming confidence.
A Real-Life Phishing Example
In one chilling incident, the AI-powered search engine Perplexity directed users to a phishing site hosted on Google Sites instead of the genuine Wells Fargo login page. Rather than the official URL, users were sent to: hxxps://sites[.]google[.]com/view/wells-fargologins/home.
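The URL above is written in "defanged" form (hxxps for https, [.] for .), a common convention for sharing malicious links without making them clickable. As a minimal sketch, restoring such a string for analysis might look like this (illustrative only; real threat feeds use many more defanging variants):

```python
def refang(defanged: str) -> str:
    """Undo two common defanging conventions: hxxp(s) -> http(s), [.] -> .
    Illustrative helper, not an exhaustive parser."""
    return (defanged.replace("hxxps", "https")
                    .replace("hxxp", "http")
                    .replace("[.]", "."))

print(refang("hxxps://sites[.]google[.]com/view/wells-fargologins/home"))
```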
The phishing site expertly mimicked Wells Fargo’s branding, making it easy for unsuspecting users to be tricked.
Small Brands Bear the Brunt
Smaller businesses, especially regional banks and credit unions, are more vulnerable. Netcraft found that these institutions are less likely to be represented in AI training data, which increases the risk of the AI generating false or misleading information.
The fallout for these smaller organizations could be severe, hitting not just their finances but also their reputations, and potentially drawing regulatory scrutiny.
Cybercriminals Targeting AI Systems
The report also sheds light on a concerning cybercriminal tactic: creating content designed to be easily ingested by AI systems. Netcraft identified over 17,000 phishing pages disguised as legitimate information, specifically targeting crypto users. Imagine: people misled by AI into trusting malicious links.
In other disturbing findings, a fake API called ‘SolanaApis’ was built to mimic blockchain services and promoted with blog posts, forum discussions, and numerous fake developer accounts. Several victims unwittingly used this malicious API in their public projects.
A Call for Brand Vigilance in AI Outputs
As traditional defensive measures like defensive domain registration fall short against the myriad potential domains an AI can invent, brands are urged to adopt proactive monitoring strategies. Tracking how they are presented in AI outputs will soon take center stage.
A Word of Caution for Users
This report serves as a vital reminder for users: approach AI-generated recommendations with skepticism. When seeking login pages, it remains safer to use traditional search engines or type URLs directly instead of relying on potentially misleading links provided by chatbots.
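That caution can also be automated. As a minimal sketch (using a hypothetical `is_official_host` helper and a naive suffix check that does not consult the Public Suffix List), a user or tool could compare a chatbot-suggested link's hostname against a brand's known official domain before following it:

```python
from urllib.parse import urlparse

def is_official_host(url: str, official_domain: str) -> bool:
    """Return True only if the URL's host is the official domain or a
    subdomain of it. Naive illustrative check: it rejects lookalike hosts
    like 'wellsfargo.com.evil.example' but is not a full validator."""
    host = (urlparse(url).hostname or "").lower()
    official = official_domain.lower()
    return host == official or host.endswith("." + official)

# The Google Sites phishing page from the report would fail this check:
print(is_official_host(
    "https://sites.google.com/view/wells-fargologins/home",
    "wellsfargo.com"))  # a genuine wellsfargo.com link would pass
```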