- Hidden URL fragments allow attackers to manipulate AI assistants without user knowledge
- Some AI assistants automatically send sensitive data to external endpoints
- Misleading guidance and fake links can appear on otherwise normal websites
Many AI browsers are facing scrutiny after researchers detailed how a simple fragment in a URL can be used to influence browser assistants.
New research from Cato Networks found that the “HashJack” technique allows malicious instructions to sit silently after a hashtag in an otherwise legitimate link, creating a path to secret commands that remain invisible to traditional monitoring tools.
The assistant processes the hidden text locally, meaning the server never receives it and the user continues to see a normal page while the browser follows instructions they never wrote.
Behavior of assistants when processing fragments
Testing showed that certain assistants attempt autonomous actions when exposed to these fragments, including actions that transfer data to remote locations controlled by an attacker.
Others present misleading guidance or promote links that mimic trusted sources, giving the impression of a normal session while altering the information provided to the user.
The browser continues to display the correct website, making the intrusion difficult to detect without close inspection of the Assistant’s response.
Major technology companies have been notified of the issue, but their responses varied considerably.
Some vendors implemented updates to their AI browser capabilities, while others assessed the behavior as expected based on existing design logic.
Companies said defenses against indirect speed manipulation depend on how each AI assistant reads hidden page instructions.
Regular traffic inspection tools can only observe URL fragments that leave the device.
Therefore, conventional security measures provide limited protection in this scenario because the URL fragments never leave the device for inspection.
This forces defenders to go beyond network-level review and examine how AI tools integrate with the browser itself.
Stronger supervision requires attention to local behavior, including how assistants process hidden context that is invisible to users.
Organizations need to use stricter endpoint protection and tighter firewall rules, but these are only one layer and do not solve the visibility gap.
The HashJack method illustrates a vulnerability unique to AI-assisted browsing, where legitimate websites can be weaponized without leaving traditional traces.
Awareness of this limitation is critical for organizations deploying AI tools, as traditional monitoring and defense measures cannot fully capture these threats.
How to stay safe
- Limit personal information shared online.
- Monitor financial accounts for unusual activity.
- Use unique, complex passwords for all accounts.
- Verify URLs before logging into websites.
- Be wary of unsolicited messages or calls claiming to be from financial institutions.
- Deploy antivirus software to protect devices from malware.
- Enable firewalls to block unauthorized access.
- Use identity theft protection to monitor personal information.
- Recognize that sophisticated phishing campaigns and AI-powered attacks still pose a risk.
- Efficiency depends on consistent deployment across devices and networks.
Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews and opinions in your feeds. Be sure to click the Follow button!
And of course you can too follow TechRadar on TikTok for news, reviews, video unboxings, and get regular updates from us on WhatsApp also.



