Brave says AI browsers can be tricked—here’s how (and why India should care)
If your browser starts acting like a super-helpful intern, ask a tougher question: who’s whispering in its ear? Brave’s security team has shown that AI browsers—including Perplexity’s Comet—can be manipulated by “indirect prompt injection.” Translation: a malicious webpage (or even a screenshot of it) can hide instructions your AI agent will blindly follow. Not great when your tabs include UPI, netbanking, or corporate email.
This isn’t another vague scare. Brave published concrete research, timelines, and demo attacks. Other outlets and researchers have echoed the concern. And yes, OpenAI’s new Atlas-style agentic browsing ideas face similar risks unless the fundamentals are fixed.
What Brave’s researchers actually found
Brave’s posts (Aug 20 and Oct 21, 2025 IST) document two nasty pathways:
- Hidden text on webpages: AI agents treat untrusted page content as part of your “intent,” so attackers can plant invisible or low-contrast text that instructs the agent to open sensitive sites, read email, or exfiltrate data.
- Prompt injection via screenshots: Some AI browsers let you screenshot a page and “ask” about it. Brave showed that near-invisible text embedded in images can carry malicious instructions the model obeys.
They also published disclosure timelines (e.g., Oct 1–21 for Perplexity Comet) and stressed the issue is systemic across agentic browsers, not just one product.
And it’s not just Comet
Coverage across security blogs and tech media points to a broader class of “agentic browsing” problems: if a browser gives an AI the authority to click, fetch, and summarize across multiple sites while logged in, a hostile page can steer the AI. Tech outlets have generalized Brave’s warning beyond Comet, and independent researchers (e.g., LayerX) described “CometJacking”-style hijacks that leak emails or calendar data. In short: the architecture is the problem, not one brand.
Did vendors patch this?
Perplexity has said it addressed specific reports, but even with patches, Brave’s position is clear: the pattern remains risky until AI browsers reliably separate trusted user intent from untrusted web content and gate sensitive actions. Multiple sources emphasize that defenses must be deeper than quick filters.
Why Indian users should care (today, not “someday”)
- Banking & UPI: Logged-in sessions (netbanking, UPI portals) are catnip for injection attacks. If your AI agent can navigate with your cookies, that’s a high-value target.
- Gov & work apps: From Income Tax e-filing to DigiLocker to corporate SSO—these tabs matter.
- Small businesses & creators: If your browser “assistant” helps with orders, invoices, or emails, injection can misdirect messages or data.
Practical guardrails you can use now
- Use a separate, boring browser for sensitive tasks (banking/UPI/Gov/HRMS). Keep the AI browser for research only.
- Disable autonomous actions in agentic tools. Force a human confirmation for anything involving login, payments, email, calendar, or file access.
- Treat screenshots as inputs from the internet. Don’t assume an image is “safe text.”
- Least-privilege logins: Separate work and personal profiles; avoid keeping everything logged in everywhere.
- Update aggressively: If you still test AI browsers, stay current with versions and read their security notes.
- Security hygiene: 2FA on banking and email, password manager, and phishing skepticism (AI won’t fix that for you).
What needs to change under the hood
Brave’s write-ups imply the fix is architectural:
- Trust boundaries: Models must treat page content (and images) as hostile by default.
- Action gating: Sensitive steps need explicit user confirmation, ideally with a readable diff of what the agent intends to do.
- Context compartmentalization: Cookies and tokens shouldn’t freely flow into the model’s “thoughts.”
- Better red-teaming for multimodal (text+image) prompt injection, not just text sanitization.
Until these are standard, “AI browser” should mean “AI-assisted reading,” not “AI acting for you while you’re logged in everywhere.”
The balanced view
AI help in the browser is genuinely useful—summaries, translations, quick comparisons. But giving that helper agency without robust isolation turns every webpage into a potential puppeteer. Brave’s research doesn’t say “never use AI”; it says “don’t give it the keys to your vault.” That’s a distinction worth remembering the next time a browser offers to “handle this for you.”
Risks and unknowns
- Residual exposure: Even after patches, undisclosed routes (image steganography, CSS tricks, SVG text) may bypass filters.
- Vendor transparency: Not all vendors publish timelines or technical details, making it hard to verify claims.
- User behavior: Convenience wins. People will re-enable autonomy unless products make safe defaults non-annoying.
If you’re in India and testing these tools, segregate your browsing, gate actions, and assume any page (or screenshot) might talk your AI into doing things you didn’t intend.
Brave’s team didn’t just find bugs; they spotlighted a design flaw across AI browsers: mixing untrusted content with trusted action. Until vendors rebuild around strict trust boundaries and consent, treat AI browsing like a sharp knife—great in the right hands, dangerous when left unattended near your wallet.