Close Menu
Owen Daily

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Car crashes into people waiting to watch parade in Dutch town, injuring 9 people

    U.S. FDA approves Wegovy weight loss drug – National

    Nicki Minaj praises 'handsome' Donald Trump at Turning Point USA event – National

    Trending
    • Car crashes into people waiting to watch parade in Dutch town, injuring 9 people
    • U.S. FDA approves Wegovy weight loss drug – National
    • Nicki Minaj praises 'handsome' Donald Trump at Turning Point USA event – National
    • OpenAI says AI browsers can always be vulnerable to prompt injection attacks
    • MetaX, Moore Threads IPO exploded, but it's not easy for foreigners to participate
    • Canada changes chemical regulations to curb illegal fentanyl and other drugs – National
    • Jelly Roll receives pardon from Tennessee governor for drug and robbery convictions – National
    • A tough week for hardware companies
    Tuesday, December 23
    Owen Daily
    • Health
    • Latest News
    • Real Estate
    • Technology
    • Entertainment
    Owen Daily
    You are at:Home»Technology»OpenAI says AI browsers can always be vulnerable to prompt injection attacks
    Technology

    OpenAI says AI browsers can always be vulnerable to prompt injection attacks

    December 23, 202505 Mins Read
    Openai says ai browsers can always be vulnerable to prompt

    OpenAI is working to harden its Atlas AI browser against cyberattacks and acknowledges that prompt injection is a type of attack that manipulates an AI agent to follow malicious instructions hidden in web pages or emails. This is a risk that isn't going away anytime soon, raising questions about how securely AI agents can operate on the open web.

    “As with fraud and social engineering on the web, instant attacks are unlikely to be fully 'solved',” OpenAI said in a blog post on Monday, detailing how the company is hardening Atlas' defenses to counter the constant attacks. The company acknowledged that ChatGPT Atlas' “Agent Mode” “expands the surface of security threats.”

    OpenAI announced its ChatGPT Atlas browser in October, and security researchers have rushed to release a demo showing that you can change the behavior of the underlying browser by writing a few words in a Google Doc. On the same day, Brave published a blog post explaining how indirect prompt injection is an organizational challenge for AI-powered browsers, including Perplexity's Comet.

    OpenAI isn't the only company to realize that prompt-based injection isn't going away. Earlier this month, the UK's National Cyber ​​Security Center warned that prompt injection attacks on generative AI applications “may not be completely mitigated”, leaving websites at risk of falling victim to a data breach. UK government agencies have advised cyber experts to reduce the risk and impact of immediate injections, rather than thinking they can “stop” an attack.

    Regarding OpenAI, the company said, “We believe rapid injection is a long-term AI security challenge, and we need to continually strengthen our defenses against it.”

    What is the company's answer to this Sisyphean-like challenge? The company says its proactive, rapid response cycle is showing early promise in helping discover new attack strategies internally before they can be exploited “in the wild.”

    This is not entirely different from what competitors like Anthropic and Google claim. This means defenses must be layered and continually stress-tested to combat the persistent risk of prompt-based attacks. For example, recent efforts at Google have focused on architectural and policy-level controls for agent systems.

    But what OpenAI does differently is its “LLM-based automated attacker.” The attacker is essentially a bot trained by OpenAI using reinforcement learning to play the role of a hacker looking for a way to secretly send malicious instructions to an AI agent.

    Bots can test attacks in a simulation before actually using them, and the simulator shows how the target AI will think and act if it recognizes the attack. The bot can then study that response, fine-tune its attack, and try again and again. In theory, OpenAI's bots should be able to discover flaws faster than real-world attackers, since insights into the target AI's internal reasoning are inaccessible to outsiders.

    This is a common tactic in AI safety testing. Build an agent to find edge cases and quickly test it in simulation.

    “With our (reinforcement learning) training, an attacker can coax an agent into executing a lengthy, sophisticated, and harmful workflow that unfolds over dozens (or even hundreds) of steps,” OpenAI wrote. “We also observed new attack strategies that did not appear in human red teaming operations or external reports.”

    Image credit: OpenAI

    In a demo (partially pictured above), OpenAI showed how an automated attacker could sneak a malicious email into a user's inbox. Later, when the AI ​​agent scanned the inbox, it followed the instructions hidden in the email and sent a resignation message instead of creating an out-of-office reply. However, the company says that after a security update, “Agent Mode” was able to successfully detect the prompt injection attempt and flag the user.

    The company says prompt injections are difficult to defend against in a fool-proof manner, but it relies on extensive testing and faster patch cycles to harden systems before they appear in an actual attack.

    An OpenAI spokesperson declined to say whether Atlas' security updates led to a measurable reduction in successful injections, but said the company has been working with third parties to harden Atlas against rapid injections since before its launch.

    Rami McCarthy, principal security researcher at cybersecurity firm Wiz, said reinforcement learning is one way to continually adapt to an attacker's behavior, but it's only part of the picture.

    “A useful way to infer risk in an AI system is to multiply autonomy with access,” McCarthy told TechCrunch.

    “Agent browsers tend to be at the difficult end of the spectrum, which is a combination of moderate autonomy and very high access,” McCarthy said. “Many of the current recommendations reflect that trade-off: Restricting login access primarily reduces risk, but requiring review of confirmation requests constrains autonomy.”

    These are two of OpenAI's recommendations to help users reduce their own risks, and a spokesperson said Atlas is also trained to obtain confirmation from users before sending messages or making payments. OpenAI also suggests that users give the agent specific instructions, rather than giving the agent access to their inbox and telling them to “perform the required action.”

    According to OpenAI, “wide tolerance makes it easier for hidden or malicious content to impact agents, even when safety measures are in place.”

    OpenAI says protecting Atlas users from prompt injections is a top priority, but McCarthy is skeptical about the return on investment for the risk-prone browser.

    “For most everyday use cases, agent browsers still don't provide enough value to justify their current risk profile,” McCarthy told TechCrunch. “Even though that access is what makes them powerful, given their access to sensitive data such as email and payment information, the risks are high. That balance will evolve, but the trade-offs are still very real today.”

    attacks browsers injection Openai Prompt vulnerable
    Share. Facebook Twitter Email
    Previous ArticleMetaX, Moore Threads IPO exploded, but it's not easy for foreigners to participate
    Next Article Nicki Minaj praises 'handsome' Donald Trump at Turning Point USA event – National

    Related Posts

    A tough week for hardware companies

    December 22, 2025

    Google and Apple reportedly warned employees with visas to avoid traveling abroad

    December 21, 2025

    Resolve AI, a startup led by former Splunk executives, reaches $1 billion Series A valuation

    December 20, 2025
    Add A Comment
    Leave A Reply Cancel Reply

    Top Posts

    Tiktok now allows users to send voice memos and images via DMS

    August 29, 2025

    Review Week: Meta reveals Oakley Smart Glasses

    June 21, 2025

    Here are our biggest takeaways from the 24-hour “Vibe Coding” hackathon

    October 23, 2025

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    About us
    About us

    Owen Daily is a dynamic digital platform dedicated to delivering timely and insightful news across a spectrum of topics, including world affairs, business, politics, technology, health, and entertainment. Our mission is to bridge the gap between global developments and local perspectives, providing our readers with a comprehensive understanding of the events shaping our world.​

    Most Popular

    Tiktok now allows users to send voice memos and images via DMS

    August 29, 2025

    Review Week: Meta reveals Oakley Smart Glasses

    June 21, 2025

    Here are our biggest takeaways from the 24-hour “Vibe Coding” hackathon

    October 23, 2025

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    © 2025 Owen Daily. All Rights Reserved.
    • About Us
    • Contact us
    • Privacy Policy
    • Terms and Conditions
    • Disclaimer

    Type above and press Enter to search. Press Esc to cancel.