OpenAI
openai.com › security-and-privacy
Security | OpenAI
Visit our security portal to learn more about our security controls and compliance activities. ... The OpenAI API and ChatGPT business plans undergo regular third-party penetration testing to identify security weaknesses before they can be exploited by malicious actors.
TechCrunch
techcrunch.com › home › openai says ai browsers may always be vulnerable to prompt injection attacks
OpenAI says AI browsers may always be vulnerable to prompt injection attacks | TechCrunch
2 days ago - OpenAI isn’t alone in recognizing that prompt-based injections aren’t going away. The U.K.’s National Cyber Security Centre earlier this month warned that prompt injection attacks against generative AI applications “may never be totally mitigated,” putting websites at risk of falling victim to data breaches.
Videos
08:14
OpenAI Delays Open Model: Safety Concerns or Something Else? - YouTube
03:08
Adaptive: OpenAI's Investment for AI Cyber Threats. Next-Generation ...
20:36
OpenAI’s privacy disaster (it isn’t their fault) - YouTube
Massive domain hijacking exploitation, OpenAI ChatGPT ...
03:15
What experts are saying about security concerns between Apple, ...
OpenAI
openai.com › index › update-on-safety-and-security-practices
An update on our safety & security practices | OpenAI
It will oversee, among other things, the safety and security processes guiding OpenAI’s model development and deployment. The Safety and Security Committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.
OpenAI
openai.com › policies › supplier-security-measures
Supplier Security Measures | OpenAI
If the services that Supplier provides involve Access to Covered Data of OpenAI or its affiliates, then Supplier represents and warrants that: (i) neither it nor any of its affiliates is or will be organized or chartered in a Country of Concern, has or will have its principal place of business in a Country of Concern, or is or will be 50% or more owned, directly or indirectly, individually or in the aggregate, by one or more Countries of Concern or Covered Persons; and (ii) neither Supplier, nor any of its affiliates, nor any employee or contractor of Supplier who has Access to such Covered Data is or will be located in a Country of Concern, has been determined by the U.S.
Venturebeat
venturebeat.com › security › openai-admits-that-prompt-injection-is-here-to-stay
OpenAI admits prompt injection is here to stay as enterprises lag on defenses | VentureBeat
13 hours ago - What concerns security leaders is the gap between this reality and enterprise readiness. A VentureBeat survey of 100 technical decision-makers found that 34.7% of organizations have deployed dedicated prompt injection defenses. The remaining 65.3% either haven't purchased these tools or couldn't confirm they have. The threat is now officially permanent. Most enterprises still aren’t equipped to detect it, let alone stop it. OpenAI's defensive architecture deserves scrutiny because it represents the current ceiling of what's possible.
Fortune
fortune.com › 2025 › 12 › 23 › openai-ai-browser-prompt-injections-cybersecurity-hackers
OpenAI says prompt injections that can trick AI browsers may never be fully 'solved'
2 days ago - For example, an attacker could embed hidden commands in a webpage—perhaps in text that is invisible to the human eye but looks legitimate to an AI—that override a user’s instructions and tell an agent to share a user’s emails, or drain someone’s bank account. Following the launch of OpenAI’s ChatGPT Atlas browser in October, several security researchers demonstrated how a few words hidden in a Google Doc or clipboard link could manipulate the AI agent’s behavior.
Gizmodo
gizmodo.com › openais-outlook-on-ai-browser-security-is-bleak-but-maybe-a-little-more-ai-can-fix-it-2000702902
OpenAI's Outlook on AI Browser Security Is Bleak, but Maybe a Little More AI Can Fix It
1 day ago - OpenAI said on Monday that prompt injection attacks, a cybersecurity risk unique to AI agents, are likely to remain a long-term security challenge.
Reddit
reddit.com › r/cybersecurity › why do people trust openai but panic over deepseek
r/cybersecurity on Reddit: Why do people trust openAI but panic over deepseek
February 11, 2025 -
Just noticed something weird. I’ve been talking about the risks of sharing data with ChatGPT since all that info ultimately goes to OpenAI, but most people seem fine with it as long as they’re on the enterprise plan. Suddenly, DeepSeek comes along, and now everyone’s freaking out about security.
So, is it only a problem when the data is in Chinese servers? Because let’s be real—everyone’s using LLMs at work and dropping all kinds of sensitive info into prompts.
How’s your company handling this? Are there actual safeguards, or is it just trust?
Top answer 1 of 105
331
Trust? We trust no one. And we trust China even less.
2 of 105
281
Because let’s be real—everyone’s using LLMs at work and dropping all kinds of sensitive info into prompts. AAAAAHHHHH I know OTHER people are doing that, but I'm incredibly thankful that my org isn't doing this and has taken a very hard line on LLMs since day 1 - only the locally hosted one is allowed, no data out, and every means of accessing others is blocked except for a cleared dev group on a moderately careful basis. Edit: We have standard DLP measures in place, what I mean to convey above is we have a default block policy for known LLM domains, and our own locally hosted one most users are encouraged towards. That's all, it's not fancy.
Axios
axios.com › 2025 › 12 › 10 › openai-new-models-cybersecurity-risks
Exclusive: Future OpenAI models likely to pose "high" cybersecurity risk, it says
2 weeks ago - What they're saying: "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," OpenAI's Fouad Matin told Axios in an exclusive interview. These kinds of brute force attacks that rely on this extended time are more easily defended, Matin says. "In any defended environment this would be caught pretty easily," he added. The big picture: Leading models are getting better at finding security vulnerabilities — and not just models from OpenAI.