🌐
OpenAI
openai.com › security-and-privacy
Security | OpenAI
Visit our security portal to learn more about our security controls and compliance activities. ... The OpenAI API and ChatGPT business plans undergo regular third-party penetration testing to identify security weaknesses before they can be exploited by malicious actors.
🌐
TechCrunch
techcrunch.com › home › openai says ai browsers may always be vulnerable to prompt injection attacks
OpenAI says AI browsers may always be vulnerable to prompt injection attacks | TechCrunch
2 days ago - OpenAI isn’t alone in recognizing that prompt-based injections aren’t going away. The U.K.’s National Cyber Security Centre earlier this month warned that prompt injection attacks against generative AI applications “may never be totally mitigated,” putting websites at risk of falling victim to data breaches.
🌐
OpenAI
openai.com › index › update-on-safety-and-security-practices
An update on our safety & security practices | OpenAI
It will oversee, among other things, the safety and security processes guiding OpenAI’s model development and deployment. The Safety and Security Committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.
🌐
Gizmodo
gizmodo.com › openais-outlook-on-ai-browser-security-is-bleak-but-maybe-a-little-more-ai-can-fix-it-2000702902
OpenAI's Outlook on AI Browser Security Is Bleak, but Maybe a Little More AI Can Fix It
1 day ago - OpenAI said on Monday that prompt injection attacks, a cybersecurity risk unique to AI agents, are likely to remain a long-term security challenge.
🌐
SoftKraft
softkraft.co › openai-data-security
Is OpenAI Safe? - A Practical Look at OpenAI Data Security
September 12, 2024 - OpenAI's methodology, which involves ... requirements. A central concern regarding GDPR compliance is the potential use of personal data without explicit consent....
🌐
OpenAI
platform.openai.com › docs › guides › safety-best-practices
Safety best practices | OpenAI API
Learn how to implement safety measures like moderation, adversarial testing, human oversight, and prompt engineering to ensure responsible AI deployment.
🌐
The Verge
theverge.com › ai › tech › openai
OpenAI is plagued by safety concerns | The Verge
July 12, 2024 - The stakes around safety, according to OpenAI and others studying the emergent technology, are immense. “Current frontier AI development poses urgent and growing risks to national security,” a report commissioned by the US State Department ...
🌐
TechCrunch
techcrunch.com › home › openai tightens the screws on security to keep away prying eyes
OpenAI tightens the screws on security to keep away prying eyes | TechCrunch
July 9, 2025 - The beefed-up security includes “information tenting” policies that limit staff access to sensitive algorithms and new products, the report said. For example, during development of OpenAI’s o1 model, only verified team members who had ...
🌐
OpenAI
openai.com › policies › supplier-security-measures
Supplier Security Measures | OpenAI
If the services that Supplier provides involve Access to Covered Data of OpenAI or its affiliates, then Supplier represents and warrants that: (i) neither it nor any of its affiliates is or will be organized or chartered in a Country of Concern, has or will have its principal place of business in a Country of Concern, or is or will be 50% or more owned, directly or indirectly, individually or in the aggregate, by one or more Countries of Concern or Covered Persons; and (ii) neither Supplier, nor any of its affiliates, nor any employee or contractor of Supplier who has Access to such Covered Data is or will be located in a Country of Concern, has been determined by the U.S.
Find elsewhere
🌐
Financial Times
ft.com › content › f896c4d9-bab7-40a2-9e67-4058093ce250
OpenAI clamps down on security after foreign spying threats
July 8, 2025 - Artificial intelligence group has added fingerprint scans and hired military experts to protect important data
🌐
News18
news18.com › tech
OpenAI Is Worried About Security Risks With AI Browsers, Uses AI To Fight The Threat | Tech News - News18
1 day ago - It is quite clear that OpenAI is not assuring users about the risks posed by these attacks on its AI browser but any long term fix for these risks will help in a big way, not only for Atlas but other similar browsers.
🌐
OpenAI
openai.com › index › mixpanel-incident
What to know about a recent Mixpanel security incident | OpenAI
... The information that may have been affected here could be used as part of phishing or social engineering attacks against you or your organization. Since names, email addresses, and OpenAI API metadata (e.g., user IDs) were included, we encourage ...
🌐
Venturebeat
venturebeat.com › security › openai-admits-that-prompt-injection-is-here-to-stay
OpenAI admits prompt injection is here to stay as enterprises lag on defenses | VentureBeat
10 hours ago - What concerns security leaders is the gap between this reality and enterprise readiness. A VentureBeat survey of 100 technical decision-makers found that 34.7% of organizations have deployed dedicated prompt injection defenses. The remaining 65.3% either haven't purchased these tools or couldn't confirm they have. The threat is now officially permanent. Most enterprises still aren’t equipped to detect it, let alone stop it. OpenAI's defensive architecture deserves scrutiny because it represents the current ceiling of what's possible.
🌐
Technology Org
technology.org › 2025 › 12 › 23 › ai-browsers-permanent-security-weakness-prompt-injection
OpenAI Says AI Browsers May Face Permanent Security Weakness, Vulnerability to Prompt Injection Attacks
2 days ago - OpenAI admits Atlas AI browser and similar tools will never be fully secure from prompt injection attacks that exploit hidden commands in web content.
🌐
Chatbase
chatbase.co › blog › is-openai-safe
Is OpenAI Safe? Privacy and Data Concerns
OpenAI states that all customer data is encrypted both in transit and at rest. This prevents unauthorized access to sensitive information. They also comply with SOC 2 Type 2 standards.
🌐
Fortune
fortune.com › 2025 › 12 › 23 › openai-ai-browser-prompt-injections-cybersecurity-hackers
OpenAI says prompt injections that can trick AI browsers may never be fully 'solved'
1 day ago - However, some cybersecurity experts are skeptical that OpenAI’s approach can address the fundamental problem. “What concerns me is that we’re trying to retrofit one of the most security-sensitive pieces of consumer software with a technology that’s still probabilistic, opaque, and easy ...
🌐
Reddit
reddit.com › r/cybersecurity › why do people trust openai but panic over deepseek
r/cybersecurity on Reddit: Why do people trust openAI but panic over deepseek
February 11, 2025 -

Just noticed something weird. I’ve been talking about the risks of sharing data with ChatGPT since all that info ultimately goes to OpenAI, but most people seem fine with it as long as they’re on the enterprise plan. Suddenly, DeepSeek comes along, and now everyone’s freaking out about security.

So, is it only a problem when the data is in Chinese servers? Because let’s be real—everyone’s using LLMs at work and dropping all kinds of sensitive info into prompts.

How’s your company handling this? Are there actual safeguards, or is it just trust?

🌐
Axios
axios.com › 2025 › 12 › 10 › openai-new-models-cybersecurity-risks
Exclusive: Future OpenAI models likely to pose "high" cybersecurity risk, it says
2 weeks ago - What they're saying: "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," OpenAI's Fouad Matin told Axios in an exclusive interview. These kinds of brute force attacks that rely on this extended time are more easily defended, Matin says. "In any defended environment this would be caught pretty easily," he added. The big picture: Leading models are getting better at finding security vulnerabilities — and not just models from OpenAI.
🌐
Reuters
reuters.com › business › openai-warns-new-models-pose-high-cybersecurity-risk-2025-12-10
OpenAI warns new models pose 'high' cybersecurity risk | Reuters
2 weeks ago - OpenAI on Wednesday warned that its upcoming artificial intelligence models could pose a "high" cybersecurity risk, as their capabilities advance rapidly.
🌐
Digit
digit.in › home › openai admits ai browsers may never fully escape prompt injection attacks
OpenAI admits AI browsers may never fully escape prompt injection attacks
2 days ago - OpenAI says prompt injection attacks remain unsolved, long-term security risk for AI-powered browsers like its Atlas agent, despite ongoing defensive upgrades.