🌐
OpenAI
openai.com › security-and-privacy
Security | OpenAI
Visit our security portal to learn more about our security controls and compliance activities. ... The OpenAI API and ChatGPT business plans undergo regular third-party penetration testing to identify security weaknesses before they can be exploited by malicious actors.
🌐
TechCrunch
techcrunch.com › home › openai says ai browsers may always be vulnerable to prompt injection attacks
OpenAI says AI browsers may always be vulnerable to prompt injection attacks | TechCrunch
2 days ago - OpenAI isn’t alone in recognizing that prompt-based injections aren’t going away. The U.K.’s National Cyber Security Centre earlier this month warned that prompt injection attacks against generative AI applications “may never be totally mitigated,” putting websites at risk of falling victim to data breaches.
🌐
OpenAI
openai.com › index › update-on-safety-and-security-practices
An update on our safety & security practices | OpenAI
It will oversee, among other things, the safety and security processes guiding OpenAI’s model development and deployment. The Safety and Security Committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.
🌐
SoftKraft
softkraft.co › openai-data-security
Is OpenAI Safe? - A Practical Look at OpenAI Data Security
September 12, 2024 - OpenAI's methodology, which involves ... requirements. A central concern regarding GDPR compliance is the potential use of personal data without explicit consent....
🌐
OpenAI
platform.openai.com › docs › guides › safety-best-practices
Safety best practices | OpenAI API
Learn how to implement safety measures like moderation, adversarial testing, human oversight, and prompt engineering to ensure responsible AI deployment.
🌐
The Verge
theverge.com › ai › tech › openai
OpenAI is plagued by safety concerns | The Verge
July 12, 2024 - The stakes around safety, according to OpenAI and others studying the emergent technology, are immense. “Current frontier AI development poses urgent and growing risks to national security,” a report commissioned by the US State Department ...
🌐
TechCrunch
techcrunch.com › home › openai tightens the screws on security to keep away prying eyes
OpenAI tightens the screws on security to keep away prying eyes | TechCrunch
July 9, 2025 - The beefed-up security includes “information tenting” policies that limit staff access to sensitive algorithms and new products, the report said. For example, during development of OpenAI’s o1 model, only verified team members who had ...
🌐
OpenAI
openai.com › policies › supplier-security-measures
Supplier Security Measures | OpenAI
If the services that Supplier provides involve Access to Covered Data of OpenAI or its affiliates, then Supplier represents and warrants that: (i) neither it nor any of its affiliates is or will be organized or chartered in a Country of Concern, has or will have its principal place of business in a Country of Concern, or is or will be 50% or more owned, directly or indirectly, individually or in the aggregate, by one or more Countries of Concern or Covered Persons; and (ii) neither Supplier, nor any of its affiliates, nor any employee or contractor of Supplier who has Access to such Covered Data is or will be located in a Country of Concern, has been determined by the U.S.
🌐
Venturebeat
venturebeat.com › security › openai-admits-that-prompt-injection-is-here-to-stay
OpenAI admits prompt injection is here to stay as enterprises lag on defenses | VentureBeat
13 hours ago - What concerns security leaders is the gap between this reality and enterprise readiness. A VentureBeat survey of 100 technical decision-makers found that 34.7% of organizations have deployed dedicated prompt injection defenses. The remaining 65.3% either haven't purchased these tools or couldn't confirm they have. The threat is now officially permanent. Most enterprises still aren’t equipped to detect it, let alone stop it. OpenAI's defensive architecture deserves scrutiny because it represents the current ceiling of what's possible.
Find elsewhere
🌐
Financial Times
ft.com › content › f896c4d9-bab7-40a2-9e67-4058093ce250
OpenAI clamps down on security after foreign spying threats
July 8, 2025 - Artificial intelligence group has added fingerprint scans and hired military experts to protect important data
🌐
OpenAI
openai.com › index › mixpanel-incident
What to know about a recent Mixpanel security incident | OpenAI
... The information that may have been affected here could be used as part of phishing or social engineering attacks against you or your organization. Since names, email addresses, and OpenAI API metadata (e.g., user IDs) were included, we encourage ...
🌐
News18
news18.com › tech
OpenAI Is Worried About Security Risks With AI Browsers, Uses AI To Fight The Threat | Tech News - News18
1 day ago - It is quite clear that OpenAI is not assuring users about the risks posed by these attacks on its AI browser but any long term fix for these risks will help in a big way, not only for Atlas but other similar browsers.
🌐
Technology Org
technology.org › 2025 › 12 › 23 › ai-browsers-permanent-security-weakness-prompt-injection
OpenAI Says AI Browsers May Face Permanent Security Weakness, Vulnerability to Prompt Injection Attacks
2 days ago - OpenAI admits Atlas AI browser and similar tools will never be fully secure from prompt injection attacks that exploit hidden commands in web content.
🌐
Fortune
fortune.com › 2025 › 12 › 23 › openai-ai-browser-prompt-injections-cybersecurity-hackers
OpenAI says prompt injections that can trick AI browsers may never be fully 'solved'
2 days ago - For example, an attacker could embed hidden commands in a webpage—perhaps in text that is invisible to the human eye but looks legitimate to an AI—that override a user’s instructions and tell an agent to share a user’s emails, or drain someone’s bank account. Following the launch of OpenAI’s ChatGPT Atlas browser in October, several security researchers demonstrated how a few words hidden in a Google Doc or clipboard link could manipulate the AI agent’s behavior.
🌐
Gizmodo
gizmodo.com › openais-outlook-on-ai-browser-security-is-bleak-but-maybe-a-little-more-ai-can-fix-it-2000702902
OpenAI's Outlook on AI Browser Security Is Bleak, but Maybe a Little More AI Can Fix It
1 day ago - OpenAI said on Monday that prompt injection attacks, a cybersecurity risk unique to AI agents, are likely to remain a long-term security challenge.
🌐
Chatbase
chatbase.co › blog › is-openai-safe
Is OpenAI Safe? Privacy and Data Concerns
OpenAI states that all customer data is encrypted both in transit and at rest. This prevents unauthorized access to sensitive information. They also comply with SOC 2 Type 2 standards.
🌐
Reddit
reddit.com › r/cybersecurity › why do people trust openai but panic over deepseek
r/cybersecurity on Reddit: Why do people trust openAI but panic over deepseek
February 11, 2025 -

Just noticed something weird. I’ve been talking about the risks of sharing data with ChatGPT since all that info ultimately goes to OpenAI, but most people seem fine with it as long as they’re on the enterprise plan. Suddenly, DeepSeek comes along, and now everyone’s freaking out about security.

So, is it only a problem when the data is in Chinese servers? Because let’s be real—everyone’s using LLMs at work and dropping all kinds of sensitive info into prompts.

How’s your company handling this? Are there actual safeguards, or is it just trust?

🌐
Axios
axios.com › 2025 › 12 › 10 › openai-new-models-cybersecurity-risks
Exclusive: Future OpenAI models likely to pose "high" cybersecurity risk, it says
2 weeks ago - What they're saying: "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," OpenAI's Fouad Matin told Axios in an exclusive interview. These kinds of brute force attacks that rely on this extended time are more easily defended, Matin says. "In any defended environment this would be caught pretty easily," he added. The big picture: Leading models are getting better at finding security vulnerabilities — and not just models from OpenAI.
🌐
Reuters
reuters.com › business › openai-warns-new-models-pose-high-cybersecurity-risk-2025-12-10
OpenAI warns new models pose 'high' cybersecurity risk | Reuters
2 weeks ago - OpenAI on Wednesday warned that its upcoming artificial intelligence models could pose a "high" cybersecurity risk, as their capabilities advance rapidly.
🌐
Fortune
fortune.com › 2025 › 10 › 23 › cybersecurity-vulnerabilities-openai-chatgpt-atlas-ai-browser-leak-user-data-malware-prompt-injection
Experts warn OpenAI’s ChatGPT Atlas has security vulnerabilities that could turn it against users—revealing sensitive data, downloading malware, and worse | Fortune
October 23, 2025 - Cybersecurity experts are warning that OpenAI’s new browser, ChatGPT Atlas, could be vulnerable to malicious attacks that could turn AI assistants against users, potentially stealing sensitive data or even draining their bank accounts.