🌐
OpenAI
openai.com › security-and-privacy
Security | OpenAI
Visit our security portal to learn more about our security controls and compliance activities. ... The OpenAI API and ChatGPT business plans undergo regular third-party penetration testing to identify security weaknesses before they can be exploited by malicious actors.
🌐
TechCrunch
techcrunch.com › home › openai says ai browsers may always be vulnerable to prompt injection attacks
OpenAI says AI browsers may always be vulnerable to prompt injection attacks | TechCrunch
2 days ago - OpenAI isn’t alone in recognizing that prompt-based injections aren’t going away. The U.K.’s National Cyber Security Centre earlier this month warned that prompt injection attacks against generative AI applications “may never be totally mitigated,” putting websites at risk of falling victim to data breaches.
🌐
OpenAI
openai.com › index › update-on-safety-and-security-practices
An update on our safety & security practices | OpenAI
It will oversee, among other things, the safety and security processes guiding OpenAI’s model development and deployment. The Safety and Security Committee will be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed.
🌐
The Verge
theverge.com › ai › tech › openai
OpenAI is plagued by safety concerns | The Verge
July 12, 2024 - The stakes around safety, according to OpenAI and others studying the emergent technology, are immense. “Current frontier AI development poses urgent and growing risks to national security,” a report commissioned by the US State Department ...
🌐
Gizmodo
gizmodo.com › openais-outlook-on-ai-browser-security-is-bleak-but-maybe-a-little-more-ai-can-fix-it-2000702902
OpenAI's Outlook on AI Browser Security Is Bleak, but Maybe a Little More AI Can Fix It
1 day ago - OpenAI said on Monday that prompt injection attacks, a cybersecurity risk unique to AI agents, are likely to remain a long-term security challenge.
🌐
SoftKraft
softkraft.co › openai-data-security
Is OpenAI Safe? - A Practical Look at OpenAI Data Security
September 12, 2024 - OpenAI's methodology, which involves ... requirements. A central concern regarding GDPR compliance is the potential use of personal data without explicit consent....
🌐
TechCrunch
techcrunch.com › home › openai tightens the screws on security to keep away prying eyes
OpenAI tightens the screws on security to keep away prying eyes | TechCrunch
July 9, 2025 - The beefed-up security includes “information tenting” policies that limit staff access to sensitive algorithms and new products, the report said. For example, during development of OpenAI’s o1 model, only verified team members who had ...
🌐
Chatbase
chatbase.co › blog › is-openai-safe
Is OpenAI Safe? Privacy and Data Concerns
OpenAI states that all customer data is encrypted both in transit and at rest. This prevents unauthorized access to sensitive information. They also comply with SOC 2 Type 2 standards.
🌐
Axios
axios.com › 2025 › 12 › 10 › openai-new-models-cybersecurity-risks
Exclusive: Future OpenAI models likely to pose "high" cybersecurity risk, it says
2 weeks ago - What they're saying: "What I would explicitly call out as the forcing function for this is the model's ability to work for extended periods of time," OpenAI's Fouad Matin told Axios in an exclusive interview. These kinds of brute force attacks that rely on this extended time are more easily defended, Matin says. "In any defended environment this would be caught pretty easily," he added. The big picture: Leading models are getting better at finding security vulnerabilities — and not just models from OpenAI.
Find elsewhere
🌐
Financial Times
ft.com › content › f896c4d9-bab7-40a2-9e67-4058093ce250
OpenAI clamps down on security after foreign spying threats
July 8, 2025 - Artificial intelligence group has added fingerprint scans and hired military experts to protect important data
🌐
OpenAI
platform.openai.com › docs › guides › safety-best-practices
Safety best practices | OpenAI API
Learn how to implement safety measures like moderation, adversarial testing, human oversight, and prompt engineering to ensure responsible AI deployment.
🌐
News18
news18.com › tech
OpenAI Is Worried About Security Risks With AI Browsers, Uses AI To Fight The Threat | Tech News - News18
1 day ago - It is quite clear that OpenAI is not assuring users about the risks posed by these attacks on its AI browser but any long term fix for these risks will help in a big way, not only for Atlas but other similar browsers.
🌐
Fortune
fortune.com › 2025 › 12 › 23 › openai-ai-browser-prompt-injections-cybersecurity-hackers
OpenAI says prompt injections that can trick AI browsers may never be fully 'solved'
1 day ago - For example, an attacker could embed hidden commands in a webpage—perhaps in text that is invisible to the human eye but looks legitimate to an AI—that override a user’s instructions and tell an agent to share a user’s emails, or drain someone’s bank account. Following the launch of OpenAI’s ChatGPT Atlas browser in October, several security researchers demonstrated how a few words hidden in a Google Doc or clipboard link could manipulate the AI agent’s behavior.
🌐
OpenAI
openai.com › policies › supplier-security-measures
Supplier Security Measures | OpenAI
If the services that Supplier provides involve Access to Covered Data of OpenAI or its affiliates, then Supplier represents and warrants that: (i) neither it nor any of its affiliates is or will be organized or chartered in a Country of Concern, has or will have its principal place of business in a Country of Concern, or is or will be 50% or more owned, directly or indirectly, individually or in the aggregate, by one or more Countries of Concern or Covered Persons; and (ii) neither Supplier, nor any of its affiliates, nor any employee or contractor of Supplier who has Access to such Covered Data is or will be located in a Country of Concern, has been determined by the U.S.
🌐
Reddit
reddit.com › r/cybersecurity › why do people trust openai but panic over deepseek
r/cybersecurity on Reddit: Why do people trust openAI but panic over deepseek
February 11, 2025 -

Just noticed something weird. I’ve been talking about the risks of sharing data with ChatGPT since all that info ultimately goes to OpenAI, but most people seem fine with it as long as they’re on the enterprise plan. Suddenly, DeepSeek comes along, and now everyone’s freaking out about security.

So, is it only a problem when the data is in Chinese servers? Because let’s be real—everyone’s using LLMs at work and dropping all kinds of sensitive info into prompts.

How’s your company handling this? Are there actual safeguards, or is it just trust?

🌐
OpenAI Developer Community
community.openai.com › t › dealing-with-cybersecurity-concerns-from-a-misinformed-it-department › 702227
Dealing with cybersecurity concerns from a misinformed IT department - Community - OpenAI Developer Community
March 30, 2024 - For context: I work as a faculty member at a medical school. I have been using ChatGPT since March 2023 with over 1260 conversations. My primary use is in research and medical education.I subscribed to GPTPlus in September 2023. My school does not have a formal AI or GenAI policy despite my ...
🌐
Microsoft Learn
learn.microsoft.com › en-us › azure › ai-foundry › responsible-ai › openai › data-privacy
Data, privacy, and security for Azure Direct Models in Microsoft Foundry - Microsoft Foundry | Microsoft Learn
Foundry is an Azure service; Microsoft hosts the Azure Direct Models in Microsoft's Azure environment and Azure Direct Models do NOT interact with any services operated by Azure Direct Model providers, for example, OpenAI (e.g.
🌐
Reuters
reuters.com › business › openai-warns-new-models-pose-high-cybersecurity-risk-2025-12-10
OpenAI warns new models pose 'high' cybersecurity risk | Reuters
2 weeks ago - OpenAI on Wednesday warned that its upcoming artificial intelligence models could pose a "high" cybersecurity risk, as their capabilities advance rapidly.
🌐
Fortune
fortune.com › 2025 › 10 › 23 › cybersecurity-vulnerabilities-openai-chatgpt-atlas-ai-browser-leak-user-data-malware-prompt-injection
Experts warn OpenAI’s ChatGPT Atlas has security vulnerabilities that could turn it against users—revealing sensitive data, downloading malware, and worse | Fortune
October 23, 2025 - Cybersecurity experts are warning that OpenAI’s new browser, ChatGPT Atlas, could be vulnerable to malicious attacks that could turn AI assistants against users, potentially stealing sensitive data or even draining their bank accounts.
🌐
Forbes
forbes.com › forbes homepage › leadership › careers
OpenAI Data Breach Exposes User Data. Here’s What To Do Immediately
3 weeks ago - Even though prompts were not exposed, enough data was leaked to result in potential “credible-looking phishing attempts,” says OpenAI. Cisco’s Cybersecurity Readiness Index (2025) highlighted that workers are often the “enemy within” that can upend organizational security.