🌐
OpenAI
openai.com › research
Research | OpenAI
OpenAI’s o-series models are advanced reasoning AI systems that use chain-of-thought processes to solve complex STEM problems through logical, step-by-step analysis. Our smartest and most capable models to date, with full tool access ... Our research on generative modeling for images has led to representation models like CLIP, which maps text and images into a shared representation an AI can work with, and DALL-E, a tool for creating vivid images from text descriptions.
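As a rough illustration of the text-image map CLIP provides, the following sketch scores an image against a few candidate captions. It assumes the Hugging Face transformers library and the openly released openai/clip-vit-base-patch32 checkpoint, neither of which is named in the result above.

```python
# Illustrative sketch (assumed stack: Hugging Face transformers + the
# openai/clip-vit-base-patch32 checkpoint): CLIP embeds text and images
# into one space, so an image can be scored against candidate captions.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # any local image file
captions = ["a photo of a dog", "a diagram of a neural network", "a city at night"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity; softmax turns it into a
# probability distribution over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.3f}  {caption}")
```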
🌐
OpenAI
openai.com › research › index › publication
OpenAI Research | Publication | OpenAI
Like OpenAI’s other models, the GPT-5.2 models were trained on diverse datasets, including information that is publicly available on the internet, information that we partner with third parties to access, and information that our users or human trainers and researchers provide or generate.
People also ask

How Accurate is Originality.AI AI Content Detection Tool?
On the latest OpenAI GPT-4 model we tested, Originality.AI was 99.37% accurate with a 1.56% false-positive rate on known human text. AI detection differs for every model. Detection rates when testing Originality.AI:
- GPT-3.5 Detection Accuracy: 99.9% accurate
- ChatGPT (GPT-4) Detection Accuracy: 83.29% accurate
- GPT-4 Detection Accuracy: 99.5% accurate
- Paraphrased (Quillbot) Detection Accuracy: 94.7% accurate
🌐
originality.ai
originality.ai › blog › openai-papers-list
OpenAI Publications and Papers – Originality.AI
How Does AI Detection Work?
Our internally built artificial intelligence uses supervised learning with multiple models, including a modified BERT model, to predict whether content is AI-generated or original. The AI has been trained on millions of records of both AI-generated and original content to tell the difference between the two. After each training session, a large test dataset is used to evaluate whether the new model is an improvement. (A minimal sketch of this kind of classifier follows this entry.)
🌐
originality.ai
originality.ai › blog › openai-papers-list
OpenAI Publications and Papers – Originality.AI
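A minimal sketch of the general recipe described in the answer above: fine-tune a BERT-style binary classifier on labeled human vs. AI text, then report accuracy and the false-positive rate. The checkpoint, data, and hyperparameters below are placeholders for illustration, not Originality.AI's actual system.

```python
# Toy sketch: fine-tune a BERT-style classifier to label text as human (0)
# or AI-generated (1). Checkpoint, data, and hyperparameters are placeholders.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy labeled data: label 0 = human-written, label 1 = AI-generated.
texts = [
    "I scribbled this note on the train this morning.",
    "As an AI language model, I can provide a concise summary.",
]
labels = torch.tensor([0, 1])

enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few toy epochs
    optimizer.zero_grad()
    out = model(**enc, labels=labels)
    out.loss.backward()
    optimizer.step()

# Evaluate on labeled examples (here just the toy batch again) and report
# accuracy and the false-positive rate: human text wrongly flagged as AI,
# the two numbers quoted in the answer above.
model.eval()
with torch.no_grad():
    preds = model(**enc).logits.argmax(dim=-1)
accuracy = (preds == labels).float().mean().item()
false_positives = ((preds == 1) & (labels == 0)).sum().item()
fpr = false_positives / max((labels == 0).sum().item(), 1)
print(f"accuracy={accuracy:.2%}, false positive rate={fpr:.2%}")
```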
Can Originality.AI Detect ChatGPT Content?
Yes, Originality.AI can detect ChatGPT content.
🌐
originality.ai
originality.ai › blog › openai-papers-list
OpenAI Publications and Papers – Originality.AI
🌐
OpenAI
openai.com › news › research
OpenAI Newsroom | Research | OpenAI
OpenAI · Evaluating chain-of-thought monitorability (Research, Dec 18, 2025) · Addendum to GPT-5.2 System Card: GPT-5.2-Codex (Publication, Dec 18, 2025) · Introducing GPT-5.2-Codex
🌐
Originality.AI
originality.ai › blog › openai-papers-list
OpenAI Publications and Papers – Originality.AI
August 21, 2025 - Affiliations: OpenAI, San Francisco, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA · Publisher: Institute of Electrical and Electronics Engineers · Authors: Zhou W.; Jalali S.; Maleki A. ... Abstract: This paper presents a correction to Theorem 2 in [1] which follows from fixing an error in Lemma 5 and a minor correction in the constant of Lemma 3.
🌐
arXiv
arxiv.org › abs › 2303.08774
[2303.08774] GPT-4 Technical Report
March 4, 2024 - Authors:OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, Red Avila, Igor Babuschkin, Suchir Balaji, Valerie Balcom, Paul Baltescu, Haiming Bao, Mohammad Bavarian, Jeff Belgum, Irwan Bello, Jake Berdine, Gabriel Bernadett-Shapiro, Christopher Berner, Lenny Bogdonoff, Oleg Boiko, Madelaine Boyd, Anna-Luisa Brakman, Greg Brockman, Tim Brooks, Miles Brundage, Kevin Button, Trevor Cai, Rosie Campbell, Andrew Cann, Brittany Carey, Chelsea Carlson, Rory Carmichael, Brooke Chan, Che Cha
🌐
OpenAI
cdn.openai.com › papers › gpt-4.pdf
GPT-4 Technical Report · OpenAI∗ · Abstract
URL https://openai.com/research/gpt-4. [66] Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, ...
🌐
SciSpace
scispace.com › institutions › openai-1jzs2u6c › 2020
SciSpace AI Research Agent | 150+ Tools, 280 M Papers
SciSpace AI Super Agent links 150+ research tools: search 280 M papers, run systematic reviews, draft manuscripts, and match journals, cutting research time by 90%. Try free.
🌐
OpenAI
openai.com › index › gpts-are-gpts
GPTs are GPTs: An early look at the labor market impact potential of large language models | OpenAI
OpenAI · March 17, 2023 · Publication · We investigate the potential implications of Generative Pre-trained Transformer (GPT) models and related technologies on the U.S. labor market.
🌐
ResearchGate
researchgate.net › publication › 375576216_Unlocking_the_Potential_of_OpenAI_A_Comprehensive_Exploration
(PDF) Unlocking the Potential of OpenAI: A Comprehensive Exploration
November 11, 2023 - In this paper, we address how Artificial General Intelligence (AGI) should be developed, deployed, and used in a way that preserves data privacy, reduces risks of AGI harm, and maximizes the positive use cases.
🌐
Mattprd
mattprd.com › p › openai-cofounder-27-papers-read-know-90-ai
OpenAI Cofounder: The 27 Papers to Read to Know 90% About AI
April 15, 2025 - These are the 27 papers that Ilya Sutskever, the cofounder of OpenAI, told John Carmack (the creator of Doom and a programming legend) to read if he wanted to very quickly get up to speed on AI and how it is being developed right now.
🌐
OpenAI
cdn.openai.com › papers › 22265bac-3191-44e5-b057-7aaacd8e90cd › paperbench.pdf
PaperBench: Evaluating AI’s Ability to Replicate AI Research
PaperBench: Evaluating AI’s Ability to Replicate AI Research · Giulio Starace*, Oliver Jaffe*, Dane Sherburn*, James Aung*, Chan Jun Shern*, Leon Maksin*, Rachel Dias*, Evan Mays, Benjamin Kinsella, Wyatt Thompson, Johannes Heidecke, Amelia Glaese, Tejal Patwardhan* ... PaperBench contains 8,316 individually gradable tasks. ... Pre-print. Copyright 2025 by the author(s). ...
🌐
OpenAI
openai.com › index › paperbench
PaperBench: Evaluating AI’s Ability to Replicate AI Research | OpenAI
Evaluating AI’s Ability to Replicate AI Research.
🌐
OpenAI
cdn.openai.com › research-covers › language-unsupervised › language_understanding_paper.pdf
Improving Language Understanding by Generative Pre-Training · Alec Radford, OpenAI
Journal of Machine Learning Research, 11(Feb):625–660, 2010. [16] S. Gray, A. Radford, and K. P. Diederik. Gpu kernels for block-sparse weights. 2017. [17] Z. He, S. Liu, M. Li, M. Zhou, L. Zhang, and H. Wang. Learning entity representation for entity disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers...
🌐
Nature
nature.com › news › article
OpenAI’s ‘deep research’ tool: is it useful for scientists?
February 6, 2025 - Technology giant OpenAI has unveiled a pay-for-access tool called ‘deep research’, which synthesizes information from dozens or hundreds of websites into a cited report several pages long.
🌐
GitHub
github.com › chenxingqiang › awsome-openai-paper
GitHub - chenxingqiang/awsome-openai-paper: an OpenAI paper search tool focused on authors affiliated with OpenAI.
https://openai.com/research/a-connection-between-generative-adversarial-networks-inverse-reinforcement-learning-and-energy-based-models
Starred by 14 users
Forked by 5 users
Languages   Python 82.8% | HTML 17.2%
🌐
OpenAI
openai.com › index › introducing-deep-research
Introducing deep research | OpenAI
I’m examining a likely collection page with 4 articles, considering plasmonic and metamaterial topics, and identifying key references from the European Materials Research Society 2012 Spring Meeting. ... I’m focusing on the 2012 conference proceedings in "Scientific Reports" from E-MRS, likely involving topics like "2D quasiperiodic plasmonic crystals" and "Layered plasmonic cloaks to tailor the optical scattering at the nanoscale." ... Thinking about special issue E-MRS 2012 Sci rep invited paper Monticone and metamaterials lab at News Archives – 2012.
🌐
Reddit
reddit.com › r/stablediffusion
r/StableDiffusion on Reddit: Here is the complete, original paper recently published by OpenAI that's causing waves, as a PDF file you can read online or download. Read things for yourself or the best you'll ever do is just parrot the opinions and conclusions of others!
February 16, 2023 -

Without any ado:

This one is from Cornell University:
https://arxiv.org/abs/2301.04246

This one is the exact same thing just uploaded to a third party website by myself as a backup:
https://smallpdf.com/result#r=4c84207e0ae4c4b0a5dbcce6fe19eec6&t=share-document

The paper discusses how generative AI could be used to create propaganda, and then gives suggestions about how to stop or limit people from doing so. That is somewhat of an oversimplification, but the nuances are best seen within the paper itself. The reason this paper has become controversial is that many of the suggestions have very troubling implications or side effects.

For example, it suggests combating bots by having social media companies collect and routinely refresh human biometric data, or incorporating behind-the-scenes tracing signatures into posts so that they can be very thoroughly tracked across different platforms and machines. They also consistently hint that any open-source AI is inherently a bad idea, which is suspicious in the eyes of many people leery of the "we-do-it-for-the-good-of-mankind" benevolence that OpenAI claims to stand for. Recently a few heavily curated, out-of-context snippets went viral, drawing aggressively negative reactions from many thousands of netizens who had little if any understanding of the original paper. *Update on that! At the time of posting this the link to the original paper was not included in that other post. It is now, which may or may not be due to my influence, but still without context and put below the click-baiting Twitter crap.*
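To make the "behind-the-scenes signature" idea concrete, here is a toy sketch in which a platform attaches an HMAC-based provenance tag to each post so it can later be re-verified or correlated across services sharing the key. This is purely illustrative and assumes a shared secret key; it is not a scheme specified in the paper.

```python
# Toy illustration (not from the paper) of a "behind-the-scenes" provenance
# signature: the platform signs the post text plus origin metadata with a
# secret key, and the resulting tag can later be re-verified or correlated
# across services that share the key.
import hashlib
import hmac
import json

PLATFORM_KEY = b"example-shared-secret"  # hypothetical shared key

def sign_post(text: str, user_id: str, client_id: str) -> dict:
    payload = json.dumps(
        {"text": text, "user": user_id, "client": client_id}, sort_keys=True
    ).encode()
    tag = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "user": user_id, "client": client_id, "sig": tag}

def verify_post(post: dict) -> bool:
    payload = json.dumps(
        {"text": post["text"], "user": post["user"], "client": post["client"]},
        sort_keys=True,
    ).encode()
    expected = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, post["sig"])

post = sign_post("Generated text goes here.", user_id="u123", client_id="device-7")
print(verify_post(post))  # True as long as the post and metadata are unchanged
```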

I feel that looking at a few choice snippets highlighted by someone else and slapped onto Twitter is a terrible way of staying informed and an even worse way of reaching a mature conclusion...

But don't take my word for it! I encourage you to read the paper, or at least skim the important parts. Or don't, because the thing is 84 pages long and very dryly written. But if you have never read it, then don't jump to unfounded conclusions or build arguments on pillars of salt and sand. It's just like that lawsuit a while back against the generative AI companies: most supporters on both sides hadn't actually read the official legal document. I mean, is the internet aware that this suddenly controversial paper was submitted to Cornell's online repository way back on the 10th of January?

The thing is generally not as big a smoking gun as the social-media hype implies. Now, if it gets cited during a US congressional hearing or something formal like that, we will have serious cause to be concerned about the ideas presented within. I'm not defending the mildly Orwellian tone of the paper; I'm just saying it's only speculative unless the companies and governments it discusses implement any of the proposed measures.

This paper was not directly published by the company OpenAI; that was a mistake in the post title, which I can't edit now because Reddit be Reddit. But they are involved in the paper and its contents. Aside from OpenAI employees contributing to the paper, the company put its name behind it: the word OpenAI is literally there in the center of the first page, and they are listed as an author on the university webpage.

This is a quote from page 7: "Our paper builds on a yearlong collaboration between OpenAI, the Stanford Internet Observatory (SIO), and Georgetown’s Center for Security and Emerging Technology (CSET)."

Personally, I have a rather low opinion of OpenAI. I feel their censorship of ChatGPT3, for example, has gone ridiculously too far. I don't agree with the censorship enforced by Midjourney. I don't even appreciate the way that this very subreddit removed one of my nicest pieces of art because it had a tiny amount of non-sexualized nudity... But don't sling mud around or preach about ethics or upvote or downvote things you barely understand because you never bothered to look at the original material.

Oh, by the way, as someone not sitting anywhere in the developed world, I find the part where they talk about altering immigration policy to intentionally drain AI development talent from "uncooperative countries" in order to slow them down and limit them to be a little disturbing. There are a bunch of unpalatable ideas tossed around in there but that one struck close to home...

Top answer
1 of 5
112
Since OP refuses to provide any summary of the content of the paper (but frustratingly talks around it for paragraphs and paragraphs), here's what I took from the executive summary:
- Generative language models make it easier to create propaganda.
- Due to the new capabilities provided by LLMs, there will be more propaganda of the usual kinds, higher-quality (on the axis of effectiveness) propaganda of the various kinds, as well as novel kinds such as personalized propaganda produced at scale.
- There are various potential mitigations at the various steps of propaganda creation, propagation, and consumption, but (editorial) I think the only ones that will work are for information nexuses to detect and suppress where possible, plus efforts to make potential propaganda consumers more savvy via education and tools in the vein of Snopes or the various sidebars and annotations social media sites use today.
2 of 5
76
I just read the entire paper, and while, on the surface, it purports to simply lay out the facts about potential risks and mitigations, it subtly advocates for access restriction as the best mitigation method. In section 5.3.2, they even give themselves (OpenAI) a big pat on the back for restricting GPT-2 and GPT-3 behind a paywall, as if they did so for the good of society rather than to make a profit. In addition, the proposal can only be effective so long as there are no publicly released models that are as effective and easy to use as those maintained by AI developers behind API restrictions. However, if public models are sufficient for propagandists, then this mitigation will likely be less effective.

In reality, access restriction is the least effective mitigation. All of the examples presented about past misinformation campaigns are either state-sponsored or led by large, well-funded organizations (e.g., the IRA). For any of these actors, the few-million-dollar cost of training their own model is trivial. Furthermore, bad actors with a common enemy are highly likely to share models amongst themselves.

Only in section 5.5.2, the very last subsection before their non-conclusive conclusions, do they briefly mention the only mitigation strategy with a valid chance to succeed: consumer-focused AI tools. As generative models get better at producing persuasive arguments that exploit viewer biases and blind spots, defensive generative models could be used to help users detect and explain flaws in tailored arguments, or to find artifacts in manipulated images. Generative models that help users find relevant information can also be trained to "show their work" by citing sources that support their answers.

I imagine a button next to tweets, for example, that would run the text through a neutral AI model that can point out erroneous information or logical fallacies. An AI fact-checker, if you will. Even better if there are many of these, and the consumer can check multiple sources before forming an opinion. This almost feels like they are promoting the open proliferation of AI for the purpose of defensive tooling (gasp!), which is why they quickly backtrack in the next paragraph and discuss why this probably won't work (i.e., the AI fact-checker will have its own biases). We can only trust these AI fact-checkers if they have "high-quality implementation," meaning that we should only trust AI created for us by our benevolent corporate overlords.

If you aren't yet convinced of the bias of the paper, skip to section 6 and read the conclusion. Only 3 of the mitigation strategies previously outlined are mentioned there: detection (impossible), corporate control of the models, and more government oversight. This is OpenAI blatantly campaigning against the very thing they claim to be all about: open AI.
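A toy sketch of the "AI fact-checker button" the commenter imagines, using the OpenAI Python client to flag dubious claims and logical fallacies in a post. The model name and prompt are placeholder assumptions, not anything proposed in the paper.

```python
# Hypothetical sketch of an "AI fact-checker button": send a post's text to
# an LLM and ask it to flag dubious claims and logical fallacies. The model
# name and prompt are placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fact_check(post_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "List any factual claims in the post that look dubious and "
                    "any logical fallacies, each with a one-line reason. "
                    "If nothing stands out, say so."
                ),
            },
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content

print(fact_check("Studies show 90% of experts agree, so it must be true."))
```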
🌐
Entrepreneur
entrepreneur.com › home › business news
OpenAI's New Deep Research AI Surfs the Web, Writes Papers
September 26, 2025 - OpenAI launched a new AI agent ... relevant. Based on its findings, Deep Research produces a comprehensive research paper with full citations that can sometimes run longer than 10,000 words....
🌐
WIRED
wired.com › business › artificial intelligence › openai offers a peek inside the guts of chatgpt
OpenAI Offers a Peek Inside the Guts of ChatGPT | WIRED
June 6, 2024 - Days after former employees said the company was being too reckless with its technology, OpenAI released a research paper on a method for reverse engineering the workings of AI models.
🌐
OpenAI
cdn.openai.com › papers › dall-e-3.pdf
Improving Image Generation with Better Captions · James Betker∗†
This paper focuses on evaluating the improved prompt following of DALL-E 3 as a result of training