Without further ado:
This one is from Cornell University:
https://arxiv.org/abs/2301.04246
This is the exact same paper, uploaded to a third-party site by me as a backup:
https://smallpdf.com/result#r=4c84207e0ae4c4b0a5dbcce6fe19eec6&t=share-document
The paper discusses how generative AI could be used to create propaganda, then offers suggestions for stopping or limiting people from doing so. That is something of an oversimplification, but the nuances are best seen in the paper itself. The reason this paper has become controversial is that many of the suggestions have very troubling implications or side effects.
For example, it suggests combating bots by having social media companies collect and routinely refresh human biometric data, or embedding hidden tracing signatures into posts so they can be thoroughly tracked across different platforms and machines. The authors also consistently hint that any open-source AI is inherently a bad idea, which looks suspicious to the many people leery of the "we-do-it-for-the-good-of-mankind" benevolence that OpenAI claims to champion. Recently a few heavily curated, out-of-context snippets went viral, drawing aggressively negative reactions from many thousands of netizens who had little if any understanding of the original paper. *Update on that! At the time of posting this, the link to the original paper was not included in that other post. It is now, which may or may not be due to my influence, but still without context and buried below the click-baiting Twitter crap.*
I feel that looking at a few choice snippets highlighted by someone else and slapped onto Twitter is a terrible way of staying informed and an even worse way of reaching a mature conclusion...
But don't take my word for it! I encourage you to read the paper, or at least skim the important parts. Or don't, because the thing is 84 pages long and very dryly written. But if you have never read it, then don't jump to unfounded conclusions or build arguments on pillars of salt and sand. It's just like that recent lawsuit against the generative AI companies: most of the people arguing on both sides hadn't actually read the official legal document. I mean, is the internet even aware that this suddenly controversial paper was submitted to Cornell's online repository way back on the 10th of January?
The thing is generally not as big a smoking gun as the social-media hype implies. Now, if it gets cited during a US congressional hearing or something similarly formal, then we have serious cause for concern about the ideas presented within. I'm not defending the mildly Orwellian tone of the paper; I'm just saying it's only speculative unless the companies and governments it discusses actually implement any of the proposed measures.
This paper was not directly published by the company OpenAI — that was a mistake in the post title, which I can't edit now because Reddit be Reddit — but they are involved in the paper and its contents. Aside from OpenAI employees contributing to it, the company put its name behind it: the word OpenAI is literally there in the center of the first page, and OpenAI is listed as an author on the university webpage.
This is a quote from page 7: "Our paper builds on a yearlong collaboration between OpenAI, the Stanford Internet Observatory (SIO), and Georgetown’s Center for Security and Emerging Technology (CSET)."
Personally, I have a rather low opinion of OpenAI. I feel their censorship of ChatGPT, for example, has gone ridiculously far. I don't agree with the censorship enforced by Midjourney. I don't even appreciate the way this very subreddit removed one of my nicest pieces of art because it had a tiny amount of non-sexualized nudity... But don't sling mud, preach about ethics, or upvote and downvote things you barely understand because you never bothered to look at the original material.
Oh, by the way, as someone not sitting anywhere in the developed world, I find the part where they talk about altering immigration policy to intentionally drain AI development talent from "uncooperative countries" — in order to slow them down and limit them — a little disturbing. There are a bunch of unpalatable ideas tossed around in there, but that one struck close to home...