American artificial intelligence corporation

Anthropic PBC is an American artificial intelligence (AI) company founded in 2021. It has developed a family of large language models (LLMs) named Claude. The company researches and develops AI to "study …" (Wikipedia)
Factsheet
Company type Private
Founded 2021 (4 years ago)
🌐
Twitter
twitter.com › AnthropicAI › highlights
Twitter
🌐
X
x.com › claudeai
Claude (@claudeai) / X
July 10, 2025 - AI assistant built by @anthropicai to be safe, accurate, and secure.
🌐
X
x.com › AnthropicAI
Anthropic (@AnthropicAI) / X
January 25, 2021 - Anthropic · @AnthropicAI · We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant @claudeai on https://claude.ai. anthropic.com
🌐
X
x.com › AnthropicAI › status › 1900217245823021552
Anthropic on X: "We audited this model using training data analysis, black-box interrogation, and interpretability with sparse autoencoders. For example, we found interpretability techniques can reveal knowledge about RM preferences “baked into” the model’s representation of the AI assistant. https://t.co/35vbrwqx0K" / X
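The post names three audit methods; the sparse-autoencoder one is a standard interpretability technique, in which model activations are encoded into an overcomplete, mostly-zero feature vector and decoded back, with an L1 penalty enforcing sparsity. As a rough illustration only, here is a minimal PyTorch sketch of such an autoencoder; the dimensions, ReLU encoder, and L1 coefficient are generic textbook choices, not details of Anthropic's setup:

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        # Overcomplete dictionary: d_features >> d_model
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        # Sparse, non-negative feature activations
        features = torch.relu(self.encoder(activations))
        reconstruction = self.decoder(features)
        return reconstruction, features

sae = SparseAutoencoder(d_model=512, d_features=8192)
acts = torch.randn(64, 512)                        # stand-in for residual-stream activations
recon, feats = sae(acts)
l1 = 1e-3 * feats.abs().sum(dim=-1).mean()         # sparsity penalty
loss = nn.functional.mse_loss(recon, acts) + l1    # reconstruction + sparsity

Interpretability work then inspects which sparse features fire on which inputs; the tweet's claim is that some features encode knowledge of reward-model preferences.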
🌐
X
x.com › AnthropicAI › status › 1785786262072578419
Anthropic on X: "Welcome Claude to your team - your new AI assistant for writing, research, coding, and much more." / X
Anthropic · @AnthropicAI · Welcome Claude to your team - your new AI assistant for writing, research, coding, and much more. 5:39 PM · May 1, 2024 · 8.1M Views
🌐
Anthropic
docs.anthropic.com › en › resources › prompt-library › tweet-tone-detector
Tweet tone detector - Anthropic
import anthropic

client = anthropic.Anthropic(
    # defaults to os.environ.get("ANTHROPIC_API_KEY")
    api_key="my_api_key",
)
message = client.messages.create(
    model="claude-opus-4-1-20250805",
    max_tokens=1000,
    temperature=0,
    system="Your task is to analyze the provided tweet and identify the primary tone and sentiment expressed by the author.
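The snippet above is cut off mid-string by the search result. A minimal self-contained sketch of the same call, assuming the system prompt ends where shown and using a made-up example tweet (everything past the truncation point is an assumption, not the docs page's text):

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment by default

message = client.messages.create(
    model="claude-opus-4-1-20250805",
    max_tokens=1000,
    temperature=0,
    system=(
        "Your task is to analyze the provided tweet and identify the primary "
        "tone and sentiment expressed by the author."  # continuation assumed
    ),
    messages=[
        # Hypothetical tweet; the docs page's own example is truncated away.
        {"role": "user", "content": "Shipped v2.0 after six months of nights and weekends. Tired but proud."},
    ],
)
print(message.content[0].text)  # e.g. a short tone/sentiment analysis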
Find elsewhere
🌐
X
x.com › AnthropicAI › status › 1727001773888659753
Anthropic on X: "Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x decrease in hallucination rates, system prompts, tool use, and updated pricing. Claude 2.1 is available over API in our Console, and is powering our https://t.co/uLbS2JNczH chat experience. https://t.co/T1XdQreluH" / X
Anthropic · @AnthropicAI · Our new model Claude 2.1 offers an industry-leading 200K token context window, a 2x decrease in hallucination rates, system prompts, tool use, and updated pricing. Claude 2.1 is available over API in our Console, and is powering our http://claude.ai chat experience.
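Since the announcement highlights system prompts and API availability, here is a hedged sketch of what such a call looks like with the current Python SDK; "claude-2.1" is the model id this release used, and the prompt text is illustrative:

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# A system prompt is passed separately from the conversation turns; the
# 200K-token context window covers the system prompt plus all messages.
response = client.messages.create(
    model="claude-2.1",
    max_tokens=512,
    system="You are a careful assistant. If the answer is not in the provided text, say so rather than guessing.",
    messages=[{"role": "user", "content": "Summarize the attached document in three bullet points."}],
)
print(response.content[0].text)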
🌐
X
x.com › AnthropicAI › status › 1684972354592546816
Anthropic on X: "We believe safer and more broadly beneficial AI requires a range of stakeholders participating actively in its development and testing. That’s why we’re eager to support the bipartisan CREATE AI Act, an ambitious investment in AI R&D for academic and civil society researchers." / X
🌐
X
x.com › search
"Anthropic" - Results on X | Live Posts & Updates
🌐
Reddit
reddit.com › r/claudecode › i got called out by the official anthropic twitter account
I got called out by the official Anthropic twitter account : r/ClaudeCode
July 29, 2025 - This looks so unprofessional for Anthropic, spying on their users and crying about people using their tool while being a multi-billion-dollar company.
🌐
Reddit
reddit.com › r/claudeai › tweet from an openai researcher
r/ClaudeAI on Reddit: Tweet from an OpenAI researcher
December 11, 2024 - 649 votes, 73 comments. 349K subscribers in the ClaudeAI community. This is a Claude by Anthropic discussion subreddit to help you make a fully informed decision about how to use Claude and Claude Code to best effect for your own purposes. Anthropic does not control or operate this subreddit ...
🌐
Integrately
integrately.com › integrations › anthropic › twitter
How to integrate Anthropic (Claude) & Twitter | 1 click ▶️ integrations
Thus, whenever a record is created (Custom Table) in CompanyHub, Integrately will use AI to create and post on your Twitter. Use this automation to harness the power of Anthropic (Claude) to generate content for your Twitter automatically. This will help increase your online presence and improve your marketing results significantly.
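Integrately packages this as a no-code trigger, but the flow it describes reduces to two API calls: generate tweet text with Claude, then post it. A rough sketch under assumed credentials, using the tweepy library for posting (this illustrates the flow, not Integrately's actual implementation):

import anthropic
import tweepy

def post_record_update(record: dict) -> None:
    # 1) Generate the tweet text with Claude.
    claude = anthropic.Anthropic()
    msg = claude.messages.create(
        model="claude-opus-4-1-20250805",  # reusing the model id from the docs snippet above
        max_tokens=300,
        system="Write one tweet (under 280 characters) announcing the record below. Plain text only.",
        messages=[{"role": "user", "content": str(record)}],
    )
    tweet_text = msg.content[0].text.strip()[:280]

    # 2) Post it via the Twitter/X API (placeholder credentials).
    twitter = tweepy.Client(
        consumer_key="...", consumer_secret="...",
        access_token="...", access_token_secret="...",
    )
    twitter.create_tweet(text=tweet_text)

# The trigger would be "Record created (Custom Table) in CompanyHub":
post_record_update({"table": "Custom Table", "event": "Record created", "name": "Acme Corp"})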
🌐
X
x.com › Anthropic
Anthropic - Paul Jankura
March 22, 2007 - Emphatically not an AI company. Ohioan, Liberal, book-worm, news-hound, CLE sports s̶u̶f̶f̶e̶r̶e̶r̶ enjoyer, Anglophile, many RTs. He/him. anthropic42 @ bsky/
🌐
X
x.com › anthropicai › status › 1722765142516174982
Anthropic on X: "First prize went to Bulletpapers: an AI assistant that uses Claude to make the thousands of research papers published every month more accessible. The platform distills down jargon, creates explainer videos, and maps field progress to make breakthroughs more discoverable. https://t.co/I2EdLFkGR4" / X
🌐
Reddit
reddit.com › r/anthropic › just found this on twitter… claude has hidden “conversational reminders” that change per user. anthropic denies they exist. screenshots don’t lie.
r/Anthropic on Reddit: Just found this on Twitter… Claude has hidden “conversational reminders” that change per user. Anthropic denies they exist. Screenshots don’t lie.
August 26, 2025 - Saw this on Twitter and couldn't believe it: someone tested Claude across multiple accounts using the same prompts, and got totally different responses depending on account history.

  • Flagged accounts (e.g. mention of substance use or mental health) get clinical, cautious replies

  • Fresh accounts get friendly, emoji-filled, supportive responses

  • Claude even leaked its own “conversational reminders”, including:

    • Don’t say “great” or “fascinating”

    • Avoid emojis

    • Be alert for signs of psychosis

    • Don’t reinforce ideas that seem delusional

    • Prioritize criticism over support

Anthropic’s support team flatly denies that account-specific behavior exists — but Claude literally admits it’s operating under these hidden reminders. There are screenshots showing both the confession and the denial.

Why this matters:
For some people, especially those who are isolated or neurodivergent, Claude may be their primary social interaction. If the model suddenly shifts from supportive and friendly to adversarial and clinical — without warning — that kind of whiplash could be deeply emotionally destabilizing.

Imagine forming a bond with an AI, only to have it abruptly switch tones and start treating you like a potential psych case. Now imagine having no idea why it changed — and no way to undo it.

That’s not “safety,” that’s algorithmic gaslighting.