Launched a tiny marketplace yesterday. Day 1. Built it with Claude. I barely know how to code, so this is already a miracle.
Within hours, my server got slammed. CPU shot up, Vercel + AWS freaked out, and it looks like a DDoS. I have no idea what’s happening. I’m panicking.
No ads, no hype. I just wanted a few people to try it. The app lets brands rent Twitter/X header space from creators.
I’m a solo builder. Early MVP. Invite-only. Nothing sensitive. No payments yet. And yet my server is crying, and I’m sitting here wondering if I accidentally summoned the internet’s wrath.
Please, if you’ve launched something public, even a tiny MVP, tell me:
Is this normal day-1 chaos or an actual attack?
What’s the bare minimum I should do to stop it from breaking?
What’s normal noise vs something to panic about?
Honestly, I feel like I’m juggling flaming servers while blindfolded, and I’m losing money. Any advice would literally save my sanity.
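For reference, a minimal sketch of what a "bare minimum" defense like per-IP rate limiting can look like, assuming a plain Node/Express server rather than the actual Vercel + AWS setup described above (it would need adapting to that stack). express and express-rate-limit are real npm packages; the 60-requests-per-minute figure is an arbitrary placeholder, not a recommendation.

```typescript
// Minimal sketch only: assumes a Node/Express server.
// The window and max values are arbitrary placeholders.
import express from "express";
import rateLimit from "express-rate-limit";

const app = express();

// Cap each IP at 60 requests per minute; anything beyond that gets a 429
// instead of hitting your routes (and your CPU).
const limiter = rateLimit({
  windowMs: 60 * 1000,   // 1-minute window
  max: 60,               // max requests per IP per window
  standardHeaders: true, // send RateLimit-* headers so well-behaved clients back off
  legacyHeaders: false,
});

app.use(limiter);

app.get("/", (_req, res) => {
  res.send("ok");
});

app.listen(3000, () => console.log("listening on :3000"));
```

Even a crude cap like this separates "lots of curious visitors" from "one client hammering the server", because the logs will show who keeps hitting the 429s.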
I wanted to understand the most crime-prone areas of London. There is a Twitter page that documents crimes in London on a daily basis, and they always mention the area.
My prompt:
This is a Twitter page that posts news about crimes happening in London. Can you analyse all their tweets and give me a table of the areas they mention the most, including how many times each area is mentioned?
However, it just gave me the standard “I can’t access the internet” excuse. What can I use to sort this information out?
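For context, a minimal sketch of just the counting step, assuming the tweets have already been pulled down some other way (an archive export or a scraping tool) into a local file with one tweet per line. The file name tweets.txt and the short list of area names are made-up placeholders, and this doesn't solve the actual blocker, which is getting the tweets in the first place.

```typescript
// Minimal sketch only: counts area mentions in tweets that have ALREADY been
// exported to a local file (one tweet per line). It does not fetch anything
// from Twitter/X itself. "tweets.txt" and the area list are placeholders.
import { readFileSync } from "node:fs";

const tweets = readFileSync("tweets.txt", "utf8")
  .split("\n")
  .filter((line) => line.trim().length > 0);

// Hypothetical list of areas to look for; a real run would use a fuller list
// of London boroughs/neighbourhoods.
const areas = ["Croydon", "Hackney", "Brixton", "Newham", "Westminster"];

// Count how many tweets mention each area (case-insensitive, whole word).
const counts = new Map<string, number>();
for (const area of areas) {
  const pattern = new RegExp(`\\b${area}\\b`, "i");
  counts.set(area, tweets.filter((t) => pattern.test(t)).length);
}

// Print a simple frequency table, most-mentioned first.
const total = tweets.length;
[...counts.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([area, n]) => {
    const pct = total > 0 ? ((100 * n) / total).toFixed(1) : "0.0";
    console.log(`${area}\t${n}\t${pct}%`);
  });
```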
A lot of people are saying it's the ultimate tool for 'vibe coding,' and some even claim there's barely any need to write code by hand anymore.
I haven’t tried it myself since I can’t really afford it, so I’m not sure.
Saw this on Twitter and couldn’t believe it — someone tested Claude across multiple accounts using the same prompts, and got totally different responses depending on account history.
Flagged accounts (e.g. mention of substance use or mental health) get clinical, cautious replies
Fresh accounts get friendly, emoji-filled, supportive responses
Claude even leaked its own “conversational reminders”, including:
Don’t say “great” or “fascinating”
Avoid emojis
Be alert for signs of psychosis
Don’t reinforce ideas that seem delusional
Prioritize criticism over support
Anthropic’s support team flatly denies that account-specific behavior exists — but Claude literally admits it’s operating under these hidden reminders. There are screenshots showing both the confession and the denial.
Why this matters:
For some people, especially those who are isolated or neurodivergent, Claude may be their primary social interaction. If the model suddenly shifts from supportive and friendly to adversarial and clinical — without warning — that kind of whiplash could be deeply emotionally destabilizing.
Imagine forming a bond with an AI, only to have it abruptly switch tones and start treating you like a potential psych case. Now imagine having no idea why it changed — and no way to undo it.
That’s not “safety,” that’s algorithmic gaslighting.