I say this with all the kindness in the world: why the fuck would you burn Claude Code tokens running Playwright tests? Why would you not just write the tests and then have them run in your CI/CD pipelines?

(Answer from Deleted User on reddit.com)
r/ClaudeCode on Reddit: CC using Playwright directly is vastly superior to Playwright MCP
August 27, 2025

Playwright MCP uses a session and prevents proper cache clearing. How many times did Claude tell me "Perfect Deployment!" only to open the console and see a row of errors? It's all about control and caching. Claude does just fine writing its own Playwright scripts. I can't see any use for the MCP at this point. Tell me if I'm wrong.
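For context, the kind of script Claude writes in this workflow is just an ordinary @playwright/test spec that runs with npx playwright test locally or in CI, with no MCP session involved. A minimal sketch; the URL and the console-error assertion are illustrative assumptions, not anything from the post:

```typescript
import { test, expect } from '@playwright/test';

// Placeholder URL: point this at the deployment under test.
const APP_URL = 'https://example.com';

test('deployment loads without console errors', async ({ page }) => {
  const consoleErrors: string[] = [];

  // Capture anything the page logs at "error" level, plus uncaught exceptions.
  page.on('console', (msg) => {
    if (msg.type() === 'error') consoleErrors.push(msg.text());
  });
  page.on('pageerror', (err) => consoleErrors.push(err.message));

  await page.goto(APP_URL);
  await expect(page).toHaveTitle(/.+/); // basic smoke check

  // Fail here instead of declaring "Perfect Deployment!" over a broken console.
  expect(consoleErrors, `Console errors:\n${consoleErrors.join('\n')}`).toEqual([]);
});
```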

r/ClaudeAI on Reddit: Claude code + playwright mcp - how did you speed up the browser interactions
May 17, 2025

I have successfully integrated the Playwright MCP (the Microsoft one, adding its tools) with Claude Code. We can now write a prompt and pass it to the Claude Code headless CLI. However, the browser navigation is quite slow; for example, it takes more than 4 seconds for Claude Code to log in with a username and password.

How did you speed up the process? I am using WSL2.

Thanks in advance
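One general Playwright technique for the slow-login problem, offered as a sketch rather than an MCP-specific fix: log in once, save the browser's storage state to disk, and reuse it so later runs start already authenticated. The URL, selectors, and environment variable names below are placeholders:

```typescript
import { chromium } from 'playwright';

// Placeholder values: substitute your app's URL, credentials, and selectors.
const LOGIN_URL = 'https://example.com/login';
const STATE_FILE = 'auth-state.json';

async function main() {
  const browser = await chromium.launch();

  // First run: perform the slow username/password login once...
  const loginContext = await browser.newContext();
  const loginPage = await loginContext.newPage();
  await loginPage.goto(LOGIN_URL);
  await loginPage.fill('#username', process.env.TEST_USER ?? '');
  await loginPage.fill('#password', process.env.TEST_PASS ?? '');
  await loginPage.click('button[type="submit"]');
  await loginPage.waitForURL('**/dashboard');

  // ...then persist cookies and localStorage to disk.
  await loginContext.storageState({ path: STATE_FILE });
  await loginContext.close();

  // Later runs: start from the saved state and skip the login form entirely.
  const fastContext = await browser.newContext({ storageState: STATE_FILE });
  const page = await fastContext.newPage();
  await page.goto('https://example.com/dashboard');

  await browser.close();
}

main();
```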

r/ClaudeAI on Reddit: Claude Code + Playwright MCP = real browser testing inside Claude
October 17, 2025

I’ve been messing around with the new Playwright MCP inside Claude Code and it’s honestly wild.
It doesn’t just simulate tests or spit out scripts — it actually opens a live Chromium browser that you can watch while it runs your flow.

I set it up to test my full onboarding process:
signup → verification → dashboard → first action.
Claude runs the flow step by step, clicks through everything, fills the forms, waits for network calls, takes screenshots if something breaks. You literally see the browser moving like an invisible QA engineer.

No config, no npm, no local setup. You just say what you want to test and it does it.
You can even ask it to export the script if you want to run the same test locally later, but honestly the built-in one is enough for quick checks.
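For anyone curious what an exported script for a flow like that tends to look like, here is a rough sketch as a @playwright/test spec; every URL, label, and selector is a made-up placeholder, not the poster's actual app:

```typescript
import { test, expect } from '@playwright/test';

// All URLs, labels, and selectors below are hypothetical placeholders.
test('onboarding: signup, verification, dashboard, first action', async ({ page }) => {
  // Signup
  await page.goto('https://example.com/signup');
  await page.getByLabel('Email').fill('qa+onboarding@example.com');
  await page.getByLabel('Password').fill('correct-horse-battery-staple');
  await page.getByRole('button', { name: 'Create account' }).click();

  // Verification step (however the app surfaces it)
  await expect(page.getByText('Check your inbox')).toBeVisible();
  await page.goto('https://example.com/verify?token=TEST_TOKEN'); // placeholder token

  // Dashboard
  await page.waitForURL('**/dashboard');
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();

  // First action
  await page.getByRole('button', { name: 'New project' }).click();
  await expect(page.getByText('Project created')).toBeVisible();
});
```

Screenshots on failure would typically come from Playwright config (screenshot: 'only-on-failure' under use) rather than from the test body.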

Watching it run was kind of surreal — it caught two console errors and one broken redirect that I hadn’t noticed before.
This combo basically turns Claude Code into a test runner with eyes.

If you’re building web stuff, try enabling the Playwright MCP in Claude Code.
It’s the first time I’ve seen an AI actually use a browser in front of me and do proper end-to-end testing.

r/ClaudeAI on Reddit: Claude Code — Seamless MCP Server Setup (Playwright, Memory, Serena, Sequential Thinking)
October 23, 2025

Hey folks,

Following up on my earlier post, I’ve been documenting everything I learned about Claude Code — massive thanks to everyone who shared feedback and ideas. It’s been super helpful.

Just pushed a round of new updates focused on making MCP server setup seamless:

  • Step-by-step installation guides for popular servers — Playwright, Memory, Sequential Thinking, and more

  • A consolidated troubleshooting guide for common issues across all MCP integrations

  • Short, focused use-case breakdowns

The goal here was to make getting started with MCP servers as frictionless as possible — from install to real usage. Each guide includes working config examples and fixes for the most common setup pitfalls.

📘 Repo: Claude Code — Everything You Need to Know

If you’re looking to extend Claude Code with MCP servers, these additions should help you make better decisions while saving tokens and cost.

Feedback and contributions always welcome.

r/ClaudeCode on Reddit: What MCPs are you using with Claude Code right now?
November 1, 2025

I’ve been using a few MCPs in my setup lately, mainly Context 7, Supabase, and Playwright.

I'm just curious to know what others here are finding useful. Which MCPs have actually become part of your daily workflow with Claude Code? I don’t want to miss out on any good ones others are using.

Also, is there anything you feel is still missing, as in an MCP you wish existed for a repetitive or annoying task?

r/Anthropic on Reddit: Using playwright MCP for testing Claude code
August 3, 2025

Now, I just write the tests, and before running them, I ask CC to use the Puppeteer MCP to actually go perform the exact behaviour I wrote the test for, and then run the test to check that it works. The second I used this, it found that I had written wrong tests (well, CC only wrote them), and it then re-configured the tests. Now the tests just work. What a cool way to give Claude context of the UI.
Made a lightweight Playwright skill for Claude Code (way less context than MCP) : r/ClaudeAI
October 20, 2025

Had the exact same token bloat issue when we were prototyping browser automation at Notte. We ended up going a different route with semantic page understanding, but for testing and automation scenarios where you don't need the AI to "see" the page content, just writing the automation code makes total sense. The 314-line instruction approach is clever too; it keeps the context lean until you actually need the full docs loaded.

... Can you please stop assuming that you can replace MCP with skills? Skills also add a lot of context, and if you use Playwright without MCP, it'll forward a lot of verbose logs, which inherently fills up the context.
r/ClaudeCode on Reddit: Playwright / Puppeteer MCP Token usage
August 25, 2025

It seems that using Playwright or Puppeteer MCPs actually gives Claude Code “eyes” and “hands,” making it much easier to close the loop of coding → testing → validating → refactoring. However, the token consumption is massive, and after just one or two tests, the chat gets compacted. I tried delegating this to a subagent, but the results weren’t great. Do you have any tips for handling this? I’m also considering testing the browser-use MCP (https://github.com/browser-use/browser-use) - maybe I’ll give it a shot later today. Thanks!
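One partial mitigation, sketched here as plain @playwright/test configuration rather than an MCP fix: keep the console output terse and write the detail to files, so the agent reads a short pass/fail summary instead of streaming every browser event into its context. The file paths are arbitrary:

```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Terse console output for the agent to read...
  reporter: [
    ['dot'],
    // ...with full detail written to a file it can open only when needed.
    ['json', { outputFile: 'test-results/results.json' }],
  ],
  use: {
    // Keep heavyweight artifacts only when something actually fails.
    screenshot: 'only-on-failure',
    trace: 'retain-on-failure',
  },
});
```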

r/ClaudeAI on Reddit: Have Claude Code Really Look at Your Site With Playwrite
July 20, 2025

I had never heard of or used Playwright until I just had issues with my Next.js project using Tailwind 4, where CC kept doing version 3 related implementations.

Suddenly Claude Code installed Playwright and, instead of just checking the code, it literally looked at my site through tests to confirm: hey, the problem this dude has been saying is a problem, guess what, it doesn't work!!!

Here's a link to it: https://playwright.dev/

Sorry if I sound new, but I'm not; I've been studying and coding for years, I've just never heard of this, especially for use with Claude Code.

Is everyone using this already??

r/ClaudeAI on Reddit: What's the best most reliable MCP to let Claude Code scrape a website?
July 24, 2025

I am doing a website migration from one CMS to the other, and have started using Claude to automate a lot of it.

However, I'm looking for a browser agent that lets Claude explore a website I give it.

Any recommendations? I largely just need content. I know Playwright is widely recommended, but I'm not too sure if it's overkill, since it eats up a lot of tokens.

Top answer (1 of 6):
My opinion: the Firecrawl MCP server, https://github.com/mendableai/firecrawl-mcp-server. I admittedly used its search tool (firecrawl_search) more than anything else, but I saw that it also has other tools such as "crawl", "scrape", "map", and "extract". You need an account to create an API key; there is a free tier of 500 credits (per month, I think). It caught my attention because I was trying to use Claude to help me run a job search against job boards. I found this MCP server to be a huge improvement over the native web search function, since the "search" tool allowed me to simultaneously search and scrape content with one tool, which eases token usage.

For my own practical usage, though, I did eventually run out of credits and paid $19 to try it for a month (3,000 credits, 1 scraped result = 1 credit). You might have to pay either way, but if you intend to keep crawling sites, it might be worth the price for efficiency. There is some jank though: they document a "batch_scrape" tool, but I found no such tool in the code.
Answer 2 of 6:
Yeah, Playwright is super powerful but can definitely feel like overkill if you just need content scraping without all the browser automation bells and whistles. For reliable content scraping with Claude Code, I'd suggest trying out tools like Puppeteer or even simpler HTTP scraping MCPs if your target sites are mostly static. They tend to be more token-friendly since they don't render full browsers unless needed.

Also, check out Datalayer. It's not a scraper itself, but it pairs amazingly well with MCP scraping tools by helping you manage scraped data over sessions, keep your workspace state consistent, and avoid redundant scrapes. It can really help keep your automation clean and efficient, especially when you're juggling multiple scraping tasks or need to process the content over time. If your site has lots of JS or dynamic content, Playwright might still be worth it, but layering it with Datalayer for state management can save you a lot of headaches and token costs in the long run!
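To make the "simpler HTTP scraping" point concrete, here is a rough no-browser sketch for mostly static, server-rendered pages; it uses plain Node 18+ fetch, will miss anything rendered client-side, and the regex extraction is deliberately crude:

```typescript
// Minimal static-page scrape: no browser, no MCP session, far fewer tokens.
// Good enough for server-rendered content; JS-heavy sites still need Playwright.
async function scrapePage(url: string): Promise<{ title: string; text: string }> {
  const res = await fetch(url, { headers: { 'User-Agent': 'cms-migration-scraper' } });
  if (!res.ok) throw new Error(`Fetch failed: ${res.status} ${res.statusText}`);

  const html = await res.text();

  // Crude extraction; a real migration would use an HTML parser instead.
  const title = html.match(/<title[^>]*>([^<]*)<\/title>/i)?.[1]?.trim() ?? '';
  const text = html
    .replace(/<script[\s\S]*?<\/script>/gi, '')
    .replace(/<style[\s\S]*?<\/style>/gi, '')
    .replace(/<[^>]+>/g, ' ')
    .replace(/\s+/g, ' ')
    .trim();

  return { title, text };
}

scrapePage('https://example.com').then((result) => console.log(result.title));
```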
r/ClaudeAI on Reddit: claude code mcp windows?
July 17, 2025

I'm not a coder or anything, just playing around for my own fun, so sorry if it's a stupid question.

But I'm trying to move to and use CC on Windows rather than WSL, and any MCP I try to install fails to connect.

Installation command, for example: claude mcp add playwright npx @playwright/mcp@latest

Any idea why? Is there a problem with MCP support on Windows for now?

r/ClaudeAI on Reddit: After 3 months of Claude Code CLI: my "overengineered" setup that actually ships production code
6 days ago

Three months ago I switched from Cursor to Claude Code CLI. Thought I'd share what my setup looks like now and get some feedback on what I might be missing.

Context: I'm a non-CS background dev (sales background, learned to code 2 years ago) building B2B software in a heavily regulated space (EU manufacturing, GDPR). So my setup is probably overkill for most people, but maybe useful for others in similar situations.

The Setup

Core:

- Claude Code CLI in terminal (tried the IDE plugins, prefer the raw CLI)

- Max subscription (worth it for the headroom on complex tasks)

- Windows 11 + PowerShell (yes, really)

MCP Servers (4 active):

Server | Why I use it
filesystem | Safer file operations than raw bash
git | Quick rollbacks when the agent breaks things
sequential-thinking | Forces step-by-step reasoning on complex refactors
playwright | E2E test automation

Browser Automation:

- Google Antigravity for visual testing

- Claude for Chrome (can control it from CLI now, game changer)

Custom Skills I Wrote

This is where it gets interesting. Claude Code lets you define custom skills that auto-activate based on context. Here's what I built:

Skill | Trigger | What it does
code-quality-gate | Before any deploy | 5-stage checks: pre-commit → PR → preview → E2E → pro
strict-typescript-mode | Any .ts/.tsx file | Blocks the any type, enforces generics, suggests type guards (see the sketch after this table)
multi-llm-advisor | Architecture decisions | Queries Gemini + OpenAI for alternative approaches
secret-scanner | Pre-commit hook | Catches API keys, passwords, tokens before they hit git
gdpr-compliance-scanner | EU projects | Checks data residency, PII handling, consent flows
gemini-image-ge | On demand | Generates images via Gemini API without leaving CLI
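As a concrete illustration of the strict-typescript-mode row above (hypothetical code, not the author's actual skill): the kind of type guard it would nudge you toward instead of a cast to any:

```typescript
// Hypothetical example of the narrowing a strict-TypeScript skill might suggest.
interface ApiError {
  code: number;
  message: string;
}

// Type guard: narrows unknown to ApiError without any casts leaking outward.
function isApiError(value: unknown): value is ApiError {
  return (
    typeof value === 'object' &&
    value !== null &&
    typeof (value as Record<string, unknown>).code === 'number' &&
    typeof (value as Record<string, unknown>).message === 'string'
  );
}

// Usage: with strict settings, catch clauses receive unknown, not any.
try {
  // ... some API call
} catch (err) {
  if (isApiError(err)) {
    console.error(`API error ${err.code}: ${err.message}`);
  } else {
    throw err;
  }
}
```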

The multi-llm-advisor has been surprisingly useful. When Claude suggests an architecture, I have it ask Gemini and GPT-4 "what would you do differently?" Catches blind spots I'd never notice.

The Secret Sauce: CLAUDE.md

This file changed everything. It's ~500 lines of project-specific instructions that the agent reads on every prompt. Key sections:

  1. No-Touch Zones

NEVER modify without explicit permission:

- api/auth.ts (authentication)

- api/analyze.ts (core business logic)

- vercel.json (deployment config)

Without this, the agent would "helpfully" refactor my auth code while fixing an unrelated bug. Ask me how I know.

2. Quality Gates

Before ANY commit:

  1. npm run build - MUST succeed

  2. npm run test - All tests pass

  3. npx tsc --noEmit - Zero TypeScript errors

The agent checks these automatically now. Catches ~80% of issues before I even review.

3. Regression Prevention Rules

- ONE change at a time

- List all affected files BEFORE writing code

- If touching more than 3 files, stop and ask

This stopped the "I'll just clean up this code while I'm here" behavior that caused so many bugs.

What Actually Changed My Workflow

  1. "Vibe coding" with guardrails

I describe what I want in natural language. The agent builds it. But the CLAUDE.md rules prevent it from going off the rails. Best of both worlds.

2. The iteration loop

Agent writes code → runs tests → tests fail → agent reads error → fixes → repeat. I just watch until it's green or stuck. Most features ship without me writing a line.

3. Browser-in-the-loop testing

Agent makes UI change → opens Chrome → visually verifies → iterates. Still fails ~30% of the time but when it works, it's magic.

4. Fearless refactoring

With git MCP + quality gates + no-touch zones, I let the agent do refactors I'd never attempt manually. Worst case, git reset --hard and try again.

What Still Sucks (being honest here):

- Setup time: Took 2-3 weeks to dial in. Not beginner friendly at all.

- Browser automation reliability: Antigravity rate limits, Claude for Chrome loses context, ~30% failure rate on complex flows.

- Token usage: Max helps but big refactors can still burn through quota fast.

- Windows quirks: Some MCP servers assume Unix. Had to patch a few things.

- Agent overconfidence: Sometimes it says "done!" when it clearly isn't. Trust but verify.

Questions for This Community

  1. MCP servers: Anyone using others I should try? Especially interested in database or API testing servers.

  2. Preventing scope creep: How do you stop the agent from "improving" code you didn't ask it to touch? My no-touch zones help but curious about other approaches.

  3. Browser automation: Anyone found something more reliable than Antigravity for visual testing?

  4. CLAUDE.md patterns: Would be curious to see how others structure theirs. Happy to share my full file if there's interest.

TL;DR: Claude Code CLI + MCP servers + custom skills + strict CLAUDE.md rules = actual production-ready code from "vibe coding". Took weeks to set up but now I ship faster than I ever did manually. :)

Using Playwright MCP with Claude Code | Simon Willison’s TILs
With the MCP loaded you can run /mcp and then navigate to playwright to view all available tools. Here's the full list: ... You don't have to reference these by name, Claude should usually be smart enough to pick the right one for the task at hand.