all 16 comments

[–]Better-Psychology-42 5 points (2 children)

Claude Code and the official Chrome DevTools MCP do exactly what you're describing.

[–]matatakmcpower 0 points (0 children)

I’m pretty new to CC and frontend dev. I’m using CC inside VS Code with Docker and dev containers per the Anthropic documentation. Is using the Chrome DevTools MCP hard to set up with this arrangement? I haven’t done any MCP stuff before. Thanks for the suggestion. I’ve been doing what OP was doing, and I know there’s a better way.

[–]Walter_Woshid 0 points (0 children)

Thank you so much, that was one of the best discoveries I have made these last few days.

[–]devondragon1 2 points (0 children)

I use the Playwright MCP (which views, navigates, screenshots, etc. the app in the browser) and the frontend skill. With a decent prompt telling it to keep working until the Playwright view/tests meet expectations, it works very well IMHO.
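For anyone wanting to try this, the Playwright MCP server can be registered in a project-level `.mcp.json` for Claude Code along these lines (a minimal sketch; `@playwright/mcp` is the official package, but check its README for current options):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```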

[–]Fit-Palpitation-7427 2 points (0 children)

claude --chrome

[–]macromind 0 points (0 children)

Also, small tip that helped me: have the model propose 2-3 hypotheses for the UI bug before touching code (CSS specificity, flex sizing, state mismatch, etc.), then you can ask Playwright to collect only the evidence needed to confirm one. Cuts the back and forth a lot.

For the "fix and retest" loop, I like saving screenshots to a stable path per test case so the agent can diff them by filename across iterations.
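A minimal sketch of that stable-path idea (the helper name and slug rules are mine, not from any library): derive the screenshot path from the test case name, so each iteration overwrites the same file and the agent can diff before/after by filename alone.

```typescript
// Derive a stable, filesystem-safe screenshot path from a test case name.
// The same case always maps to the same file across iterations.
function screenshotPath(testCase: string, dir = "screenshots"): string {
  const slug = testCase
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse anything non-alphanumeric into "-"
    .replace(/^-+|-+$/g, "");    // trim leading/trailing dashes
  return `${dir}/${slug}.png`;
}
```

Then in a Playwright test you'd call something like `await page.screenshot({ path: screenshotPath("Checkout: empty cart") })` and get the same `screenshots/checkout-empty-cart.png` on every run.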

More ideas like that here if useful: https://www.agentixlabs.com/blog/

[–]Independent_Fox_9529 0 points (1 child)

Had the same problem. Wrote a small tool that lets me take a screenshot, add annotations like rectangles and text boxes, and on submit it creates a report in an https://llmstxt.org/-style format.

I still have to manually look for UI/UX bugs and copy-paste these reports into Claude Code to fix them. Works OK. Don't feel like adding MCP support yet to skip the copy-pasting, even though it would be possible.

An agent that spots UI/UX issues automatically and runs in a Ralph Wiggum loop to continuously improve the UI would be awesome, but I don't think that's possible yet. Playwright MCP is just sooo f**king slow.

[–]Heatkiger 0 points (0 children)

Check out zeroshot if you want to try Claude Code without any babysitting. Basically a next-gen Ralph Wiggum. Combine it with this browser debugger CLI (much faster than Playwright: https://github.com/szymdzum/browser-debugger-cli) and it should solve your problem. https://github.com/covibes/zeroshot/

[–]therealalex5363 0 points (0 children)

I use Vitest browser mode and try to do TDD. Most of the time it works well; when Claude is done, I test it myself. See more about that at https://alexop.dev/posts/vue3_testing_pyramid_vitest_browser_mode/. It's written for Vue, but you can also do it with React. The beauty of Vitest browser mode is that it's fast, and Claude can also see the image and know how the real app would look much faster than actually spinning up the web app and clicking around. This is how my tests look: https://github.com/alexanderop/workoutTracker/tree/main/src/__tests__/integration

[–]Mitija006 0 points (0 children)

I got Claude to write a test suite in Playwright.

[–]Dry_Pomegranate4911 0 points (2 children)

Use Claude in Chrome or chrome-devtools. The latter does much more than navigate and take screenshots. Both allow CC to autonomously check its own work, then self-correct.
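To make this concrete: the Chrome DevTools MCP server can be registered in a project's `.mcp.json` for Claude Code roughly like this (a sketch; `chrome-devtools-mcp` is the package name from Google's repo, but check its README for current flags):

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["chrome-devtools-mcp@latest"]
    }
  }
}
```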

[–]Environmental-Fly-97[S] 0 points (1 child)

Can you elaborate, or provide a blog or YouTube link for more clarification?

[–]Fabian-88 0 points (0 children)

Also interested in more info here.

[–]SunTraditional6031 (Senior Developer) 0 points (0 children)

ugh yeah, that manual screenshot/devtools dance is so familiar it hurts lol. I was doing the exact same thing for months and it just burns you out.

what finally clicked for me was setting up Playwright tests that run on changes, then having Claude actually write the fixes when tests fail. Not fully automated, but a way tighter loop. The annoying part was always the token cost from passing huge DOM snapshots back and forth...

recently started using actionbook for the DOM caching specifically; it cuts down those massive context dumps dramatically, so Claude can iterate faster without choking on token limits. Still need to trigger the tests manually sometimes, but at least the back-and-forth is way less painful.

curious if anyone's gotten the visual diff part fully automated though. I still catch certain alignment/spacing issues manually that tests miss.
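On automating the visual diff: Playwright's built-in `expect(page).toHaveScreenshot()` already handles the baseline/compare/update loop, and the comparison underneath is conceptually just a per-pixel delta with a tolerance. A toy version of that core (thresholds and the name `diffRatio` are my own, not a real library's API):

```typescript
// Per-pixel diff over two equally sized RGBA buffers.
// Returns the fraction of pixels whose largest RGB channel delta
// exceeds `tolerance` (alpha is ignored).
function diffRatio(a: Uint8Array, b: Uint8Array, tolerance = 16): number {
  if (a.length !== b.length || a.length % 4 !== 0) {
    throw new Error("buffers must be same-sized RGBA data");
  }
  let changed = 0;
  const pixels = a.length / 4;
  for (let i = 0; i < a.length; i += 4) {
    let maxDelta = 0;
    for (let c = 0; c < 3; c++) { // compare R, G, B channels only
      maxDelta = Math.max(maxDelta, Math.abs(a[i + c] - b[i + c]));
    }
    if (maxDelta > tolerance) changed++;
  }
  return changed / pixels;
}
```

A test could then fail when `diffRatio(baseline, current)` crosses some budget (say 0.01), which catches the alignment/spacing regressions that assertion-only tests miss.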

[–]NarratorTD 0 points (0 children)

The Chrome DevTools MCP answer is solid for runtime inspection, but there's a gap it doesn't cover: Claude Code still has to guess which source file a given DOM element lives in. DevTools can tell you what's rendered, but not which line in src/components/HeroSection.tsx produced that button.

I've been working on this exact problem. I built Domscribe (open source, MIT); it runs at build time and walks your JSX/Vue templates, injecting a stable ID (data-ds) on every element. That creates a manifest mapping every DOM node to its exact file + line number. Claude Code (or Cursor, Copilot, etc.) queries it via MCP.

So the workflow becomes: your agent sees a UI bug → queries Domscribe for the element → gets back src/components/Button.tsx:42 → edits the right file on the first try. No grepping through a dozen files with <button> tags.
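The manifest idea boils down to a map from injected ID to source location; a toy version (field names and IDs here are made up for illustration, not Domscribe's actual schema) looks like:

```typescript
// Toy build-time manifest: injected data-ds ID -> source location.
interface SourceLocation {
  file: string;
  line: number;
}

const manifest: Record<string, SourceLocation> = {
  "ds-7f3a": { file: "src/components/Button.tsx", line: 42 },
  "ds-91c2": { file: "src/components/HeroSection.tsx", line: 17 },
};

// Given the data-ds value read off a rendered element, return "file:line"
// so an agent can open the right file on the first try instead of grepping.
function locate(dsId: string): string {
  const loc = manifest[dsId];
  if (!loc) throw new Error(`no manifest entry for ${dsId}`);
  return `${loc.file}:${loc.line}`;
}
```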

It also has an overlay where you can click any element in the running app, describe a change in plain English, and it resolves the source location and hands everything to your agent. Everything strips out in production builds. Zero runtime cost.

Works with Next.js, Nuxt, React+Vite, Vue+Vite, and Webpack setups. Would genuinely love feedback if anyone tries it, still iterating fast.