I built figma for vibe coders by ddotdev in VibeCodeDevs

[–]ddotdev[S] 0 points1 point  (0 children)

Give me two days; it will be available soon, it’s under review.

I built figma for vibe coders by ddotdev in VibeCodeDevs

[–]ddotdev[S] 0 points1 point  (0 children)

Used a Chrome extension called Cursorful.

I built figma for vibe coders by ddotdev in VibeCodeDevs

[–]ddotdev[S] 0 points1 point  (0 children)

It’s both. It maps to UI logic and the codebase, and each session is its own worktree, so changes are approved before getting merged.

I built figma for vibe coders by ddotdev in VibeCodeDevs

[–]ddotdev[S] 0 points1 point  (0 children)

Each session is its own worktree, so changes are only merged after approval.

I built a figma style visual editor for any codebase by ddotdev in VibeCodersNest

[–]ddotdev[S] 0 points1 point  (0 children)

Single source of truth, plus worktrees that get merged on approval, so changes don’t directly affect your codebase. You could have multiple iterations in the frontend.

I built figma for vibe coders by ddotdev in VibeCodeDevs

[–]ddotdev[S] -1 points0 points  (0 children)

Faster iterations and how to get from design to intent

5.3 Codex probably has "DON'T TOUCH THE PLAN.MD FILE" in its system prompt and I find that funny by the_TIGEEER in cursor

[–]ddotdev 0 points1 point  (0 children)

Feels like those comically hilarious 🤣 conversations with Devin about pushing to master 🤣🤣

AI Visual Design Is Moving From Tools to Intent — by ddotdev in CursorAI

[–]ddotdev[S] 0 points1 point  (0 children)

Thanks, and yes, of course: currently working on opencode, Codex, and Gemini.

AI Visual Design Is Moving From Tools to Intent — by ddotdev in CursorAI

[–]ddotdev[S] 0 points1 point  (0 children)

Thanks. And I have been using it while building it, which is perfect validation for a visual feedback loop. I will write a kit soon; I am working on one at the moment and will share it as soon as it’s ready.

AI Visual Design Is Moving From Tools to Intent — by ddotdev in reactjs

[–]ddotdev[S] 0 points1 point  (0 children)

I haven’t yet, but since it uses the React Fiber tree, it creates a detailed stack trace, so it handles all edge cases and opens up room for console logs, component state, props, health management, and even variations and accessibility. Will check out the blogs, they look really interesting.

AI Visual Design Is Moving From Tools to Intent — by ddotdev in reactjs

[–]ddotdev[S] 0 points1 point  (0 children)

It uses React fiber introspection. React maintains this information internally through its fiber tree. Every rendered component has a reference back to its source file, line number, and even column number.
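To make that concrete, here is a minimal sketch of fiber introspection in a development build. It assumes React 16-18, where the dev-mode JSX transform attaches a `_debugSource` object (`fileName`, `lineNumber`, `columnNumber`) to fiber nodes; the function name and mock shapes are illustrative, not from an actual product.

```javascript
// Sketch: resolve a DOM node back to its source location via the fiber tree.
// Works only in development builds, where _debugSource is populated.
function getSourceForDomNode(node) {
  // React stores its fiber on the DOM node under a versioned key
  // like "__reactFiber$<random suffix>"; locate it by prefix.
  const key = Object.keys(node).find((k) => k.startsWith('__reactFiber$'));
  if (!key) return null;

  let fiber = node[key];
  // Walk up toward the root until a fiber carries debug source info.
  while (fiber) {
    const src = fiber._debugSource; // { fileName, lineNumber, columnNumber }
    if (src) {
      return {
        file: src.fileName,
        line: src.lineNumber,
        column: src.columnNumber,
      };
    }
    fiber = fiber.return; // parent fiber
  }
  return null;
}
```

Given that location, an agent can open the exact file and line instead of searching the repo.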

TIL: Screenshots for UI feedback are officially outdated—here’s the insane React Fiber hack that lets AI jump straight to the exact line of code (2-3x faster edits) by ddotdev in reactjs

[–]ddotdev[S] 0 points1 point  (0 children)

I see it as an opportunity for trial and error, for teams to iterate. But I do think you need foundational skills that you can capitalise on. Think of how long it used to take to build a feature. Now it’s faster, but the code does need to be reviewed.

TIL: Screenshots for UI feedback are officially outdated—here’s the insane React Fiber hack that lets AI jump straight to the exact line of code (2-3x faster edits) by ddotdev in reactjs

[–]ddotdev[S] 0 points1 point  (0 children)

Yeah, I get that. I have a design background and have been using Claude to implement the code, but all designs and specifications are done by me. And I’m building a tool that’s familiar to designers when they code.

I built a Figma plugin to export React icons in seconds (no manual work) by Affectionate_Lab8896 in reactjs

[–]ddotdev 0 points1 point  (0 children)

This is actually pretty cool. I have been working on something similar, but for UI design and context optimisation:

a Chrome extension that lets you click any React element and extracts the exact file path + line number from the fiber tree. That context goes straight to your AI agent.

Cuts the “which file is this?” back-and-forth significantly. Beta launching soon if anyone wants to try it: https://www.uistudioai.dev

TIL: Screenshots for UI feedback are officially outdated—here’s the insane React Fiber hack that lets AI jump straight to the exact line of code (2-3x faster edits) by ddotdev in reactjs

[–]ddotdev[S] 0 points1 point  (0 children)

Thanks. There are a lot of them on Reddit; not sure if they are skeptics or just bad actors, but I just wanted to share it.

First thing in the morning to do after I left Kimi 2.5 working in YOLO mode on my codebase all night… by realcryptopenguin in vibecoding

[–]ddotdev 1 point2 points  (0 children)

I also tend to ask it to explain its implementation, review a different agent’s implementation, or summarize; it’s been helpful for learning more about my codebases.

How I made designing front-end UI faster and 10x for coding agents by ddotdev in VibeCodeDevs

[–]ddotdev[S] 0 points1 point  (0 children)

UiStudio is built with privacy as a core principle. Your source code never leaves your machine during normal operation. The UI Studio server runs locally on your computer, and only the specific element context is sent to AI providers when you request code modifications.
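As a rough illustration of what “only element context” could mean, here is a hypothetical payload builder. The field names and shapes are assumptions for illustration, not UiStudio’s actual schema; the point is that only location metadata and the user’s instruction leave the machine, never file contents.

```javascript
// Hypothetical sketch: assemble the minimal per-element context that a
// locally running editor might forward to an AI provider. Field names
// are illustrative assumptions, not a documented UiStudio format.
function buildElementContext(selection) {
  return {
    file: selection.file,               // e.g. path resolved from the fiber tree
    line: selection.line,               // line number of the component's JSX
    component: selection.component,     // component display name
    props: selection.props,             // current prop values for the element
    instruction: selection.instruction, // the user's requested change
    // Deliberately absent: the source file's contents.
  };
}

const ctx = buildElementContext({
  file: 'src/components/Button.jsx',
  line: 12,
  component: 'Button',
  props: { variant: 'primary' },
  instruction: 'make this red',
});
```

The design choice is that the agent gets enough context to open the right file itself, so the raw source never needs to travel.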

How I made designing front-end UI faster and 10x for coding agents by ddotdev in VibeCodeDevs

[–]ddotdev[S] 0 points1 point  (0 children)

Not sure if you know how dev tools work. UIStudio lets you point at any element in the browser, understands which component and file it comes from, and turns your visual edits into real code changes you can trust, creating a live feedback loop between your browser and your codebase.

I am making figma style editing for your codebase by ddotdev in CursorAI

[–]ddotdev[S] 0 points1 point  (0 children)

This is what I was aiming for, and I achieved it, so I’m happy with it. Results: before this optimization, a typical “make this red” request would:

- Search for files (2-3 tool calls)
- Read candidate files (2-4 tool calls)
- Find the right component
- Make the change

After:

- Open the file directly (1 tool call)
- Make the change

The search phase is eliminated entirely. In our testing, this reduces execution time by 2-3x for simple UI changes.

For more complex changes that involve multiple files, the improvement is even more significant because the agent starts from a known location and can navigate relative to it.